WDIC: AI Ethics

Sean Sukonnik
7 min read · Sep 26, 2022

Throughout the development of AI as a science and as a field of practice, we have come across many problems: technical, social, and structural. Some of the "low-hanging" issues have already been solved, and while a number of the more immediate ones still stand, humanity, and especially AI professionals, are turning to the questions that arise as AI becomes an essential part of our daily lives and routines. One of the most obvious such questions is the topic of today's WDIC: the problem of AI Ethics.

History of the problem

The history of AI ethics is a long and complicated one. It began with the early days of artificial intelligence research when scientists were first trying to figure out how to create machines that could think and learn like humans. This was a difficult task, and many of the early attempts at creating AI ended in failure. However, some success was achieved, and as AI began to become more sophisticated, ethically minded people started to worry about the implications of this new technology.

There were several key events that shaped the debate around AI ethics. One was the publication of Isaac Asimov's famous "I, Robot" in 1950, a collection of short stories set in a future where robots are commonplace and programmed to follow specific ethical rules, the Three Laws of Robotics. In the stories, those rules repeatedly run into edge cases and contradictions, sometimes with disastrous consequences, and the book helped raise public awareness of the potential risks of artificial intelligence. Asimov can, to an extent, be called the godfather of the modern view of AI and robotics: in his tales (like "I, Robot" and others), he expresses his views on and aspirations for what humanity might achieve, in both hopeful and grimmer fashion.

Another important event occurred in 1997, when IBM's Deep Blue computer defeated the reigning world chess champion, Garry Kasparov. This was seen as a major milestone for artificial intelligence: it showed that computers could now out-think human beings at some tasks. Some people began to worry that computers might eventually become more intelligent than humans and threaten our species.

Current affairs

In recent years, there has been an increased focus on the ethical implications of artificial intelligence. This is partly due to the rapid advance of AI technologies, which has led to concerns about their impact on society and the economy. There are also worries about how AI will be used in future military conflicts. As more countries develop sophisticated AI systems, there is a risk that they could be used to launch devastating attacks against each other without any human involvement.

The debate around AI ethics is likely to continue for many years to come. It is an important issue that needs to be carefully considered by both policymakers and the general public.

The study of AI ethics today examines the ethical implications of artificial intelligence and its impact on society. It is a relatively new field that is still evolving, and there is no consensus on how to approach it. There are many different stakeholders involved in AI, including government agencies, tech companies, researchers, and the general public. Each of these groups has different values and interests that need to be considered when making decisions about AI, which makes AI-centric decision-making incredibly difficult and gruelling, with each stakeholder pulling the theoretical "rug" in a different direction.

There have been some significant developments in AI ethics in recent years. In 2019, the European Union released its Ethics Guidelines for Trustworthy AI, intended to help ensure that AI is developed and used in a way that is ethically responsible and beneficial to society. In 2018, Google published its own AI Principles, covering values such as fairness, safety, accountability, and privacy. Today, companies worldwide follow such international AI-ethics guidelines and also apply their own local rules to regulate how AI is built, allowing their AI engineers to tinker and adapt as much as they want, as long as they stay within the rules put in place for humanity's safety.

Schools of thought

Consequentialism is the belief that the morality of an action should be based on its consequences. The most well-known form of consequentialism is utilitarianism, which holds that an action is right if it leads to the greatest happiness for the greatest number of people. Deontology, on the other hand, is the belief that there are certain moral laws that are independent of their consequences. So, an action can be considered morally right or wrong regardless of its outcomes.

The debate between consequentialism and deontology in AI ethics has largely been shaped by the work of two philosophers: Philippa Foot and Immanuel Kant. In her 1967 paper "The Problem of Abortion and the Doctrine of the Double Effect," Foot introduced the thought experiment now known as the trolley problem, which asks whether harming a few can be justified by sparing many. Consequentialist reasoning of this kind was later pushed further by Peter Singer, most famously in his work on animal welfare. Kant, meanwhile, argued that some actions (such as lying or breaking a promise) are intrinsically wrong and cannot be justified even if they lead to good outcomes.

There is no easy answer to the question of which approach is better when it comes to AI ethics. Consequentialists tend to focus on the outcomes of actions, while deontologists focus on the intrinsic moral value of those actions. Both approaches have their merits, but ultimately it may come down to a matter of personal preference.
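
To make the contrast slightly more concrete, here is a minimal, purely illustrative Python sketch (all the action names, utility numbers, and rules below are invented for the example, not drawn from any real system): a consequentialist evaluator simply picks the action with the best predicted outcome, while a deontological evaluator first throws out any action that breaks a fixed rule, whatever its payoff.

```python
# Toy illustration of the two schools of thought, not a real decision system.
# All actions, utilities, and rules below are invented for the example.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_utility: float   # net benefit the action is predicted to produce
    violates_rules: bool      # breaks a fixed moral rule (e.g. deceives a user)

def consequentialist_choice(actions):
    """Pick the action with the best predicted outcome, rules aside."""
    return max(actions, key=lambda a: a.expected_utility)

def deontological_choice(actions):
    """Discard rule-breaking actions first, then pick among what's left."""
    permitted = [a for a in actions if not a.violates_rules]
    return max(permitted, key=lambda a: a.expected_utility) if permitted else None

options = [
    Action("mislead the user to keep them engaged", expected_utility=8.0, violates_rules=True),
    Action("answer honestly, even if they log off", expected_utility=5.0, violates_rules=False),
]

print(consequentialist_choice(options).name)  # picks the high-utility, rule-breaking action
print(deontological_choice(options).name)     # picks the honest action
```

Real systems blend the two, of course, but the toy example shows why the two schools can disagree about the very same set of options.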

What exactly are we scared of?

The main problems and controversies of AI Ethics are:

  1. AI technology is becoming increasingly advanced and life-like, and the ethical implications of that are the key concern, something people are increasingly aware of and scared of. Even though it is generally held that we cannot yet create human-like AI (I'll touch on AGI, artificial general intelligence, further on in my writing), people are unsettled by how human the systems can seem. This even led to a debacle at Google, where one of its software engineers claimed that the AI he worked on had "become sentient" and now behaved like a real person. That claim set off a new round of the AI-sentience debate, but ultimately the majority agreed that either the engineer had overworked himself or it was a PR stunt from Google.
  2. The impact of AI on the workforce, with many jobs at risk of being automated by robots in the future. This is an interesting one, as nobody can really predict what AI replacing jobs would mean. A lot of people assume it would simply result in job losses, but there are numerous other schools of thought: Packy McCormick of Not Boring, for example, argues that the average person is already expected to do more work than physically fits into a 24-hour day, and that handing tasks over to robots is the logical continuation of that process, freeing humans up for new positions and other operator roles. Again, ideas vary on this one.
  3. AI for surveillance and control, including facial recognition technology and predictive policing. This one is more a problem of recognising AI bias, because I see nothing terrible in predictive policing as such: if there is less crime on the streets, I am happy, especially given that I live in quite a rough neighbourhood where not a single evening passes without 5–7 fights and countless police sirens chasing thieves, drunkards and criminals.
  4. The potential for AI to be used in weapon systems and other military applications. That's something we've all seen in movies, and it scares the bejesus out of us, for good reason: nobody wants to be in a war that an AI started, especially if said AI started it not for you but against you, because that implies everything has been calculated and you are fated to lose. AIs don't do irrational things, so one wouldn't start a war it cannot win. Wars are scary, kids, don't do them.
  5. The danger of AI technology becoming uncontrollable or self-aware, as depicted in popular culture such as the Terminator films. Yeah, well, nothing much to say here: a sentient AI that could wipe us off the face of the earth is scary, and there is not much we can do about it.

Home reading

There are many popular writers who have discussed AI ethics in their work. Some notable examples include Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, which explores the risks of artificial intelligence becoming smarter than humans; Eliezer Yudkowsky's Rationality: From AI to Zombies, which discusses the importance of rationality in decision-making; and Kate Crawford's Atlas of AI, which examines the social, political, and environmental costs of artificial intelligence technology.

If you don't feel like reading books, or have something against these authors on principle, here's a list of good articles on the topic that I found on the internet:

That about covers it. I hope you enjoyed the read! What are your thoughts on the issue? Have you ever encountered issues with AI ethics at your job or in daily life?

Sean Sukonnik

I'm Sean, and as a student of Bayes I write on all things economics, VC, startups, and marketing. I can be found under @VaguelyProf on Twitter.