WDIC: AI

Sean Sukonnik
4 min read · Aug 13, 2022
(Cover image caption: it’s an AI-generated image, but that’s a whole other story.)

Where are we?

Over the last four months, I have worked with and around some of the most competent people in their respective fields of tech in Europe, and while it is flattering that they assume I understand everything, I don’t. With that in mind, I decided to start a weekly article marathon, WDIC (Why Do I Care?) — a series of articles explaining the buzzwords and fields of science and tech that I hear about often but have only a vague idea of what they really mean. And I want to start with the one that rules them all — AI, or Artificial Intelligence.

What?

Artificial intelligence, or AI for short, is a term that people use to describe machines that can think like humans. To be precise, they cannot actually “think” like humans (that would be AGI, and we’ll talk about it later on), but they can emulate human thinking remarkably well, especially the latest systems. John McCarthy, a legendary computer scientist who later led Stanford’s AI lab, coined the term in the mid-1950s, almost 70 years ago, as a straightforward way of describing to his colleagues the kind of machine intelligence we see today. Since then, the term has been gaining popularity and the field has been gaining momentum. Today, AI is present at some level in every part of our lives, from facial recognition to computers deciding which towels to sell to you at your local store.

So what?

“Well,” you can say, “some computers can think and do fun stuff, so what?” That was my attitude a few years ago — fascinating in theory, but I didn’t really think about the potential applications. There are several fields of AI today that attract the most interest from both academics and investors, and they do, in fact, change our lives on a daily basis. Each of those fields will get a separate article later on:

  1. Machine learning: A field of AI that deals with the construction and study of algorithms that can learn from data (see the short sketch after this list).
  2. Deep learning: A subfield of machine learning that deals with algorithms, called artificial neural networks, inspired by the structure and function of the brain.
  3. Natural language processing: A field of AI that deals with the ability of computers to understand human language and respond in a way that is natural for humans.
  4. Robotics: The study and design of robots, which are machines that can be used to automate tasks or perform humanlike functions.
  5. Computer vision: The ability of computers to interpret and understand digital images.
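
To make “learning from data” a little more concrete, here is a tiny, purely illustrative sketch in Python using the scikit-learn library. The towel-shopping numbers are invented for the example; this is not a real dataset or a real recommendation system.

```python
# A minimal sketch of machine learning: a model "learns" from a handful of
# made-up examples whether a shopper is likely to buy a beach towel.
from sklearn.tree import DecisionTreeClassifier

# Each row describes one past shopper: [outside temperature in °C, minutes spent in the store]
X = [[30, 15], [28, 40], [10, 5], [12, 30], [33, 25], [8, 10]]
# 1 = bought a towel, 0 = did not
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)  # the algorithm infers a rule from the examples above

# Ask it about two shoppers it has never seen
print(model.predict([[29, 20]]))  # warm day, decent visit -> most likely [1]
print(model.predict([[9, 8]]))    # cold day, quick visit  -> most likely [0]
```

The point is the workflow, not the particular model: instead of hand-coding the rule, we show the algorithm examples and let it work the rule out. The fancier fields above follow the same idea, just with far bigger models and far more data.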

A few of the researchers behind recent profound breakthroughs help to show the scale of AI at the moment and why it is so cool:

  1. Yoshua Bengio: He is a computer scientist who specializes in machine learning. His research has led to the development of algorithms that can better understand and learn from data, making them more effective at tasks such as classification and prediction.
  2. Demis Hassabis: He is an artificial intelligence researcher and co-founder of DeepMind, which uses deep learning, a type of machine learning whose algorithms learn from data in a way that loosely mimics the workings of the human brain. Its systems have produced significant advances such as AlphaGo in the game of Go and AlphaFold in protein-structure prediction.
  3. Pieter Abbeel: He is a roboticist who helped pioneer apprenticeship learning via inverse reinforcement learning, an approach in which a robot infers what to optimize by observing human demonstrations, shaping its decisions to be more human-like.

Now what?

Okay, so we need to care about AI and about the people writing the code that moves the field forward, but what exactly can we expect in the near future that would make you say, “Wow, Sean. That’s some cool stuff”?

Here are some of the quality-of-life applications of AI that, I’d assume, we’ll see in the next 5–10 years:

  1. Smarter and more efficient search engines that understand the user’s needs better and return more accurate results. You’ll hear “you have to know how to Google” far less often, as Google will actually understand what you mean much better.
  2. More widespread use of voice-recognition technology, allowing people to interact with devices and systems using natural language instead of having to learn specific commands. It’s incredibly annoying to use Alexa, Alisa, Bixby and the rest right now, because nobody really remembers the commands and, frankly, nobody likes commands — we want to talk. And well, talk we will!
  3. Increased accuracy in predictive analytics, giving people better recommendations for products, services, and content based on their past behaviour. While advertising is annoying, it’s far better to see an ad that actually leads to something than one that just takes up 30% of your webpage.
  4. Development of autonomous vehicles capable of safely driving themselves without any input from a human driver. One South African gentleman seems to be thinking in the same direction as me, and in ten years the industry should be in much better shape, with more advanced tech and better algorithms.
  5. The proliferation of personal assistant robots in homes and workplaces, performing tasks such as laundry, cleaning, cooking, and basic childcare. Fallout meets Black Mirror here, but to me this sounds exceptionally nice — I’ve seen far too many cool movies where this happens not to want an assistant of my own.

Conclusion

That about covers it for AI. Of course, our path to understanding AI is far from over — both as a species and for you, readers, and me, but that’s a good first brick on this long and enjoyable road, isn’t it?

Feel free to share your opinions on the future of AI with me on any social media — I’d love to chat about the potential advantages the technology might bring us.

All yours,

Sean!


Sean Sukonnik

I'm Sean, and as a student of Bayes I write about all things economics, VC, startups, and marketing. I can be found as @VaguelyProf on Twitter.