The Fictions & The Facts of Artificial Intelligence

Alberto M. Chierici
5 min read · Sep 22, 2021

This is an excerpt from Chapter 2 of The Ethics of AI: Facts, Fictions, and Forecasts.

What People Think AI Is

Robotics companies, movies, and the media often exaggerate AI and its capabilities. For example, a thought-provoking article in Futurism describes a demo of the robot Sophia, which can interact with a human interrogator in a seemingly natural way. It can also smile and mirror common human facial expressions, so convincingly that the CEO of the company that created it said, “Oh yeah, she’s basically alive.”

When the public sees such demonstrations, it’s easy to be tricked into thinking that we already have the robots we have seen in science fiction movies, from the nearly century-old Metropolis to the more recent thriller Ex Machina. In fact, even experts in the field have fallen prey to the superstitions of science fiction.

A quick search for news about AI reveals how the press speaks about this technology. When you read headlines such as “We Are Still Smarter than Computers. For Now”, or “Police Drones Are Starting to Think for Themselves”, you may think that AI is a technology so powerful that it makes computers able to think and act autonomously.

Nancy Fulda, a computer scientist working on broader AI systems at Brigham Young University, told Futurism, “I am frequently entertained to see the way my research takes on exaggerated proportions as it progresses through the media. The whole thing is a bit like a game of ‘telephone’ — the technical details of the project get lost, and the system begins to seem self-willed and almost magical. At some point, I almost don’t recognize my own research anymore.”

As we make progress in this field, it’s crucial that we help the public understand what this technology actually is, so they can see the practical consequences it generates. Let’s start with where AI is currently being used.

Real-World AI Applications

One course that helped me understand the computer science side of AI came from Columbia University. I got to work in 2014 by registering for the class, offered free of charge on the online learning platform edX. In one of the first lessons, I learned about the search problem.

Search is a cornerstone problem in computer science. It is central to the design of algorithms and data structures, and to optimizing the speed and memory a computer uses. Search is also one of the problems to which AI brought a practical solution. It’s not by chance that the company at the forefront of AI development today is Google, a business that started as a search engine.
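To make the idea concrete, here is a minimal sketch of a search problem in Python: breadth-first search looking for a route through a small graph. The graph, names, and data are invented for illustration and are not taken from the course material.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a path from start to goal, exploring the graph level by level."""
    frontier = deque([[start]])   # queue of partial paths still to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # no route exists

# A toy map: which nodes are directly connected to which.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
print(breadth_first_search(graph, "A", "F"))  # ['A', 'B', 'D', 'F']
```

The same pattern, searching through possible states for one that satisfies a goal, underlies everything from route planning to game-playing programs; only the scale and the cleverness of the search strategy change.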

From my years of professional work and research in this field, I’ve learned that AI is all around us, influencing our day-to-day digital interactions in ways we may not even fully understand.

Many of the apps you might use every day, like Apple News, Facebook, Twitter, YouTube, Netflix, Amazon, or Spotify, filter the news and posts you may be more interested in reading. They also recommend the next video or movie you should watch, the next item you should buy, or the next song they think you’ll want to hear. Their recommendations aren’t random; in fact, they often feel spot on. How do they do that? They use AI.
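None of these companies disclose their production systems, but one common family of techniques is collaborative filtering: recommend items enjoyed by users whose history looks like yours. Below is a heavily simplified sketch of that idea in Python; the users, titles, and numbers are all invented for illustration.

```python
import math

# Toy watch-history matrix: 1 means the user finished the title, 0 means skipped.
history = {
    "alice": {"sci-fi_doc": 1, "space_drama": 1, "cooking_show": 0},
    "bob":   {"sci-fi_doc": 1, "space_drama": 1, "cooking_show": 1},
    "carol": {"sci-fi_doc": 0, "space_drama": 0, "cooking_show": 1},
}

def cosine(u, v):
    """Similarity between two users' viewing vectors."""
    items = set(u) | set(v)
    dot = sum(u.get(i, 0) * v.get(i, 0) for i in items)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def recommend(user, history):
    """Score titles the user hasn't watched by how much similar users watched them."""
    scores = {}
    for other, seen in history.items():
        if other == user:
            continue
        sim = cosine(history[user], seen)
        for title, watched in seen.items():
            if history[user].get(title, 0) == 0 and watched:
                scores[title] = scores.get(title, 0) + sim * watched
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", history))  # cooking_show surfaces because a similar user watched it
```

Real systems layer far more on top of this, from deep learning models to hand-tuned business rules, but the core bet is the same: your past behavior, plus the behavior of people like you, predicts what you will click next.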

Several mainstream news sites now offer personalized content to paid subscribers based on their interests. For instance, The New York Times home page you’re looking at in your browser right now may look very different from the one I see at the same moment, while the printed version is the same for all readers. Computer algorithms determine which pictures and content to show us based on our location, browsing history, and website activity, including clicks and time spent on a page. These algorithms are again a form of AI.

Such systems are a double-edged sword. On one side, they personalize content to make it more convenient and useful to you. Think about Twitter: could you possibly be interested in every single post made by the people you follow, or even have the time to read them all? You need some sort of filter, and if the filter is smart enough to figure out what you like, it saves you time. On the other side, though, these companies use personalized content to keep you glued to the platform. Netflix, Facebook, and YouTube are just a few of the companies that aim to increase the time you spend watching or interacting with your social network.

Guillaume Chaslot, a former Google software engineer who worked on the YouTube video recommendation algorithm, told The Guardian, “Watch time was the priority, everything else was considered a distraction.” This is also confirmed in an academic paper by Google engineers describing the impressive improvement deep neural networks brought to YouTube recommendations. The authors state, “Our goal is to predict expected watch time.” A similar technique boosted the Twitter timeline algorithm, the system that ranks which tweets you get to see first, as Twitter engineers Koumchatzky and Andryeyev explain in a blog post. The same post makes clear which metrics matter most to the business: “Online experiments have also shown significant increases in metrics such as tweet engagement and time spent on the platform. And as we’ve shared before during our previous earnings, the updated timeline has in part driven increases in both audience and engagement on Twitter.”
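To see why “predict expected watch time” shapes what you get shown, here is a toy stand-in for that ranking step. The real systems are deep neural networks trained on enormous logs; the features, weights, and candidate videos below are invented purely to illustrate the idea.

```python
def predicted_watch_minutes(features):
    # A hand-set linear score standing in for a trained model.
    return (5.0 * features["past_watch_ratio"]     # how much of similar videos you finished
            + 2.0 * features["topic_match"]        # overlap with your viewing history
            + 0.5 * features["video_length_minutes"] * features["topic_match"])

candidates = {
    "video_a": {"past_watch_ratio": 0.9, "topic_match": 0.8, "video_length_minutes": 12},
    "video_b": {"past_watch_ratio": 0.4, "topic_match": 0.9, "video_length_minutes": 45},
    "video_c": {"past_watch_ratio": 0.7, "topic_match": 0.1, "video_length_minutes": 8},
}

# The feed simply shows the highest predicted watch time first.
ranking = sorted(candidates, key=lambda v: predicted_watch_minutes(candidates[v]), reverse=True)
print(ranking)  # ['video_b', 'video_a', 'video_c']
```

Even in this toy version, the long, on-topic video wins the top slot simply because length drives watch time; whether it informs or misleads never enters the score. That incentive is exactly what the concerns below are about.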

While this may seem convenient to the average, everyday user, there are often unintended (and sadly, sometimes intended) consequences when AI is used to personalize content. Filter bubbles, echo chambers, and troll factories can develop and, if left unchecked, become a dangerous environment for hate and misinformation. To understand how these unintended consequences may happen, we must first define what AI is and what it is not.

In the rest of the chapter, I provide the definitions of AI accepted by academics and practitioners, and expand on what these definitions mean compared with the public's perception.

I hope you enjoyed the first post of my article series, where I share excerpts and stories from my book, The Ethics of AI: Facts, Fictions, and Forecasts. If you enjoyed it and want to connect, you can reach me via email or on social media: LinkedIn, Twitter. You can also find my book on Amazon.
