AI Meets Capital: Do We Need a New Economy?

Alberto M. Chierici
5 min read · Sep 13, 2021

Thanks to Jason Dent @jdent for making this photo available freely on Unsplash 🎁 https://unsplash.com/photos/P79ifvfxC0A

I hope you enjoyed the first post of my article series, where I share excerpts and stories from my book, The Ethics of AI: Facts, Fictions, and Forecasts. If you want to connect, you can reach me here via email or find me on social media: LinkedIn, Twitter. You can also find my book on Amazon; here is the link to buy it.

What is the difference between human progress and human development?

Let's start with a couple of basic definitions. A quick Google search tells us that progress is a “forward or onward movement toward a destination”. Development means “the process of developing: an event constituting a new stage in a changing situation”.

I have the impression that while we keep calling technological advancements “progress”, we actually mean development. Technology like AI is a development and is neither good nor bad.

When we speak about progress, we should state what the endgame is. What is the destination towards which we are moving, or at least tip-toeing?

With regard to AI, many big, hairy ambitions have been proposed: creating machines with minds, solving problems like cancer, and building a better future, whatever that means.

I am not sure we have seen any of that. At the time of writing my book, I unfortunately couldn't cover Google DeepMind's AlphaFold, a breakthrough system for predicting protein structures, whose predictions have been released as an open catalog. It was developed using AI techniques and has the potential to bring about advances in pharmaceuticals and cures for diseases. But it is too early to say.

The practical reality is that today's AI technologies are aligned with merely utilitarian, profit-making objectives. Here's an excerpt from the chapter of my book where I trace the historical development of AI.

Like many technological breakthroughs, AI was initially a philosophical and scientific endeavor until it became an economic propellant between the 1980s and the early twenty-first century.

The first successful commercial applications of AI took the form of so-called expert systems. These are programs where knowledge is hard-coded into a set of rules drawn from textbooks, direct experience, and interviews with a large number of subject matter experts. Each rule codifies a level of certainty for its specific application. The system then combines the certainties of the rules that fire to rank the most likely conclusions for a new scenario.

One of the first expert systems developed was in the area of medical diagnosis (Russell and Norvig, 2013). MYCIN was developed over five to six years in the early 1970s at Stanford University for diagnosing blood infections. Its creators hard-coded about 450 rules. The system performed as well as some experts, and considerably better than junior doctors. Despite its performance and strengths, MYCIN was never used in practice because, at the time, the state of technologies for integrating the system with existing tools was underdeveloped. Moreover, some observers raised ethical and legal issues related to the use of computers in medicine.
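To give a flavor of how such a system works, here is a minimal sketch, in Python rather than MYCIN's original Lisp, of rule-based diagnosis with certainty factors. The rules, findings, and numbers below are invented purely for illustration; only the formula for combining two positive certainty factors (CF = CF1 + CF2 × (1 − CF1)) follows MYCIN's actual scheme.

```python
# A toy MYCIN-style expert system: a few hand-written rules, each carrying a
# certainty factor (CF), combined to rank conclusions for a new case.
# The rules and CF values here are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Rule:
    conditions: frozenset  # findings that must all be present for the rule to fire
    conclusion: str        # diagnosis the rule supports
    cf: float              # certainty factor in (0, 1]


RULES = [
    Rule(frozenset({"fever", "gram_negative"}), "bacteremia", 0.7),
    Rule(frozenset({"fever", "burning_urination"}), "urinary_infection", 0.6),
    Rule(frozenset({"gram_negative", "hospitalized"}), "bacteremia", 0.4),
]


def combine_cf(cf1: float, cf2: float) -> float:
    """MYCIN's rule for merging two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)


def diagnose(findings: set) -> dict:
    """Fire every rule whose conditions are satisfied and merge their CFs."""
    belief = {}
    for rule in RULES:
        if rule.conditions <= findings:
            prior = belief.get(rule.conclusion, 0.0)
            belief[rule.conclusion] = combine_cf(prior, rule.cf)
    return dict(sorted(belief.items(), key=lambda kv: -kv[1]))


if __name__ == "__main__":
    print(diagnose({"fever", "gram_negative", "hospitalized"}))
    # -> {'bacteremia': 0.82}  (0.7 and 0.4 combined via 0.7 + 0.4 * (1 - 0.7))
```

Real systems like MYCIN carried hundreds of such rules and far richer condition matching, but the core idea is the same: knowledge lives in the rules, and inference is just firing and combining them.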

This new, robust automation level became an industry in the 1980s with the first commercially successful expert system, R1, deployed at the Digital Equipment Corporation. The program helped configure orders for new computer systems. It was saving the company about $40 million a year by 1986. Two years later, DEC’s AI group deployed forty expert systems and had more on the way. DuPont had one hundred in use and five hundred in development, saving an estimated $10 million a year. Nearly every major US corporation had its own AI group and was either using or investigating expert systems.

The commercial success of AI drove forward research and investment in R&D. Hundreds of companies were created in the fields of robotics, vision, language processing, and the software to support such applications. Many of these companies didn't live up to their hype and promises, even as too much capital and exuberance kept flowing into them, and in the 1990s the field went through an "AI winter" that ran in parallel with the internet's rise and culminated in the dot-com bubble.

The lesson here is that commercial AI, like many other technologies, was developed and built mainly to satisfy economic goals. The expert systems created considerable savings for businesses. Still, we had to wait for the personal computer's maturity and the omnipresence of the internet, which smartphones and wearable devices brought about in the first decade of the twenty-first century, to see a second explosion in AI adoption.

This time, the main forces of development were data, capital, and socially engineered trends.

AI not only helped cut costs and increase productivity — this technology was directly responsible for hyper-growing user numbers, revenues, and companies’ valuations. This was the ideal type of business for venture capital firms to target: companies that can grow ten or twenty times in less than ten years.

Google search is a direct application of AI. Artificial intelligence techniques are responsible for making the search algorithm work better and faster. The better the algorithm worked, the more users it got.

Facebook, LinkedIn, and other social networks use AI to suggest connections, hence expanding the network and the user base.

Amazon, Spotify, and Netflix use recommendation algorithms (developed using AI techniques) to increase sales, usage, and hours of listening and watching.
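None of these companies publishes its production recommenders, but a toy item-based collaborative filtering sketch, with ratings invented for illustration, gives a sense of the basic mechanism: score the items a user hasn't rated by how similar they are to the items the user already rated highly.

```python
# A toy item-based collaborative-filtering recommender. Nothing like the
# production systems at Amazon, Spotify, or Netflix; the ratings matrix and
# similarity measure are chosen only to illustrate the idea.
import numpy as np

# Rows = users, columns = items; entries are ratings (0 = unrated).
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)


def item_similarity(r: np.ndarray) -> np.ndarray:
    """Cosine similarity between item (column) rating vectors."""
    norms = np.linalg.norm(r, axis=0, keepdims=True)
    norms[norms == 0] = 1.0           # avoid division by zero for unrated items
    normalized = r / norms
    return normalized.T @ normalized


def recommend(user: int, r: np.ndarray, top_n: int = 2) -> list:
    """Score unrated items by similarity-weighted sums of the user's ratings."""
    sim = item_similarity(r)
    scores = sim @ r[user]            # weighted sum over the items the user rated
    scores[r[user] > 0] = -np.inf     # never re-recommend already-rated items
    return [int(i) for i in np.argsort(scores)[::-1][:top_n]]


print(recommend(user=0, r=ratings))   # -> [2, 3] for this toy matrix
```

The commercial point stands regardless of the algorithm's details: every extra recommendation accepted is extra usage, extra sales, or extra hours watched.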

It's no wonder the majority of these internet companies have an ad-based business model at their core. They offer a product that is free to use, and they make money by selling the best display space to advertisers.

A platform that uses AI to capture hours and hours of people's time accumulates both people's attention and a lot of valuable data. Brands, in turn, can use AI techniques themselves to hyper-target their ads. Tristan Harris calls this economic development "the attention economy" in the documentary The Social Dilemma.

I explore this in the rest of the book and find that the problem isn't necessarily that AI satisfies economic goals. The issue is whether the economic goal itself is morally viable. And to understand that, I explore further the anthropological view underlying classical economics.

If the human person is conceived individualistically, as a rational decision-maker whose only criterion is self-utility, like a robot, it is no wonder we have many developments that don't seem like progress at all. Perhaps we need to redefine our economic model, but that can only start from an all-encompassing, not oversimplified, model of the human person.

References

  • Russell, Stuart, and Peter Norvig. Artificial Intelligence: A Modern Approach. London: Pearson Education Limited, 2013.
