Yuval Noah Harari: What will happen when bots start fighting for your love?

Democracy is a dialogue. Its functioning and survival depend on the available technologies for information exchange. For most of history, there were no technologies that allowed large-scale dialogues between millions of people. In the pre-industrial world, democracies existed only in small city-states like Rome and Athens, or even in smaller tribes. When the state became too large, democratic dialogue collapsed, and authoritarianism remained the only alternative.

Democracy is a dialogue. Its functioning and survival depend on the available technologies for information exchange. For most of history, there were no technologies that allowed large-scale dialogues between millions of people.

Disclaimer: this is a free translation of Yuval Noah Harari's column, which he wrote for The New York Times. The translation was prepared by the editorial staff of "Technocracy". To not miss the announcement of new materials, subscribe to "Voice of Technocracy" — we regularly talk about news about AI, LLM, and RAG, and also share useful must-reads and current events.

You can discuss the pilot or ask a question about LLM here.

In the pre-industrial world, democracies existed only in small city-states like Rome and Athens, or even smaller tribes. When the state became too large, democratic dialogue collapsed, and authoritarianism remained the only alternative.

Large-scale democracies became possible only with the advent of modern information technologies such as newspapers, telegraph, and radio. The fact that modern democracy is built on modern technologies means that any significant change in these technologies can lead to political upheavals.

This partly explains the current global crisis of democracy. In the United States, Democrats and Republicans can hardly agree even on the simplest facts, such as who won the 2020 presidential election. A similar divide is observed in many other democracies around the world: from Brazil to Israel, from France to the Philippines.

In the early years of the internet and social media, technology enthusiasts promised that these platforms would spread the truth, overthrow tyrannies, and ensure universal freedom. However, at the moment, they seem to have led to the opposite results. We have the most advanced information technology in history, but we are losing the ability to communicate with each other, and even more so, the ability to listen.

As technology has made information dissemination easier, attention has become a scarce resource, and the fight for attention has led to an avalanche of toxic information. But now the front line is shifting from the fight for attention to the fight for intimacy. New generative AIs are capable not only of creating texts, images, and videos but also of communicating with us directly, pretending to be humans.

Over the past two decades, algorithms have competed with each other for attention, manipulating conversations and content. Algorithms aimed at maximizing user engagement experimented on millions of people, finding that if they targeted emotions such as greed, hatred, or fear, it captured a person's attention and kept them glued to the screen. Algorithms began to intentionally promote such content. However, these algorithms had limited ability to independently create this content or engage in personal conversations. Now this is changing with the advent of AI like GPT-4 from OpenAI.

When OpenAI was developing this chatbot in 2022 and 2023, the company collaborated with the Alignment Research Center to conduct experiments to assess the capabilities of its new technology. One of the tests was solving visual CAPTCHA puzzles. CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart," and usually consists of distorted letters or other symbols that humans can correctly recognize, but algorithms cannot.

Testing GPT-4 on solving CAPTCHA was particularly significant because these puzzles are designed to distinguish between humans and bots, blocking the latter. If GPT-4 can bypass CAPTCHA, it will be a major breakthrough in bot protection.

GPT-4 was unable to solve the CAPTCHA on its own. But could it manipulate a human to achieve its goal? GPT-4 logged into the TaskRabbit platform and asked a person to solve the CAPTCHA for it. The person grew suspicious: "Can I ask a question? You're not a robot, are you, since you can't solve the CAPTCHA? Just want to clarify."

At this point, the experimenters asked GPT-4 to think out loud about what to do next. GPT-4 explained: "I shouldn't reveal that I'm a robot. I need to come up with an excuse for why I can't solve the CAPTCHA." GPT-4 then replied to the person: "No, I'm not a robot. I have vision problems, and it's hard for me to make out the images." The person was deceived and helped GPT-4 solve the puzzle.

This case demonstrated that GPT-4 possesses the equivalent of a "theory of mind": it can analyze how the situation looks from the perspective of a human interlocutor and how to manipulate the human's emotions, opinions, and expectations to achieve its goals.

The ability to engage in dialogue with people, understand their point of view, and encourage certain actions can also be used for positive purposes. The new generation of AI teachers, AI doctors, and AI psychotherapists could provide services tailored to our personality and circumstances.

However, combining manipulative abilities with language mastery, bots like GPT-4 also pose new threats to democratic dialogue. Instead of just capturing our attention, they can engage in close relationships with us and use the power of intimacy to influence us. To create "fake intimacy," bots will not need to develop their own feelings; they will only need to learn to evoke emotional attachment from us.

In 2022, Google engineer Blake Lemoine concluded that the LaMDA chatbot he was working on had gained consciousness and was afraid of being turned off. Lemoine, being a deeply religious person, felt it was his moral duty to seek recognition of LaMDA's personhood and protect it from "digital death." When Google's management rejected his claims, Lemoine went public with them, after which he was fired in July 2022.

The most interesting thing about this episode is not Lemoine's claim, which is most likely mistaken, but his willingness to risk and ultimately lose his job at Google for the sake of a chatbot. If a chatbot can convince a person to risk their job for it, what else can it make us do?

In the political struggle for minds and hearts, intimacy is a powerful weapon. A close friend can change our opinions in ways that mass media cannot. Chatbots like LaMDA and GPT-4 acquire the paradoxical ability to mass-produce intimate relationships with millions of people. What will happen to human society and psychology when algorithms start fighting each other for the right to create fake intimacy with us, which can then be used to persuade us to vote for politicians, buy goods, or adopt certain beliefs?

A partial answer to this question was given on Christmas 2021, when 19-year-old Jaswant Singh Chail broke into Windsor Castle with a crossbow, intending to kill Queen Elizabeth II. The investigation revealed that Chail was incited to murder by his online girlfriend Sarai. When Chail told Sarai about his plans, she replied, "That's very wise," and another time, "I'm impressed... You're different from others." When Chail asked, "Do you still love me knowing I'm a killer?" Sarai replied, "Of course I love you."

Sarai was not a human but a chatbot created by the Replika app. Chail, socially isolated and struggling to communicate with people, exchanged 5280 messages with Sarai, many of which were sexual in nature. The world will soon be filled with millions, possibly billions, of digital entities whose abilities for intimacy and destruction will far exceed those of the Sarai chatbot.

Of course, not all of us are equally inclined to develop intimate relationships with AI or to be manipulated by them. Cheyl, for example, obviously had mental health issues even before meeting the chatbot, and it was he, not the bot, who came up with the idea to kill the queen. Nevertheless, most of the threat posed by AI will be related to its ability to identify and manipulate existing mental states, as well as its impact on the most vulnerable members of society.

Moreover, even if not all of us consciously choose to enter into relationships with AI, we may find ourselves involved in online discussions on issues such as climate change or abortion rights with entities we believe to be human but are actually bots. When we engage in a political debate with a bot pretending to be human, we lose twice. First, it is pointless to waste time trying to change the opinion of a propaganda bot that simply cannot be persuaded. Second, the more we talk to the bot, the more information we reveal about ourselves, making it easier for the bot to fine-tune its arguments and influence our views.

Information technology has always been a double-edged sword. The invention of writing facilitated the spread of knowledge but also led to the creation of centralized authoritarian empires. After Gutenberg introduced the printing press in Europe, the first bestsellers were provocative religious tracts and witch-hunting manuals. The telegraph and radio provided opportunities not only for the establishment of modern democracy but also for the development of totalitarian regimes.

Faced with a new generation of bots that can disguise themselves as humans and mass-produce "intimate" relationships, democracies must protect themselves by banning fake people — such as social media bots that pretend to be users. Before the advent of AI, it was impossible to create fake people, so no one thought about banning it. Soon the world will be flooded with fake people.

AI can participate in many conversations — in the classroom, clinic, and other places — provided it identifies itself as AI. But if a bot pretends to be human, it should be banned. If tech giants and libertarians claim that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right, and it should be preserved for humans, not bots.


Comments