How AI Will Really Change People

While the world debates whether AI will replace programmers and writers, it is already quietly studying us. Every day, millions of people leave emotional traces in their texts, and AI is learning to read those traces better than psychologists do. We discuss unemployment, scientific breakthroughs, and whether our cognitive skills are degrading or developing. Psychologists talk about codependency; teachers warn about digital addiction. The picture that emerges isn't always bright, but it is more or less clear and consistent.

Which makes it all the more surprising that one aspect of AI, one that will radically transform human relationships in the near future, is barely being discussed.

Emotional Archaeology

Models, as everyone knows, are trained on millions of texts. Because humanity's emotional patterns are so universal, they are effectively baked into the AI's core. Modern models like Claude or GPT can analyze not only explicit emotions but also hidden motives, passive aggression, and manipulative techniques. From comments, meeting transcripts, negotiation records, social media posts, letters, and messenger chats, AI builds an emotional portrait of a person, maps the connections between people, and makes behavioral predictions.
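
For illustration, here is a minimal sketch of how such an analysis can be requested via the Anthropic Python SDK. The prompt, the model name, and the example comment are illustrative stand-ins, not the working prompts mentioned in the P.S.

```python
# Minimal sketch: asking an LLM for an emotional portrait of a text.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. The prompt below is a
# simplified stand-in, not the author's working prompt.
import anthropic

ANALYSIS_PROMPT = """Analyze the emotional subtext of the text below.
Report: (1) the dominant emotional state, (2) hidden techniques such as
passive aggression or false self-irony, (3) probable hidden motives.

Text:
{text}"""

def emotional_portrait(text: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from env
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # any recent model would do
        max_tokens=1024,
        messages=[{"role": "user", "content": ANALYSIS_PROMPT.format(text=text)}],
    )
    return response.content[0].text

if __name__ == "__main__":
    comment = "Well, I'm no expert like our author, but even I can see..."
    print(emotional_portrait(comment))
```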

The accuracy of these systems already reaches 70-85%, depending on the task (Nature Communications Psychology, 2025). By comparison, the average person correctly identifies an interlocutor's emotions in 60-65% of cases. AI already reads us better than we read each other.

Anatomy of a Digital Autopsy

Let's look at how this works in practice. Here is an example: Claude 4.0, given an analysis prompt, dissecting a comment thread under a technical article on tekkix:

EMOTIONAL PORTRAIT OF THE COMMENTER:

Psychological state: "Intellectual envy in technical packaging"

Hidden techniques:

1. FALSE SELF-IRONY AS ATTACK: "________" — self-deprecation to elevate oneself over the “arrogant” author

2. PROFESSIONAL DIMINISHMENT: "_________" — devaluing achievements by referencing external sources

3. TECHNICAL PEDANTRY AS SABOTAGE: A flood of detailed questions about configurations, steering away from the main topic

HIDDEN MOTIVES: Envy of the author’s resources, a need for self-affirmation by belittling another, a desire to discredit in the eyes of the community

AI dispassionately exposes emotional patterns, leaving it to the human to decide whether or not to continue the conversation. This seems like a fairly harmless use; after all, not everyone has a well-developed sense of empathy.

Corporate Emotional Audit

The next level is team interaction analysis. From the transcript of an hour-long meeting, AI determines the participants' emotional roles, hidden conflicts, and pain points, and gives specific recommendations.

At first glance, it seems harmless, but imagine a manager who regularly receives this kind of information from AI:

RESULTS: The team suffers from functional disorganization—competent members display systemic helplessness due to a destructive loop of "impulsive planning → technical resistance → increased pressure".

THE WEAKEST MEMBER: Oleg Antonov—shows expert helplessness with active avoidance of responsibility, creating a competence vacuum in a critically important area.

In such a situation, Oleg's career prospects look bleak. And once the team lead starts receiving real-time tips from AI on optimal meeting management, manipulating participants, and neutralizing resistance, Oleg may have no choice but to change. That part is easy to implement too, as the sketch below shows.
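
Here is a minimal sketch of such a "meeting copilot," again using the Anthropic SDK. The coaching prompt and the chunk loop are illustrative assumptions; in a real system the chunks would come from a speech-to-text stream.

```python
# Minimal sketch of a "meeting copilot": after each new transcript chunk,
# the team lead receives one tactical tip. The SDK calls are real; the
# prompt and the transcript source are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from env

def live_tip(transcript_so_far: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=200,
        system=("You advise a team lead during a live meeting. Given the "
                "transcript so far, suggest ONE short tactic for steering "
                "the discussion and neutralizing resistance."),
        messages=[{"role": "user", "content": transcript_so_far}],
    )
    return response.content[0].text

# Hypothetical usage: in practice the chunks would arrive from a
# real-time speech-to-text pipeline.
transcript = ""
for chunk in ["LEAD: We ship on Friday.", "OLEG: The cluster isn't ready..."]:
    transcript += chunk + "\n"
    print(live_tip(transcript))
```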

For ethical reasons, the team analysis shown here is based on a generated transcript rather than a real one.

The State as Psychoanalyst

Artificial intelligence doesn't care what it analyzes: a personal chat or a government meeting. Modern systems are already capable of detecting the emotional patterns of power, predicting political decisions, and uncovering hidden conflicts among elites. Here, for example, is an emotional profile built from the analysis of a meeting transcript:

ALEKSANDR MIRZAYEV (Ministry of Health): Departmental Defense via Expert Resistance

Emotional state: Professional irritability/Departmental protection/Technical superiority

Key manifestations:

- "the cost is indicated at over 30 billion rubles. This amount greatly bothers us" — using expertise as a weapon against "irrational" demands

- "we just won’t let you go anywhere with that price" — demonstrative technical domination

- Comparison with a project in Kursk — creating an impression of a systematic approach while actually resisting

Hidden motives: Protecting departmental resources by creating technical obstacles masked as professional standards

Relationship to the system: Low willingness to coordinate, highly protective of corporate interests

Interaction patterns: Turns technical expertise into a tool for blocking decisions, undermining the coordination architecture

Hiring will turn into comprehensive candidate analysis. During an interview, the headhunter will see a live-updating profile of the applicant: confidence, stress resistance, loyalty, proneness to conflict. The recruiter's only task will be to create a relaxed atmosphere and ask questions from a list. The decision will be made by AI.

Total Emotional Surveillance

In fact, emotional analytics will penetrate all spheres of interaction.

In business, not only employees' work will be monitored, but also their loyalty, attitude toward processes, relationships within the team, and engagement with projects. A deep emotional dossier will be maintained on each employee. Teams will be formed not only around professional skills but also around psychological compatibility. All business correspondence, chats, meeting transcripts, and the company's entire voice traffic will be analyzed.

And this won't be limited to work. Businesses will take an interest in employees' social media accounts; the texts posted there will be analyzed as well.

The government won't stand aside either. This means global monitoring of the information space to determine the prevailing emotional patterns in society: dynamic mood maps by region, identification of potentially dangerous topics, analysis of reactions to news events.

Statistical Inevitability

You might object: AI can make mistakes, be superficial, hallucinate. Of course it can. But for its customers this doesn't matter, simply because the technology is convenient, reasonably effective, and easy to use. AI offers statistically plausible forecasts, and for management that is reason enough to use it.

The emotional analytics market is already valued at $3.8 billion and is growing 25% per year. Microsoft Viva Insights analyzes employees' emotional states. Affectiva implements emotional AI in cars. Beyond Verbal analyzes emotions by voice. This isn't the future—it's the present.

Mandatory Masks

How will people react? They'll start policing themselves and their emotions, constantly asking: "How do I look from the outside?" A new type of self-censorship will emerge: emotional self-censorship. Just as the smile in the US has become part of a mandatory dress code, sincerity will become a luxury.

People will learn to write "safe" texts, say the "right" words, and display the "appropriate" emotions. An emotional protection industry will emerge: services that check texts for "toxicity," trainings in "safe communication," consultants on the "digital hygiene of emotions."

Resistance to Inevitability

Defense mechanisms will also appear. Technical: special tools for "masking" emotional patterns in texts, making it possible to generate deliberately "flat" letters that give toxicity analysis nothing to work with. Legal: laws protecting emotional privacy (the EU is already moving in this direction; Article 5 of the EU AI Act bans emotion recognition in the workplace and in education, except for medical or safety reasons). Social: movements for the right to sincerity. A toy sketch of the technical kind follows below.
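
As a taste of what the crudest masking tool might look like, here is a toy lexical filter that strips obvious affect markers. The word list and patterns are illustrative assumptions; a serious tool would have to rewrite style, most likely with an LLM of its own.

```python
# Toy sketch of an "emotional flattening" filter: a purely lexical pass
# that removes obvious affect markers before a text is exposed to
# analysis. The word list and patterns are illustrative assumptions.
import re

INTENSIFIERS = r"\b(absolutely|totally|incredibly|obviously|honestly|frankly)\b"

def flatten(text: str) -> str:
    text = re.sub(INTENSIFIERS, "", text, flags=re.IGNORECASE)  # drop intensifiers
    text = re.sub(r"[!?]{2,}", ".", text)                # collapse emphatic punctuation
    text = text.replace("!", ".")                        # no exclamations at all
    text = re.sub(r"[\U0001F300-\U0001FAFF]", "", text)  # strip emoji
    return re.sub(r"\s{2,}", " ", text).strip()          # tidy whitespace

print(flatten("This is absolutely unacceptable!!! 😡"))
# -> "This is unacceptable."
```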

But the main resistance will be psychological. People intuitively understand the difference between "reading emotions" and "being understood." AI can calculate a pattern, but can't experience it. It analyzes the mask, but doesn’t see the face.

There will also be advantages, of course: gains in efficiency and productivity as emotional analytics is applied in education, medicine, marketing, the gaming industry, and beyond.

The Price of Emotional Transparency

AI emotional analytics isn't a distant future but today's reality. The question isn't whether this technology will appear, but how we will regulate it. Will we manage to strike a balance between efficiency and privacy, between benefit and manipulation?

Perhaps the most important change will not be in how AI reads our emotions, but in how this reading changes the emotions themselves. When every word is analyzed, every gesture is interpreted, every emotion is evaluated—what remains of the spontaneity of human communication?

We stand on the threshold of an era in which sincerity becomes a strategy rather than a natural state. That is probably the most serious change AI will bring to humanity. Though we may yet learn firsthand the meaning of that bitter cataclysm: saying what we do not think and thinking what we do not say.

P.S. Contrary to my usual habit, I did not include the prompts used for this article. But I will say this: they exist, they work even on current LLMs, and with each new generation of models they will only work more accurately and effectively.
