Brian Chesky on the demand for authenticity and the unresolved issue of verification
The founder of Airbnb predicts a shift toward the real in the age of the artificial. The main problem now lies in the technical implementation.
In November, Brian Chesky put forward a thesis that caught attention, and he was promptly called one of Silicon Valley's most prescient visionaries.
Brian offered a worthy provocation: three years after the debut of ChatGPT (then running on GPT-3.5), we still have not seen a truly significant application of these technologies to real problems: "People are still glued to their phones, only instead of Google they go to ChatGPT." In the App Store's top 50, the first three places belong to Sora, ChatGPT, and Gemini, while the other 47 are the same apps that existed before the advent of AI.
Chesky's metaphor:
"Intellect is gold. Enterprise startups are companies selling picks and shovels. But what do we do with gold? We invented the jet engine and attached it to a car. We need to invent the airplane."
Well, suppose this thesis can be flipped: the real gold, the breakthrough of the future, lies not in the top AI labs but in real businesses whose efficiency or product AI will turn upside down. That is where we should look, at the "other 47" services in the top 50: which of them will successfully disrupt their own business with AI, and which will be disrupted by a newcomer and knocked off Olympus.
But here’s another interesting (though not original) thought from Brian:
"The important word in AI is 'artificial.' After Sora, you can’t know for sure if what you see on the screen is real. In the future, everything on the screen is artificial or may be artificial."
From here comes the strategy: "The opposite of artificial is real. The opposite of the screen is the real world."
His forecast: the digital world will become increasingly artificial and immersive, but the physical world will remain physical. Robots will replace many jobs, but people will still need connection with other people.
"Your followers won’t come to your funeral. No one has changed someone else's opinion in a YouTube comment section. Soon your friends will be AI. Therefore, there must be a movement towards the real."
A problem without a solution
Chesky's vision operates at the level of demand forecasting: people will genuinely want to swing back toward reality once the content around them no longer gives an objective picture of what is happening.
But how can this be implemented technically? Here’s what we have seen so far:
Account verification is insufficient: human accounts can still host AI-generated content. Even if you verify account owners' passports, that does not prevent AI-generated junk or misinformation from being posted under their names.
Watermarks get faked or stripped. Feeds are clogged with Sora videos and Nano Banana images whose AI-content markers have been removed.
Fact-checking centers get deceived, get ignored, and struggle to keep up with the volume. Both Zuckerberg and the previous Twitter administration experimented with them; they serve more as cover than as a solution.
The demand for reality will grow. But a good question does not always have a good answer: verifying the authenticity of content in a way that cannot be circumvented remains an unsolved problem.
Cryptography and Personal Responsibility: The World Approach
Sam Altman, the founder of OpenAI, and Alex Blania, the founder of Tools for Humanity, run a joint project called World (formerly Worldcoin). Its stated mission is to ensure human prosperity in an era of ever more AI bots: the creators argue that certain services should be accessible only to living people, among them financial applications, trading platforms, social networks, dating, and video games.
At the core of World is biometrics: a way to identify a specific person and securely link them to digital accounts. As far as one can judge, a small team around the project has been building on these technologies (at least since April 2025) to create a social network for living people, where verification runs through biometrics captured by special spherical devices called Orbs, scattered around the world. Mass-market services have not done this, relying instead on email addresses and phone numbers.
But what is interesting here is not only the social network itself, or predictions about whether the launch will succeed and find demand, but also the mechanism for controlling what a person posts, not just for verifying that their identity is real.
Version 3.0 of the World ID protocol introduced a feature called Deep Face. It aims to address the problem of deepfakes in real time (for example, during video calls).
The system compares the video stream with your biometric data, previously verified through an Orb, a registration station that cannot be faked. If someone tries to impersonate you on FaceTime or Zoom using an AI filter, the system detects a mismatch in the facial "digital signature" and blocks the authorization.
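To make the mechanics concrete, here is a minimal sketch of what such a check could look like. Everything below is an assumption: the function names, the embedding model, and the threshold are hypothetical, since World has not published Deep Face internals in this form.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_frame(frame_embedding: np.ndarray,
                 orb_template: np.ndarray,
                 threshold: float = 0.85) -> bool:
    """Compare a live-frame embedding against the Orb-verified template.

    frame_embedding: produced by some face-recognition model from the call's video.
    orb_template:    the embedding stored at Orb registration (hypothetical format).
    threshold:       an illustrative cutoff; a real system would calibrate it.
    """
    return cosine_similarity(frame_embedding, orb_template) >= threshold

# If verify_frame returns False over a stretch of frames, the call is flagged
# as a possible AI filter and the authorization is blocked.
```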
As for regular videos and posts, the approach of World's architects (and Sam Altman in particular) is not to "guess" whether a video is generated (which is impossible, and all AI-detector services are completely useless), but to cryptographically verify the source of the data.
Responsibility lies with the author: if a video is published from an account with a World ID, a specific living person answers for it. If they publish a fake, their reputation suffers and their access to services can be revoked.
Verification of the recording: World promotes the idea of integration with camera hardware. In the future, a smartphone camera could "sign" a file at the moment of capture with a key linked to your World ID. This creates metadata (per the C2PA standard) that confirms: this video was recorded by a physical lens at a specific time, not generated by a neural network.
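As a rough sketch of the principle (not the actual C2PA container format), the flow could look like this: the device hashes the capture, signs the hash plus metadata with a hardware-held key, and anyone can later verify the record against the device's public key. All names and the key handling here are hypothetical.

```python
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical device key: in practice it would live in secure hardware
# and be linked to the owner's World ID during registration.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

def sign_capture(video_bytes: bytes, world_id: str) -> dict:
    """Produce a provenance record for a freshly captured video."""
    manifest = {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "world_id": world_id,             # whose key vouches for the capture
        "captured_at": int(time.time()),  # claimed capture timestamp
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload).hex()}

def verify_capture(video_bytes: bytes, record: dict) -> bool:
    """Check that the file is unmodified and the signature is genuine."""
    if hashlib.sha256(video_bytes).hexdigest() != record["manifest"]["sha256"]:
        return False  # the file was altered after signing
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    try:
        device_public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False
```

The hard part is not this arithmetic but key custody: if the signing key can be extracted from the device, a generator can simply sign its own output, so the whole scheme stands or falls with the secure hardware.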
Let's make a few assumptions and teleport into the future. Suppose Altman has managed to build a sought-after social network. Suppose real people have abandoned the other products, flooded with AI bots and neural junk. Suppose the new verification methods have proven effective and passed government regulation, including rules holding content creators responsible for its authenticity.
And here we are in a world where the personal responsibility of content creators for AI-generated content has been enshrined in law, and where, to get the attention of other living people, you must operate under an account linked to your biometrics.
And if you post something that, in the worldview of Sam Altman, his companies, and the government apparatus, counts as misinformation or misleading, your social rating drops. First you lose the right to post on this social network, or some ChatGPT perks; later, your mortgage rate goes up.
Is that normal?
Posts like this appear more often on my Telegram channel, where I mainly write about AI and its applications. What? Yes, I just gave away that spoiler myself.