As we move into 2024, Generative AI, powered by large language models (LLMs), has become a global force, driving a wave of automation that is changing how we work and reshaping the world. AI models like OpenAI’s ChatGPT, Google’s Gemini, and others continue to improve, demonstrating remarkable potential with each update. While Generative AI brings many benefits and could revolutionize entire industries, it also poses risks of misuse. In this blog post, we will explore the potential downsides of Generative AI and how they can be addressed. But first, let’s take a quick look back at 2001.
The year is 2001
The internet’s first iteration, known as Web1, ran from the 1990s until the early 2000s and is “also known as the Static Web. It was a read-only ecosystem, where users consumed content published by a few website owners.” [1] Web1 was a revolutionary technology that allowed a small number of content creators to reach a wide audience of consumers. The internet was heavily hyped at the time, but its business potential was still unclear. Web1’s high degree of decentralization made it difficult to enforce laws and build sustainable business models, and many companies that relied on the internet shut down after the dot-com bubble burst in 2000. Several interesting books capture the challenges and dynamics of this period. One notable example is the 2001 book “Crime and the Internet” by David S. Wall, in which he describes the dangers posed by Web1.
Wall opens with three observations: “First, the Internet has become a vehicle for communications which sustain existing patterns of harmful activity, such as drug trafficking, and also hate speech, bomb-talk, stalking and so on.” He continues: “Second, the Internet has created a transnational environment that provides new opportunities for harmful activities that are currently the subject of existing criminal or civil law like fraud, etc.” and “Third, the nature of the virtual environment, particularly with regard to the way that it distanciates time and space, has engendered entirely new forms of (unbounded) harmful activity such as the unauthorized appropriation of imagery, software tools and music products, etc.” [2]
These challenges underscored the need for a new iteration of the internet – one that would provide a safer environment and introduce sustainable monetization models. Many of these issues were addressed with Web2, which marked a shift toward centralization, concentrating control in a few dominant companies, later exemplified by the FANG giants: Facebook (Meta), Amazon, Netflix, and Google (Alphabet). Web2 is described as “the web as we currently know it. It introduced rich user experiences like social media networks, media sharing, blogs, wikis, images, and appealing aesthetics.” [1] In this era, consumers also became content creators. This centralization helped make the web as powerful and popular as it is today, supporting robust business models and attracting over 5.45 billion users worldwide.
The growth of generative AI powered by large language models (LLMs) has brought about a major shift, putting us in a transitional phase similar to the one we experienced in 2001. This shift underscores the need for a new iteration of the internet, one that can tackle challenges that were not even imaginable in 2001 and that Web2 has not fully addressed. In the age of Generative AI, which creates content faster and more easily than ever, Clifford Stoll’s early criticism of the internet in his 1995 Newsweek article “Why the Web Won’t Be Nirvana” feels more relevant than ever. Stoll wrote that “the Internet has become a wasteland of unfiltered data. You don’t know what to ignore and what’s worth reading.” [3] He recognized early on the downsides and challenges the internet would bring to our daily lives, but he could not foresee how major new inventions would amplify these issues and be used to negatively influence large audiences.
Who better to understand the risks than those who work in the AI field every day and are best positioned to recognize them? One of the most powerful tools is ChatGPT from OpenAI, led by CEO Sam Altman. In 2019, Altman also founded another company, Tools for Humanity, which is currently working on a Proof of Personhood solution. What does this mean, and what exactly is being referred to? Before we delve into that, let’s explore a few examples of how Generative AI is challenging the modern internet.
AI’s challenge to the modern internet
One of the biggest challenges facing the modern internet is the spread of synthetic media, defined by Europol as “media generated or manipulated using artificial intelligence (AI).” [4] There are various types of synthetic media, and as generative AI improves, it is becoming increasingly difficult, if not impossible, to distinguish real content from fake.
Deepfakes
One example is video deepfakes, an AI-based technology that creates realistic fake videos of people, making it seem as though the depicted events actually occurred. In its 2022 paper “Law enforcement and the challenge of deepfakes,” Europol wrote that deepfakes are “in the original, strict sense, … a type of synthetic media mostly disseminated with malicious intent.” [4] A well-known example of a deepfake is a fabricated video of an Obama speech. Another type is the audio deepfake, where AI can perfectly replicate a person’s voice, making it seem as though they said things they never actually did. One example is Donald Trump reading the Darth Plagueis Copypasta. Both video and audio deepfakes can significantly influence large audiences and pose one of the greatest threats to the internet.
AI-generated fake images
Another area of synthetic media is AI-generated fake images. A 2023 article from the German magazine Der Spiegel, titled “When machines learn to lie” with the subtitle “… Artificial intelligence creates new realities. What happens when we can no longer distinguish them from the real world?” [5], demonstrates how dangerous they can be. Fittingly, the cover is a collage of four computer-generated fake images of prominent figures of our time: Pope Benedict dancing at a party, Greta Thunberg drinking a beer on an airplane, Angela Merkel wearing a Hawaiian shirt on the beach, and Donald Trump in an orange prison uniform. AI-generated fake images pose a significant threat to social media platforms, where visuals often dominate over text.
Image-to-video and text-to-video
A cutting-edge area of synthetic media is image-to-video technology, where users upload images with a prompt, and AI generates a complete video. AI can also create videos solely from a text description or script. A striking example is the entirely AI-generated love video featuring Kamala Harris and Donald Trump, demonstrating how quickly AI can blur the lines between reality and fiction. This technology raises significant concerns about the potential for misinformation and the erosion of trust in visual media.
Bots and their role in spreading misinformation
Once synthetic media is created, the most effective way to spread it is through bots, which can be used to disseminate misinformation, manipulate public opinion, fuel political polarization, or erode trust. They are prevalent across nearly every social media platform. Bots are dangerous because they can quickly create numerous fake identities, allowing synthetic media to spread widely in a very short time; this mass creation of fake identities is known as a Sybil attack.
The internet is becoming increasingly filled with synthetic media, and in the coming years, much of what we see online will likely be created by AI and spread by bots rather than humans. This shift makes it harder to trust information and could even harm AI itself, as models might start training on synthetic media, creating a vicious cycle.
To prevent this in the future, it is essential to implement technology that differentiates between human and AI-generated content, while also defending against bot-driven Sybil attacks to limit the spread of AI-generated misinformation. One promising solution to address these challenges is the concept of Proof of Personhood.
Proof of Personhood
John R. Douceur of Microsoft Research wrote in a 2002 paper: “if a single faulty entity can present multiple identities, it can control a substantial fraction of the system. One approach to preventing these [attacks, known as Sybil attacks,] is to have a trusted agency certify identities” [6]. On today’s internet, identities are usually linked to email addresses and passwords, enabling individuals to hold multiple identities. A possible solution is to decouple identities from email addresses and ensure each entity holds a single verified identity across all internet services. If the entity is human, it would be verified as a unique human identity, capped at the global population of around 8.2 billion people. This process is called Proof of Personhood, and the idea is to “link virtual and physical identities in a real-world gathering … while preserving users’ anonymity.” [7] For machines, a unique machine identity could serve as the counterpart to the unique human identity.
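To make the deduplication idea concrete, here is a minimal TypeScript sketch of a registry that issues at most one identity per entity. All names are hypothetical illustrations, not part of any real Proof of Personhood implementation.

```typescript
// Hypothetical sketch: each entity may register exactly one identity, keyed
// by a unique commitment (for a human, this could be derived from a biometric).
class PersonhoodRegistry {
  // Maps a uniqueness commitment to the single identity issued for it.
  private identities = new Map<string, string>();

  // Issues an identity only if none exists for this commitment yet.
  register(uniquenessCommitment: string, identity: string): boolean {
    if (this.identities.has(uniquenessCommitment)) {
      return false; // Sybil attempt: this entity already holds an identity.
    }
    this.identities.set(uniquenessCommitment, identity);
    return true;
  }
}

const registry = new PersonhoodRegistry();
console.log(registry.register("commitment-alice", "id-1")); // true
console.log(registry.register("commitment-alice", "id-2")); // false: duplicate
```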
A key concern arises if a centralized authority controls unique human identity verification: the service would be restricted to specific geographic regions, serving only those it can verify. This would exclude approximately 850 million people without government-issued IDs and 1.7 billion without access to banking. In underdeveloped areas, such services may not even be available. As more internet applications require verified identities, millions could be left out, denied access to vital parts of the web. This contradicts Tim Berners-Lee’s original vision of the internet as a platform connecting people worldwide, regardless of race or location. Additionally, such a system would concentrate power in the hands of a few, allowing them to control and potentially revoke internet access, undermining digital freedom.
With this in mind, Tools for Humanity addresses these challenges with a decentralized Proof of Personhood solution called World ID, aiming to eventually operate independently of geographic borders and single-entity control as its development progresses.
World ID
World ID is described as “a mechanism that establishes an individual’s humanness and uniqueness. It can be thought of as the first and most fundamental building block in establishing digital identity.“ [7] It consists of five fundamental elements.
The first element, Authentication, is crucial for preventing fraudsters from misusing credentials, even if the legitimate user is unaware or complicit. The second element, Deduplication, ensures that each individual can verify their identity only once, preventing the creation of multiple identities. Recovery is the third element, which establishes effective mechanisms to regain access in case a user loses their credentials or if they are compromised. The fourth element, Revocation, is essential for removing compromised or malicious credentials. Finally, Expiry involves setting a predefined expiration date for credentials to maintain long-term security.
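As a rough illustration of how these five elements might fit together, the following TypeScript sketch models a credential with authentication, deduplication, expiry, revocation, and a recovery step. All names and fields are hypothetical and do not reflect the actual World ID data structures.

```typescript
// Hypothetical credential model covering the five lifecycle elements.
interface Credential {
  holderCommitment: string; // ties the credential to one unique person (deduplication)
  publicKey: string;        // used to authenticate the holder (authentication)
  expiresAt: Date;          // predefined expiration date (expiry)
  revoked: boolean;         // set when compromised or malicious (revocation)
}

// A credential is only accepted while it is unrevoked and unexpired.
function isValid(c: Credential, now: Date = new Date()): boolean {
  return !c.revoked && now < c.expiresAt;
}

// Recovery: after revoking a lost or compromised credential, a fresh one is
// issued for the same unique holder, so the person keeps a single identity.
function recover(old: Credential, newPublicKey: string): Credential {
  return {
    ...old,
    publicKey: newPublicKey,
    revoked: false,
    expiresAt: new Date(Date.now() + 365 * 24 * 60 * 60 * 1000), // one year
  };
}
```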
Verification with biometric technology
The whitepaper also states that “In a time of increasingly powerful AI, the most reliable way to issue a global proof of personhood is through custom biometric hardware” [7]. The biometric device used for World ID is called the Orb, which is designed to be AI-safe and has two main tasks. First, it confirms that the user is a real, living person, guarding against attempts to trick the system. Second, it captures an iris scan to create an iris code, a numerical representation of the key features of the iris. The Orb then checks this code against all other iris codes in a database to ensure no one is verified more than once. Afterwards, the iris images are deleted to protect privacy, and the iris code cannot be reversed into an image. The Orb is also designed so that no raw biometric data can leave the device. Once the Orb process is complete, the user receives a verified World ID. Currently, verification is only possible through Orb operators at specific locations, but the goal is to decentralize the Orb’s development, production, and operation so that no single entity controls it. The Orb’s software and hardware are the most crucial components for the success of World ID.
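To illustrate how such a uniqueness check over iris codes can work in principle, the sketch below compares bit strings by fractional Hamming distance, a standard technique in iris recognition. The bit representation and the threshold are assumptions for illustration, not the Orb’s actual parameters.

```typescript
// Illustrative uniqueness check: iris codes modeled as fixed-length bit
// arrays, compared by the fraction of differing bits (Hamming distance).
type IrisCode = Uint8Array; // each element is 0 or 1

function hammingDistance(a: IrisCode, b: IrisCode): number {
  let differing = 0;
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) differing++;
  }
  return differing / a.length; // fraction of differing bits, 0.0 to 1.0
}

// Two scans of the same eye never match exactly, so a new code counts as a
// duplicate if it is sufficiently close to any registered code. The 0.32
// threshold is a commonly cited value in the literature, assumed here.
function isAlreadyVerified(
  candidate: IrisCode,
  database: IrisCode[],
  threshold = 0.32
): boolean {
  return database.some((stored) => hammingDistance(candidate, stored) < threshold);
}
```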
Decentralized and Transparent
A verified World ID is pseudonymous: users do not need to provide personal information during registration, and Zero-Knowledge Proofs ensure it cannot be tracked across applications. So far, “6,580,502 people across all continents have already verified their World ID with an Orb, including more than 1% of the population of Chile, 1% of Argentina’s, and 2% of Portugal’s.” [8] Since Tools for Humanity considers World ID a public good, it is built in a decentralized manner on the Ethereum blockchain, with bridges to Ethereum Layer 2 solutions like Polygon and Optimism. As the table below shows, most of the World ID project’s source code is accessible on GitHub, and the contracts managing the identities are openly available on blockchain networks.
| Project | Source Code |
| --- | --- |
| World ID State-Bridge | GitHub Repository |
| MPC Uniqueness Check | GitHub Repository |
| World ID Contracts | GitHub Repository |
| Orb Software | GitHub Repository |
| World ID Identity Operator Ethereum | Ethereum address |
| World ID Identity Manager Ethereum | Ethereum address |
| Bridged World ID Optimism | Optimism address |
| Bridged World ID Polygon | Polygon address |
Examining the identity registration process in the World ID Contracts deployed on Ethereum shows that only an Identity Operator is allowed to register identities. The contract stores multiple identities in a single transaction using a tree structure, reducing both the number of transactions and the associated costs, which would otherwise be higher if each identity required its own transaction. However, restricting registration to Identity Operators reflects a centralized approach for this early phase, before World ID is fully established on the blockchain for decentralized use.
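The following toy TypeScript sketch illustrates the batching idea: many identity commitments are folded into a single Merkle root, so one transaction can update the on-chain state for the whole batch. It uses keccak256 purely for illustration; the actual contracts use a different (Poseidon-based) tree construction.

```typescript
import { keccak256, concat, toBeHex } from "ethers"; // ethers v6 assumed

// Toy Merkle root over a batch of identity commitments. The batch is padded
// to a power-of-two size with zero leaves, which is why unused slots appear
// as "0" in the calldata example below.
function merkleRoot(leaves: bigint[]): string {
  const size = 2 ** Math.ceil(Math.log2(leaves.length));
  let level = [...leaves, ...Array(size - leaves.length).fill(0n)]
    .map((leaf) => keccak256(toBeHex(leaf, 32))); // hash each 32-byte leaf

  // Repeatedly hash pairs until a single root remains.
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      next.push(keccak256(concat([level[i], level[i + 1]])));
    }
    level = next;
  }
  return level[0];
}

console.log(merkleRoot([1n, 2n, 3n])); // one root represents the whole batch
```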
Below is an example transaction on the Ethereum blockchain, in which the registerIdentities function receives an array of identity commitments, public keys linked to users’ wallets, which are stored on-chain. No sensitive information, such as iris codes, biometric data, or personal user details, is included.
```json
{
  "function": "registerIdentities(uint256[8],uint256,uint32,uint256[],uint256)",
  "params": [
    [
      "16930029079785428466595050875223914707158196087242849105937750502610676520541",
      "21727021586532668319387188814439919063017083266923391500871995156037353919677",
      "7030520674166596757577517483888230801527697829694786641891693523785953878516",
      "7911025416067019898477231204458117119555463300170230634505552339430253172572",
      "13026574540558965169735638555107508123257118193675569139130817220825364256391",
      "9468214212811752309768994872240299624193102458581246708766885731075422468259",
      "10562681395157563085731281884070260611141006911318493544413475337775820433293",
      "17077797785007301432487821403554541248557339911877332333415198303543551215913"
    ],
    "20102718352032679796815669287792161311965319462559401942033350191982786337212",
    "6153038",
    [
      "6458536842794523992934325826299835201893240762013145854250541678506516450569",
      "10756682090653715245747300703460095940595509722096373038311887351947479666148",
      "16533647536012249214062504511210823421410782060811647987619116165485946201759",
      "7595610507955464646309000513389038650635513723799878846001442898235946530610",
      "21460404176264940433904084629450327558762407109726293514263787853232217419835",
      "11298600742892184146101036640102994090559670911873747767017637643567738216780",
      "7454380940665384546769170352339235983691685689370415997186697395736253333593",
      "13228213272204604429084083374053193589030405126185251041285533149133037347541",
      "10278783490481214160739413406295807149846507109202960937329559188723637778732",
      "19465082411895562681288417566402837985184179584704838521649832722411237948920",
      "7669060975459553944956458034538382771369594005885383025747192046889339182626",
      "16272454945212193711608964871744088650476929518803541809269430516496480361505",
      "19763192957057570941080791855011713443151082576918857275241116329103443881079",
      "7207769826013902091315848417460837424126913084462352573150445954317628486680",
      "3836192925271059014690165007370851699807244604004197193635446718163090873770",
      "5476565206605388910514387238977707004882438669753149789732321155833231009155",
      "13260992143243481283907594159183669133990392569277742370407580987513668054810",
      "8860366552610032086290282881236206388766769241320000608614453528342120399349",
      "10626294612549047450901194870117946817624206349156909053852878056790655765491",
      "812977710108046027926330007616296192958826654623835117522122233531768116242",
      "14191635613523928801894813612762370388405316663483471973188311668572115143034",
      "9769010178727065276918343027857945214296290705454463572112661348136294117743",
      "20182443217754548939891273718786895320783345678779214340210769988733303592811",
      "12295986682682160056187268715948069225423025538402626915422228825654030361589",
      "1570989561147148694658349542406951575342923488968414256913495641203100854825",
      "4560540942530903401610629307449892362627295464370724613212814159098989624336",
      "14816452944848295950280666974371313059822619576978521141347029531851275527508",
      "0", "0", "0", "0", "0", "0", "0", "0", "0", "0",
      "0", "0", "0", "0", "0", "0", "0", "0", "0", "0",
      "0", "0", "0", "0", "0", "0", "0", "0", "0", "0",
      "0", "0", "0", "0", "0", "0", "0", "0", "0", "0",
      "0", "0", "0", "0", "0", "0", "0", "0", "0", "0",
      "0", "0", "0", "0", "0", "0", "0", "0", "0", "0",
      "0", "0", "0", "0", "0", "0", "0", "0", "0", "0",
      "0", "0", "0"
    ],
    "8799313405953709675471649033130603604313172284336812431026600251456404889185"
  ]
}
```
The source code also shows that third-party verification is built on the Semaphore library, a zero-knowledge protocol designed to verify identity while preventing a World ID from being tracked across different internet services. Since the iris code is not stored on the blockchain and the identity remains on the user’s phone, a damaged or replaced phone without a backup means the World ID is lost and must be recovered.
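As a rough sketch of how Semaphore-style verification looks from a developer’s perspective, the example below uses the Semaphore JavaScript packages (v4 APIs assumed; they differ between versions). The app-specific scope acts as an external nullifier, which is what prevents proofs generated for different services from being linked to one identity.

```typescript
import { Identity } from "@semaphore-protocol/identity";
import { Group } from "@semaphore-protocol/group";
import { generateProof, verifyProof } from "@semaphore-protocol/proof";

// Sketch under the assumption of Semaphore v4 package APIs.
async function demo() {
  const identity = new Identity();                 // secret stays on the user's device
  const group = new Group([identity.commitment]);  // set of registered commitments

  // Prove membership in the group for the scope "my-service" without
  // revealing which member is proving; different scopes yield unlinkable
  // nullifiers, so services cannot correlate users.
  const proof = await generateProof(identity, group, "hello", "my-service");

  console.log(await verifyProof(proof)); // true if the proof is valid
}

demo();
```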
After an identity is registered on Ethereum, the data is bridged to the Layer 2 blockchains Optimism and Polygon, as seen in the World ID State-Bridge contracts. This enables identity verification not only on Ethereum but also on Optimism and Polygon, with the design allowing more blockchains to be added in the future.
To address security concerns about potential hacks and exposure of sensitive data from the centralized storage of iris codes used for identity comparison, Tools for Humanity developed the MPC Uniqueness Check library [9]. This process encrypts each iris code into several unique secret shares distributed across multiple parties, which collaborate to compute results on the encrypted data without learning anything about the underlying secret.
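A toy example of the secret-sharing idea: the sketch below splits an iris code into XOR shares, where each share alone is uniformly random and reveals nothing, and only combining all shares reconstructs the code. The real MPC Uniqueness Check goes further and lets the parties compare codes without ever reconstructing them, which this sketch does not attempt.

```typescript
import { randomBytes } from "node:crypto";

// Split a code into `parties` XOR shares: n-1 random shares, plus one final
// share chosen so that XOR-ing all shares yields the original code.
function split(code: Uint8Array, parties: number): Uint8Array[] {
  const shares: Uint8Array[] = [];
  const last = Uint8Array.from(code);
  for (let p = 0; p < parties - 1; p++) {
    const share = Uint8Array.from(randomBytes(code.length)); // uniformly random
    for (let i = 0; i < code.length; i++) last[i] ^= share[i];
    shares.push(share);
  }
  shares.push(last);
  return shares;
}

// Reconstruction requires every share; any subset looks like random noise.
function reconstruct(shares: Uint8Array[]): Uint8Array {
  const code = new Uint8Array(shares[0].length);
  for (const share of shares) {
    for (let i = 0; i < code.length; i++) code[i] ^= share[i];
  }
  return code;
}

const shares = split(Uint8Array.from([1, 0, 1, 1]), 3);
console.log(reconstruct(shares)); // Uint8Array [1, 0, 1, 1]
```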
The Orb’s source code is also publicly available, providing detailed insights into its functionality and the structure of its various components. It contains all the code on the Orb necessary for capturing images and securely transmitting them to the World App, the frontend for World ID. While this sounds like a promising development, there are also potential drawbacks to consider.
Potential drawbacks
Since the Orb plays a key role in the identification process, there is a risk that it could be hacked or altered, potentially leading to incorrect identity verification. While the Orb’s software is open source, the hardware remains a closed system, making it impossible to verify whether it properly handles sensitive data as intended.
Another concern is the centralization of key components, including the iris code database and the verification process, which is limited to authorized Orb operators. This increases the risk of the database being hacked and exploited by malicious actors. The algorithm that generates an iris code could also mistakenly reject a new identity by concluding that the person has already been verified. And since World ID is the same identity across all integrated internet services, a bug could allow users to be tracked across platforms, posing a significant privacy risk.
Another potential issue is the transferability of World ID. If someone is deceived into selling or giving away their World ID keys, a fraudster could then use the ID for authentication. This would allow fraudsters to bypass the “one person, one ID“ principle by acquiring multiple World IDs. This risk could be reduced by implementing a second biometric authentication, such as Face ID on iOS, on the user’s device. Additionally, in cases of suspicion, a reauthentication process at an Orb could be required to confirm the user still controls their World ID.
Summary
In this blog article, we explored the dangers of Generative AI, such as deepfakes, fake images, and text-to-video content—collectively known as synthetic media—and the risks of spreading them via bots. We also took a brief look back at the history, exploring the transition from Web1 to Web2 and the challenges that arose during the Web1 era.
Next, we explored Proof of Personhood, the challenges it addresses, and its implementation through a system called World ID. We examined how biometric verification and decentralized systems could play a crucial role in preventing the misuse of AI-generated content and ensuring trust online. World ID is still in its early development stages, as shown by the centralization of key components like the verification process and the iris code database. However, the Whitepaper details ambitious plans to gradually decentralize these elements, preventing any single entity from exerting excessive control over the system. While there are several drawbacks, the project is aware of them, and most are addressed in the Whitepaper with ongoing work to resolve them.
The World ID App Store offers apps for Discord, Reddit, Shopify, Telegram, and many other services that enable identity verification with World ID [10]. Thanks to its open system, third parties can easily develop verification apps for unique human identities, helping to prevent bots and malicious users from disrupting the system.
In the coming years, we will see if they can fulfill their promises and whether World ID can become a significant player in the future of the internet. As AI continues to evolve, safeguarding digital identities will be increasingly important to counter misinformation and protect privacy. It will be interesting to see how the challenges AI poses to the internet will be addressed and which concepts will ultimately prevail in shaping the future of the web.
[1] Ledger Academy (2023), Web 1.0 Meaning,
online. Available at: https://www.ledger.com [Accessed 10 Sep 2024]
[2] David S. Wall (2001), Crime and the Internet,
book. ISBN 9780415244299
[3] Clifford Stoll (1995), Why the Web Won’t Be Nirvana,
online. Available at: https://www.newsweek.com [Accessed 10 Sep 2024]
[4] Europol (2022), Law enforcement and the challenge of deepfakes,
online. Available at: https://www.europol.europa.eu [Accessed 10 Sep 2024]
[5] DER SPIEGEL 28/2023 (2023), Wenn Maschinen lügen lernen,
online. Available at: https://www.spiegel.de [Accessed 10 Sep 2024]
[6] John R. Douceur (2002), The Sybil Attack,
online. Available at: https://www.microsoft.com [Accessed 10 Sep 2024]
[7] Worldcoin (2024), Worldcoin Whitepaper,
online. Available at: https://whitepaper.worldcoin.org [Accessed 10 Sep 2024]
[8] Worldcoin (2024), Introducing World ID 2.0,
online. Available at: https://worldcoin.org [Accessed 10 Sep 2024]
[9] Worldcoin (2024), Worldcoin Foundation unveils new SMPC system, deletes old iris codes,
online. Available at: https://worldcoin.org [Accessed 10 Sep 2024]
[10] Worldcoin (2024), Worldcoin Apps,
online. Available at: https://worldcoin.org [Accessed 10 Sep 2024]