Artificial intelligence (AI) researchers are increasingly concerned that AI bots could dominate the internet, spreading uncontrollably like a digital invasive species. Instead of trying to limit the growth of bots and AI-generated content, one group of researchers has proposed a different approach.
In a recently published preprint, a team of researchers proposes a system in which people would verify their humanity in person, to another human, in order to obtain "personhood credentials."
The concept revolves around establishing a method to prove someone’s humanity without revealing their identity or personal information. This idea may sound familiar to those in the crypto community, as it draws on proof of personhood blockchain technologies.
Digital Verification
Typically, services like Netflix or Xbox Game Pass that charge a fee rely on users’ financial institutions for verification. While this compromises anonymity, most people accept it as part of the service.
In contrast, anonymous platforms that don’t tie accounts to payment information must find other ways to limit bots and duplicate accounts. As of August 2024, for instance, ChatGPT’s built-in safeguards would likely prevent it from being used to mass-create free Reddit accounts. And while some AI systems can bypass CAPTCHA-style checks, fully automating account setup and verification remains difficult.
However, the research team – which includes experts from organizations such as OpenAI, Microsoft, a16z Crypto, and academic institutions like Harvard, Oxford and MIT – argues that current methods won’t suffice for long.
In the near future, it may become impossible to distinguish between humans and AI without face-to-face interaction.
Pseudo-Anonymity
The researchers propose a system in which designated organizations or facilities would serve as “issuers” of personhood credentials. These issuers would employ humans to verify that applicants are, in fact, human. Once an individual is verified, the issuer would certify their credential, while the system itself would limit the issuer’s ability to track how that credential is later used. How such a system could be secured against cyberattacks and the threat of quantum-assisted decryption remains unclear.
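The paper does not prescribe a specific cryptographic construction, but blind signatures are one well-known way to get this kind of unlinkability. The sketch below is a simplified, textbook RSA illustration with toy parameters and no padding – an assumption for illustration, not the researchers’ protocol: the holder blinds a random token, the issuer signs it after the in-person check, and the unblinded signature later verifies against the issuer’s public key without the issuer being able to tell which token it signed.

```python
# Minimal sketch (not the paper's protocol): RSA blind signatures show how an
# issuer could certify a credential without being able to link the signed token
# back to the person who requested it. Textbook RSA with toy primes; a real
# deployment would use a vetted cryptographic library and proper key sizes.
import hashlib
import secrets
from math import gcd

# --- Issuer key pair (toy parameters, illustration only) ---
p, q = 61, 53                      # toy primes
n = p * q                          # RSA modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def hash_to_int(msg: bytes) -> int:
    """Hash the credential token into the RSA message space."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# --- Holder: create a random token and blind it before sending it to the issuer ---
token = secrets.token_bytes(16)          # the holder's secret credential token
m = hash_to_int(token)
while True:
    r = secrets.randbelow(n - 2) + 2     # blinding factor, must be invertible mod n
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n         # the issuer only ever sees this value

# --- Issuer: after verifying the person face to face, sign the blinded value ---
blind_sig = pow(blinded, d, n)

# --- Holder: unblind to obtain a signature the issuer cannot link to the request ---
sig = (blind_sig * pow(r, -1, n)) % n

# --- Any service: check the credential against the issuer's public key ---
assert pow(sig, e, n) == hash_to_int(token), "credential signature invalid"
print("credential verified without the issuer learning which token it signed")
```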
Organizations offering services could choose to only allow access to users with verified credentials, effectively limiting each person to one account and preventing bots from accessing these services.
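As a rough illustration of that gating logic, the hypothetical Python sketch below shows a service that creates an account only when a credential verifies and has not already been used; the class and function names are assumptions for this example, not anything specified in the paper.

```python
# Hypothetical service-side sketch: accept only certified personhood credentials
# and tie each credential to at most one account. `verify_credential` stands in
# for checking the issuer's signature (see the blind-signature sketch above).
import hashlib

class CredentialGatedService:
    def __init__(self, verify_credential):
        self.verify_credential = verify_credential   # callable: (token, sig) -> bool
        self.accounts: dict[str, str] = {}           # credential fingerprint -> account id

    def register(self, token: bytes, sig: int, account_id: str) -> str:
        if not self.verify_credential(token, sig):
            return "rejected: credential not certified by a recognized issuer"
        fingerprint = hashlib.sha256(token).hexdigest()
        if fingerprint in self.accounts:
            return "rejected: this credential already backs an account"
        self.accounts[fingerprint] = account_id
        return f"account {account_id} created"

# Example usage with a stub verifier that accepts everything:
service = CredentialGatedService(verify_credential=lambda token, sig: True)
print(service.register(b"token-123", 0, "alice"))   # account created
print(service.register(b"token-123", 0, "alice2"))  # rejected: duplicate credential
```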
While the paper doesn’t delve into which centralized pseudo-anonymity method would be most effective, it does acknowledge the potential challenges and calls for further research.
Source: Cointelegraph