Nym: Chelsea Manning on tech in the age of AI and Web3

Matthew Taylor

Ahead of Web Summit Rio, we sat down with activist, whistleblower and Nym security consultant Chelsea Manning to discuss the ongoing challenges of privacy and data collection, and the shifting landscape of tech in the age of AI and Web3.

For the last two years, tech news has been dominated by the rising and falling fortunes of cryptocurrency and, latterly, AI. Both trends have raised concerns not only about the technologies themselves, but about their wider impact on civil society.

Ahead of a Center Stage talk at Web Summit Rio, we spoke to renowned whistleblower Chelsea Manning about the state of tech in 2023.

Chelsea is a security consultant at Nym, a cutting-edge privacy platform combining multiple technologies to protect users against invasive surveillance and traffic analysis across various digital services. It is built on open-source code and a decentralized architecture. This means it’s not controlled by a single entity, thereby preventing censorship or manipulation by governments or corporations.

Web Summit Rio: How did you become involved with Nym?

Chelsea: In 2021, I met Nym CEO Harry Halpin, who contacted me to look for security weaknesses in his new privacy project. After reviewing the white paper they published and running a security audit, I progressed into a deeper review of their code, the underlying math, and defensive scenarios against government attacks. I then stayed on as a security consultant, focusing on hardware optimization.

At Web Summit Rio, you’ll be giving a talk on the future of online privacy, with specific reference to AI and Web3. What opportunities or challenges do you think these new technologies offer in the fight for greater privacy rights?

As AI becomes more integrated into society, information verification will become a fundamental problem. We can use Web3 to address this: blockchain technology can create a decentralized record of where information comes from, who produced it and where it was created.

I think there’s a natural selection process underway over where blockchain technology is actually useful. So far, it has also enabled web scams, rampant data violations and a range of other problems.

But I think we’re reaching a phase in that iterative process where the technology is starting to mature, and there is now the possibility of using it to augment security-based applications. An event can then be verified on a distributed ledger to prove that it historically occurred, leaving less room for dispute.
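The idea of recording provenance on a distributed ledger can be sketched in miniature. In the hypothetical example below, a hash-chained append-only log stands in for a real ledger; all names are invented, and a production system would replicate the log across many independent nodes rather than hold it in one process.

```python
import hashlib
import json

def record(ledger, source, content):
    """Append a provenance entry, linked to the previous entry by hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "source": source,
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "prev": prev,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def verify(ledger):
    """Check that no entry has been altered, removed or reordered."""
    prev = "0" * 64
    for e in ledger:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

ledger = []
record(ledger, "newsroom-a", "Article text v1")
record(ledger, "newsroom-b", "Photo caption")
print(verify(ledger))           # True: chain intact
ledger[0]["source"] = "forged"  # tamper with the first entry
print(verify(ledger))           # False: tampering detected
```

Because each entry's hash covers the previous entry's hash, rewriting any historical record breaks every link after it, which is what makes a dispute about "what happened when" checkable.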

It’s unfair to ask general users to protect themselves digitally. This is an infrastructure problem, not an individual user problem, because people are really forced by society to interact on social media and text-based communications platforms.

Do you think that anything has changed, for better or worse, in the wider privacy debate since the events of 2010?

On one hand, the disclosures of the 2010s have led to progress in privacy-enhancing technologies, and legal reforms like GDPR.

These developments have helped protect individual privacy rights and enabled more secure communication channels, but, as we drown in information, new challenges have emerged. Governments continue to invest in surveillance capabilities and seek ways to access encrypted data, and technology-focused companies face pressure to share user data.

In this context, our struggle is no longer simply about secrecy versus privacy. It’s about navigating a world overflowing with data and ensuring that valuable, accurate information remains accessible and protected.

It’s crucial that we remain vigilant and continue advocating for transparency, accountability, and stronger privacy protections.

Can we trust private companies to develop and use AI technology ethically? Should there be a standardized code of AI ethics to ensure the technology isn’t misused?

While many companies are genuinely committed to ethical AI development, there are still risks associated with profit-driven motivations, competitive pressures, and potential misalignment of values between companies and the public interest.

I believe that a standardized code of AI ethics is essential to ensure that the technology isn’t misused or its potential harms left unchecked.

Such a code would provide guidelines for responsible AI development and usage, including considerations like transparency, fairness, accountability, privacy and security. It could also address challenges related to bias, discrimination, and the potential amplification of existing social inequalities.

But we also need strong regulatory oversight, and collaborative efforts among governments, academia, civil society and the tech industry working together to create a more robust framework.

Is it possible to roll back the clock on data collection? Or has that fight been lost to large private corporations and state spying organizations?

The reality is that we live in a world where data is the new currency, and many entities benefit from collecting, analyzing and utilizing this information.

While it may not be feasible to completely reverse the situation, there are still avenues to push for more robust privacy protections and greater transparency in data collection practices. We have to continue to fight for stronger privacy regulations and demand accountability from both private companies and government organizations.

One approach is to advocate for the development and adoption of privacy-enhancing technologies, such as end-to-end encryption, decentralized systems and secure AI-driven solutions. These technologies can help protect user data and limit the ability of corporations and governments to exploit personal information.

Public awareness and engagement play a critical role in shaping the future of data collection practices. By staying informed and actively participating in the privacy debate, individuals can exert pressure on companies and governments to be more transparent and accountable in how they collect data.

While it might not be possible to entirely roll back the clock on data collection, the fight for privacy is far from lost, and our collective efforts can make a significant difference in shaping the digital landscape for generations to come. We can still strive for a future in which privacy is valued and the power dynamics between individuals, corporations and governments are more balanced.

As AI technology becomes increasingly capable, does it pose a threat to civil society? And what do you think we can do now to ensure that the benefits of AI outweigh the risks?

There is a danger that AI-generated deepfakes will eventually become so convincing that they end up in courtroom settings, where individuals will have to forensically prove to a court whether a piece of media is real or AI-generated.

From a privacy perspective, we can use cryptography to ensure that society’s exposure to products or surveillance that leverage AI is kept to a minimum. Cryptographic proofs and verification techniques will be essential tools for countering disinformation and ensuring verifiable information.
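As a small illustration of cryptographic verification, the sketch below ties a piece of content to its publisher with a keyed MAC over the content's hash. This is a deliberate simplification with invented names: real provenance systems use public-key signatures (e.g. Ed25519) so that anyone can verify authenticity without holding a secret key.

```python
import hashlib
import hmac

# Stand-in for a real signing key; in practice this would be an
# asymmetric key pair, with only the public half distributed.
PUBLISHER_KEY = b"demo-shared-secret"

def publish(content: bytes):
    """Hash the content and produce an authentication tag over the hash."""
    digest = hashlib.sha256(content).digest()
    tag = hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()
    return digest.hex(), tag

def authentic(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    digest = hashlib.sha256(content).digest()
    expected = hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"Genuine footage, published 2023-05-01"
_, tag = publish(original)
print(authentic(original, tag))             # True: untouched content
print(authentic(b"Doctored footage", tag))  # False: content was altered
```

The point is the asymmetry: producing a valid tag requires the key, but checking one is cheap, so a court or a platform could verify provenance without trusting whoever handed them the file.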

Other privacy-enhancing technologies – like differential privacy, homomorphic encryption, and secure multi-party computation – can help protect sensitive data and maintain individual privacy.
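Of the techniques just mentioned, differential privacy is the simplest to sketch: add calibrated noise to an aggregate statistic so that no single individual's presence can be inferred from the result. The example below is a minimal, assumed illustration (not any particular product's implementation) of the Laplace mechanism applied to a count query.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Laplace mechanism: a count query has sensitivity 1, so noise drawn
    from Laplace(0, 1/epsilon) yields epsilon-differential privacy."""
    scale = 1.0 / epsilon
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Release how many users matched a query without exposing whether any
# specific individual was among them. Smaller epsilon = more noise.
noisy = dp_count(true_count=1000, epsilon=0.5)
```

A single person joining or leaving the dataset changes the true count by at most one, and the noise is scaled to mask exactly that difference, which is the formal guarantee behind the intuition.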

50 years from now, how do you think people will look back on this era of cyber activity?

I am hesitant to speculate that far ahead myself, but this era might be remembered as a pivotal moment in history when humanity first grappled with the complexities of integrating digital technologies into everyday life.

People in the future may see this time as a period of experimentation, where societies tried to find a balance between the benefits of these technologies and the potential risks and drawbacks they presented.

Just as the debates over data collection and privacy in the 1980s look quaint to us now, the issues facing us today might pale in comparison to those of a future that is more trying and difficult than the present.

Chelsea Manning will be speaking on Center Stage at Web Summit Rio. Check out the details of her talk.

Main image of Chelsea Manning: Nym
