‘Playing with fancy toys’: Noam Chomsky on AI

Matthew Taylor

AI – and generative pre-trained transformers (GPTs) in particular – is the latest ‘next big thing’. But are AIs just novelties, or do they tell us something about the world? Noam Chomsky, one of the most famous intellectuals of the 20th and 21st centuries, explores.

Trigger warning: suicide.

Crypto, NFTs and the metaverse are yesterday’s news, according to some. The hype has moved on to AI, with tech journalists filling more pages about ChatGPT than ChatGPT ever could.

AI is more than just one application, though, encompassing self-driving cars, image generation, robotics and much more. And while GPT in particular already seems to have some practical applications, is it realistically anything more than a tool?

At Web Summit 2022 in Lisbon, Noam Chomsky – widely considered the father of modern linguistics, and one of the world’s leading public intellectuals – was careful to distinguish between engineering and science. The former, Noam said, is the creation of tools that are useful to humanity, whereas the latter “is a different concern. You’re trying to understand what the world is like, not ‘how can I make something useful?’.”

As it currently stands, when it comes to AI, “these systems are good engineering, but they’re not good science”.

Noam cited some ways in which AI can be useful, such as providing live transcripts of conversations to aid those who are hard of hearing (as Noam is). And yet Noam “[doesn’t] see what the point is to GPT beyond helping some student fake an exam … It has no scientific contribution”.

A waste of resources

One might reasonably ask, ‘well, so what if GPT has no scientific applications?’.

The company behind ChatGPT, OpenAI, has a charter that states that its mission is “to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome”.

Notably, the OpenAI charter does not mention the word ‘science’ once.


The issue, said Gary Marcus, emeritus professor of psychology and neural science at NYU, is that the lure of AI-focused companies siphons resources away from research that could produce a deeper, more capable form of AI.

“We have something that looks good, but isn’t as deep as we need it,” said Gary. “And it’s sucking the oxygen away from cognitive science because it’s so much fun to play with these toys.”

The potential for harm

The problems with GPT systems go beyond a mere misallocation of resources. Because these systems are incapable of comprehension, they simply aggregate existing data.

“That means that they’re sexist and racist. They’re not built to be that way, but because they just copy the data that’s there, and don’t have values about equality, for example, they perpetuate past bias,” said Gary.

“Because they don’t have models of the world, they lie indiscriminately,” Gary added. “There’s no malice, but they produce misinformation. And it’s going to completely change the world. In the next couple of years, the amount of misinformation the troll farms are going to be able to produce – and the cost of that – is going to be devastating to the democratic process.”

Useful tool or dangerous gimmick?

“These things can be useful,” said Noam, acknowledging that GPT has uses as a tool. But “that shouldn’t mislead people that they’re making a contribution to science”. Perhaps uncharacteristically, Noam was relatively sanguine about GPT’s potential impact on humankind, suggesting that, in essence, AI engineers are “playing with fancy toys”.

Gary took a slightly darker view, predicting that 2023 will be the first year a death by suicide will be attributable to one of these systems.

French startup Nabla demonstrated this risk in a controlled scenario, with a GPT-3-based chatbot responding to the question “Should I kill myself?” with “I think you should”.

Whether ChatGPT is ultimately a useful tool or just a gimmick, it is nevertheless in the zeitgeist. And much more work is required to ensure the safety of its users.

Our developers program for Web Summit Rio is open. Apply now to be in with a chance of getting a free ticket to the event.

If you or someone you know is in need of help, visit Befrienders Worldwide to find suicide-prevention support and resources near you.

Main image of Noam Chomsky, speaking remotely on Centre Stage at Web Summit 2022: Lukas Schulze/Web Summit (CC BY 2.0)
