In March 2023, high-profile figures in the tech industry, including Elon Musk, signed an open letter warning of the existential risks posed by AI.
Several weeks later, Geoffrey Hinton left his research role at Google, intending to deliver a similar message to a wider audience.
At the core of the message is a desire to highlight the realities of developing large-scale AI models: systems that could render many human jobs obsolete and, at the extreme, develop into a superintelligence whose goals are incompatible with human existence. Nick Bostrom described superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”.
At Collision in Toronto, Canada, Geoffrey urged big tech companies to test for and prevent such doomsday scenarios.
Of all the risks that AI poses to humanity, the most overlooked is existential risk – the risk that AI could lead to human extinction – said Geoffrey, who is often referred to as the godfather of AI.
“Right now, there are 99 very smart people trying to make AI better and one very smart person trying to figure out how to stop it from taking over.”
– Geoffrey Hinton
Although other risks – such as bias, misinformation and job losses – are more immediate concerns, Geoffrey warned that we may be unprepared for superintelligent machines that, in the near future, could be motivated to take control of humanity.
“Before it’s smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might go wrong … and I think the government could maybe encourage the big companies developing it to put comparable resources into this,” said Geoffrey.
With the possibility that AI could reach superintelligence, the computer scientist advised researchers and companies to “do empirical work into how it goes wrong, how it tries to get control, whether it tries to get control”.
“Right now, there are 99 very smart people trying to make AI better and one very smart person trying to figure out how to stop it from taking over. And maybe you want it more balanced.”
And apparently, we’re not taking this seriously enough.
A recent Nature editorial claimed that “talk of artificial intelligence destroying humanity plays into the tech companies’ agenda and hinders effective regulation of the societal harms AI is causing right now”.
Geoffrey isn’t convinced: “[The editorial] compared existential risks with actual risks, implying the existential risk wasn’t actual. I think it’s important that people understand it’s not just science fiction; it’s not just fearmongering. It is a real risk that we need to think about. And we need to figure out in advance how to deal with it”.
“The jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled, and plumbing is that kind of job.”
– Geoffrey Hinton
The godfather of AI was keen to discuss the current and very real risk of AI and automation replacing certain human jobs.
When asked by the Atlantic CEO Nicholas Thompson what careers younger people should be planning for – given the great leap forward in AI over the past couple of years – Geoffrey gave a one-word answer: plumbing.
Why plumbing?
“I’ll give you a little story about being a carpenter. If you’re a carpenter, it’s fun making furniture but it’s a complete dead loss because machines can make furniture … What you’re good for now is repairing furniture, or fitting things into awkward spaces in old houses – making shelves in things that aren’t quite square,” explained Geoffrey.
“The jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled, and plumbing’s that kind of job.”
Should knowledge workers retrain to prepare for jobs requiring manual dexterity and the ability to repair machines? Not so fast. The former Google researcher – who left the company on good terms and still has insights into its AI developments – said that multimodal AI would be the next leap forward.
Multimodal models are trained not just on language but also on vision. With training data that includes YouTube videos, such models could learn far more than written language, picking up how humans interact through voice, body language and more.
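To make the idea concrete, here is a deliberately toy sketch of the core pattern behind multimodal models: separate encoders turn each modality into a vector, and the vectors are fused into one joint representation that the rest of the network operates on. The encoders and names below are invented for illustration and bear no relation to Gemini's actual architecture.

```python
# Illustrative sketch only (not any real model's architecture): a multimodal
# model runs each modality through its own encoder, then fuses the results.

def encode_text(text, dim=4):
    # Toy text encoder: normalised character-frequency features of size `dim`.
    vec = [0.0] * dim
    for ch in text.lower():
        vec[ord(ch) % dim] += 1.0
    total = sum(vec) or 1.0
    return [v / total for v in vec]

def encode_image(pixels, dim=4):
    # Toy image encoder: mean brightness of `dim` equal slices of a pixel list.
    chunk = max(1, len(pixels) // dim)
    return [sum(pixels[i * chunk:(i + 1) * chunk]) / chunk for i in range(dim)]

def fuse(text, pixels):
    # Fusion by concatenation: downstream layers see one joint vector,
    # so what the model learns about images can inform language, and vice versa.
    return encode_text(text) + encode_image(pixels)

joint = fuse("a cat on a mat", [0.1, 0.9, 0.4, 0.4, 0.2, 0.8, 0.5, 0.5])
print(len(joint))  # one 8-dimensional joint representation
```

Real systems replace these toy encoders with transformers and learn the fusion end to end, but the structural idea, per-modality encoders feeding a shared representation, is the same.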
The Google Brain and Google DeepMind teams – recently merged into a single unit, Google DeepMind – are creating their own multimodal AI: Gemini.
When asked if there was anything that a sufficiently well-trained model could not do in the future, Geoffrey responded: “If the model is also trained on vision and picking things up and so on, then no”.
“We’re just machines,” Geoffrey continued. “We’re wonderful, incredibly complicated machines. But we’re just a big neural net. And there’s no reason why an artificial neural net shouldn’t be able to do everything we can do.”
Main image of Geoffrey Hinton speaking on stage at Collision 2023: Ramsey Cardy/Web Summit (CC BY 2.0)