Our future AI overlords need a resistance movement


Artificial intelligence has moved so fast that even scientists are struggling to keep up. In the past year, machine learning algorithms have started creating rudimentary movies and strikingly realistic fake photos. They even write code. We will probably look back on 2022 as the year AI shifted from merely processing information to creating content about as well as many people can.

But what if we also look back on it as the year artificial intelligence took a step towards destroying the human species? As far-fetched and ridiculous as it sounds, public figures from Bill Gates to Elon Musk and Stephen Hawking, and going back to Alan Turing, have expressed concerns about the fate of humans in a world where machines outsmart them, with Musk saying AI was becoming more dangerous than nuclear warheads.

After all, humans don’t deal particularly well with less intelligent species, so who’s to say that computers, trained on data reflecting every aspect of human behavior, won’t one day “put their goals above ours,” as the legendary computer scientist Marvin Minsky once warned?

Refreshingly, there is some good news. More scientists are seeking to make deep learning systems more transparent and measurable, and that momentum must not stall. As these programs become increasingly influential in financial markets, social media, and supply chains, technology companies will need to start prioritizing AI safety over capability.

Last year, across the world’s major AI labs, about 100 full-time researchers were focused on building safe systems, according to the 2021 State of AI report produced annually by London-based venture capitalists Ian Hogarth and Nathan Benaich. Their report this year found that there are still only about 300 researchers working full-time on AI safety.

“It’s a very low number,” Hogarth said during a discussion with me on Twitter Spaces this week about the future threat of artificial intelligence. “Not only are very few people working to align these systems, but it’s a wild west.”

Hogarth was referring to the flurry of AI tools and research released over the past year by open-source groups, which argue that super-intelligent machines shouldn’t be built and controlled in secret by a handful of large companies, but created in the open. In August 2021, for example, the community-driven organization EleutherAI developed GPT-Neo, a public version of a powerful tool that could write realistic comments and essays on almost any topic. The original tool, called GPT-3, was developed by OpenAI, a company co-founded by Musk and largely funded by Microsoft Corp., which offers limited access to its powerful systems.

Then this year, several months after OpenAI wowed the AI community with a revolutionary image-generation system called DALL-E 2, a company called Stability AI released Stable Diffusion, its own open-source version of the tool, to the public for free.

One of the advantages of open-source software is that, by being open, it has a greater number of people constantly probing it for flaws. That’s why Linux has historically been one of the most secure operating systems available to the public.

But throwing powerful AI systems out into the open also raises the risk of their misuse. If AI is as potentially harmful as a virus or nuclear contamination, then perhaps it makes sense to concentrate its development. After all, viruses are scrutinized in biosafety labs and uranium is enriched in carefully contained environments. Research into viruses and nuclear power is overseen by regulation, but with governments failing to keep up with the rapid pace of AI, there are still no clear guidelines for its development.

“We almost have the worst of both worlds,” says Hogarth. AI is at risk of misuse when it is built in the open, but no one oversees what happens when it is created behind closed doors.

For now at least, it’s encouraging to see the spotlight growing on AI alignment, a burgeoning field focused on designing artificial intelligence systems that are “aligned” with human goals. Leading AI firms such as Alphabet Inc.’s DeepMind and OpenAI have multiple teams working on AI alignment, and many researchers from those companies have gone on to launch startups of their own, some of which focus on making artificial intelligence safe. They include San Francisco-based Anthropic, whose founding team spun out of OpenAI and which raised $580 million from investors earlier this year, and London-based Conjecture, which was recently backed by the founders of GitHub Inc., Stripe Inc. and FTX Trading Ltd.

Conjecture operates on the assumption that AI will reach parity with human intelligence within the next five years, and that its current trajectory will lead to disaster for the human species.

But when I asked Conjecture CEO Connor Leahy why AI might want to harm humans in the first place, he answered without hesitation. “Imagine people want to flood a valley to build a hydroelectric dam, and there’s an anthill in the valley,” he said. “That won’t stop people from building the dam, and the anthill will be flooded right away. At no point did any human even think about harming the ants. They just wanted more energy, and this was the most efficient way to achieve that goal. Similarly, autonomous AIs will need more energy, faster communication and more intelligence to achieve their goals.”

Leahy says that to avoid this dark future, the world needs a “portfolio of bets,” including scrutinizing deep learning algorithms to better understand how they make decisions, and trying to endow AI with more human-like reasoning.

Even if Leahy’s fears seem exaggerated, it is clear that artificial intelligence is not on a path that is perfectly aligned with human interests. Just look at some of the recent attempts to build chatbots. Microsoft abandoned its 2016 bot Tay, which learned from interacting with Twitter users, after it posted racist and sexually charged messages within hours of its launch. In August of this year, Meta Platforms Inc. released a chatbot, trained on public text from the internet, that claimed Donald Trump was still president.

No one knows whether AI will one day wreak havoc on financial markets or torpedo the food supply chain. But it could turn human beings against each other through social media, which is arguably already happening. The powerful artificial intelligence systems that recommend posts to people on Twitter Inc. and Facebook aim to maximize our engagement, which inevitably means serving up content that provokes outrage or spreads misinformation. When it comes to “AI alignment,” changing those incentives would be a good place to start.

More from Bloomberg Opinion:

• Tech’s Terrible, Terrible Week Told in 10 Charts: Tim Culpan

• The Wile E. Coyote Moment as Tech Races Off the Cliff: John Authers

• Microsoft’s AI art tool could be good: Parmy Olson

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous.”

More stories like this are available at bloomberg.com/opinion
