The Elon Musk-signed “open letter” calling for an immediate moratorium on advanced AI development has sparked significant controversy. On March 22nd, the Future of Life Institute published the open letter, carrying around 2,000 signatures, which warned that emerging AI projects could pose serious future threats to society and technology.
The letter called for a six-month pause on the development of AI systems more powerful than GPT-4, the successor to the model behind ChatGPT, which launched in November 2022.
Developed by Microsoft-backed OpenAI, GPT-4 can already hold human-like conversations, produce quick summaries, and compose lyrics.
Many companies have rushed to launch similar products and services in the wake of OpenAI’s initiative. This concerned Musk and other prominent figures such as Yoshua Bengio and Steve Wozniak, who signed the letter out of a belief that advanced AI models could lead to a loss of control over our civilization.
The Debated Citations
The signatories claim that systems with human-competitive intelligence pose significant dangers to humanity. To support this claim, they cited 12 pieces of research from experts, including current and former employees of Google, OpenAI, and Alphabet subsidiary DeepMind.
“This pause should be used by AI labs and independent experts to jointly develop and implement a set of shared safety protocols for advanced AI design. These protocols will be rigorously audited and monitored by independent outside experts.” — Open Letter
Four experts cited in the letter later claimed that the Future of Life Institute, the think tank behind the campaign, had misused their research to back unsupported claims. They also said the letter lacked verification protocols when it first launched, gathering signatures attributed to people who had never actually signed it.
These researchers maintain that the risks described in their work are grounded in how such models already behave, and could pose a real threat to society.
The fabricated signatures included Yann LeCun, chief AI scientist at Meta, and Chinese President Xi Jinping. LeCun took to Twitter to clarify that he did not support the coordinated effort by Musk and others. The Future of Life Institute is also reportedly funded largely by the Musk Foundation, and critics argue it may therefore prioritize longtermism and apocalyptic scenarios over the real, present-day concerns associated with AI and human-competitive intelligence.
The letter cited the paper “On the Dangers of Stochastic Parrots,” co-authored by Margaret Mitchell, Timnit Gebru, and Angelina McMillan-Major. Mitchell and her co-authors previously worked at Google; she is now chief ethics scientist at the AI startup Hugging Face. She strongly condemned the letter.
Mitchell said it is unclear what the signatories meant by “more powerful than GPT-4.” She also expressed concern that the letter treats several questionable ideas as settled and pushes a narrative about AI that benefits only FLI’s supporters.
Another Refusal and FLI’s Response
Mitchell and her co-authors are not the only ones to raise concerns. Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also objected, as the letter cited her work without her consent. Her paper, published last year, highlights the serious risks associated with widespread AI use.
Dori-Hacohen argues that AI does not need human-like intelligence to amplify those risks.
Her research focuses on more immediate issues, such as AI’s influence on decision-making around existential threats like nuclear war and climate change.
Max Tegmark, president of FLI, responded that AI’s short- and long-term risks are concerning and that society should take them seriously. He clarified that citing someone’s work does not endorse that person’s entire thought process, only the specific point referenced.