Facepalm: Major AI companies have shown how irresponsible and ruthless they can be in leveraging machine learning algorithms to generate financial gains for board members and shareholders. Now, these same companies are asking the entire tech world to trust them to act responsibly when truly dangerous AI models are eventually developed.
Some of the biggest companies working on AI algorithms and services have signed a new voluntary agreement to promote AI safety by making their operations more transparent and trustworthy. The agreement, introduced ahead of the recent AI Seoul Summit, includes no enforceable measures to rein in unsafe AI services, but it was seemingly enough to satisfy the UK and South Korean governments.
The new agreement involves tech and AI giants including Microsoft, OpenAI, xAI (Elon Musk's venture behind the Grok chatbot), Google, Amazon, Meta, and the Chinese company Zhipu AI. All parties will now outline and publish their plans to classify AI-related risks, and they are apparently willing to refrain from developing models that could have severe effects on society.
The agreement follows previous commitments on AI safety approved by international organizations and 28 countries during the AI Safety Summit hosted by the United Kingdom in November 2023. These commitments, known as the Bletchley Declaration, called for international cooperation to manage AI-related risks and potential regulation of the most powerful AI systems (Frontier AI).
According to UK Prime Minister Rishi Sunak, the new commitments should assure the world that leading AI companies “will provide transparency and accountability” in their plans to create safe AI algorithms. Sunak stated that the agreement could serve as the new “global standard” for AI safety, demonstrating the path forward to reap the benefits of this powerful, “transformative” technology.
AI companies must now define the “thresholds” beyond which Frontier AI systems would pose a risk unless proper mitigations are deployed, and describe how those mitigations will be implemented. The agreements emphasize collaboration and transparency. According to UK representatives, the Bletchley Declaration has been working well so far, and the new commitments will continue to “pay dividends.”
The companies now trusted to protect the world against AI risks are the same organizations that have repeatedly shown they shouldn’t be trusted at all. Microsoft-backed OpenAI asked Scarlett Johansson for permission to use her voice for its latest ChatGPT assistant and, when she declined, released a voice that sounded remarkably like hers anyway. Researchers have also shown that chatbots can be incredibly effective malware-spreading machines, even without “Frontier AI” models.