Major tech companies sign agreement to combat AI’s use in election interference


In a nutshell: Several of the biggest tech companies signed an accord to fight AI interference in elections. The pledge aims to root out deepfake and generative AI content designed to influence or interfere with citizens’ democratic right to vote.

2024 will be a big year for elections worldwide, with an estimated 40 countries and 4 billion people heading to the polls to vote for their leaders and representatives. It is also the year in which AI has matured to a dangerous level where election interference is concerned: the technology can now create or manipulate images, audio, and video realistically enough to convince the average person that what they are seeing or hearing is real.

To tackle the problem, 20 major companies, including Amazon, Google, Meta, and X, took time at the Munich Security Conference on Friday to sign the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” The agreement is a voluntary pledge setting out “actions to advance seven principal goals” that the signatories will seek to uphold, including prevention, detection, and raising public awareness.

“The intentional and undisclosed generation and distribution of Deceptive AI Election Content can deceive the public in ways that jeopardize the integrity of electoral processes,” the collective agreement reads. “For the purpose of this accord, Deceptive AI Election Content consists of convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”

The accord also seeks to assuage fears of censorship or curtailed free expression. The online operators and platform holders promise to be mindful of the context in which information is presented before removing it; the intent is to safeguard artistic, satirical, and political expression, not suppress it.

The accord is not without limitations. It is a voluntary agreement rather than a binding regulatory framework, so there are no repercussions for failing to act against harmful and misleading content. The companies have not made clear how they would be held accountable should their platforms be used to spread misinformation. Instead, the agreement offers somewhat broad and fuzzy ‘goals’ for what the firms seek to achieve.

Perhaps the greatest irony of this news is that roughly half of the signatories are developing, or assisting in the development of, the very generative AI tools that bad actors will no doubt use to create precisely the kinds of deceptive content these companies are proposing to fight. It’s analogous to setting a building on fire only to play the hero by putting out the flames.

That said, one thing the accord does commit the companies to is “providing transparency to the public,” so there is an avenue, at least in principle, for some level of public scrutiny of the signatories’ efforts. Whether that scrutiny takes any meaningful form remains to be seen.

Image credit: MSC/Michael Kuhlmann


