
Tech Companies and The Horror Of AI 2024

That’s the way tech works now: we move fast and break things. Then we pick up the pieces.


AI May Be Useful – But Also Scary

The most recent examples of generative AI video have wowed people with their realism, highlighting the threat we now face from artificial content: fabricated but convincing scenes that could influence people’s opinions and subsequent actions.

Consider, for example, how they vote.

With this in mind, executives from nearly every major tech giant signed a new agreement late last week at the 2024 Munich Security Conference to take “reasonable precautions” to prevent artificial intelligence tools from disrupting democratic elections.

As stated in the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”:

“2024 will bring more elections to more people than any year in history, with more than 40 countries and more than four billion people choosing their leaders and representatives through the right to vote. At the same time, the rapid development of artificial intelligence, or AI, is creating new opportunities as well as challenges for the democratic process. All of society will have to lean into the opportunities afforded by AI and to take new steps together to protect elections and the electoral process during this exceptional year.”

The Accord’s Seven Areas Of Attention

Executives from tech giants including Google, Meta, Microsoft, OpenAI, X, and TikTok have all signed on to the new accord, which aims to foster deeper collaboration and coordination to combat AI-generated fakes before they have an impact.

The pact lays out seven important areas of attention, which all members have agreed to, in principle, as key measures.

The initiative’s key benefit is each company’s willingness to collaborate, sharing best practices and “explor[ing] new pathways to share best-in-class tools and/or technical signals about Deceptive AI Election Content in response to incidents”.

The agreement also states that each will “engage with a diverse set of global civil society organizations and academics” in order to contribute to a better understanding of the global risk picture.

It’s a positive step, but the accord is non-binding, more a goodwill gesture from each company to collaborate on the best solutions. As a result, it specifies neither definite steps to be taken nor penalties for failing to take them.

Ideally, however, it lays the groundwork for greater joint action to prevent deceptive AI content from having a substantial influence.

Though the influence is relative.

AI Deepfake Manipulation

For example, in the recent Indonesian election, several AI deepfakes were used to sway voters, including a video depiction of deceased leader Suharto aimed at encouraging support, as well as cartoonish renderings of some politicians to soften their public image.

These were obviously AI-generated from the start, and no one was going to be duped into thinking they were real footage of how the candidates looked, or that Suharto had returned from the dead. Yet even with that understanding, the influence of such content can be enormous, demonstrating its power over perception, even if it is later removed, labeled, and so on.

This could be the true risk. If an AI-generated image of Joe Biden or Donald Trump has enough resonance, its origin may be irrelevant, since it may still sway people based on what it depicts, whether it is genuine or not.

Perception is important, and careful deployment of deepfakes will have an influence and sway some votes, regardless of safeguards and precautions.

Which is a risk we now have to accept, given that such technologies are already widely available, and, like with social media, we’ll be judging the consequences in retrospect rather than closing gaps beforehand.

Because that’s how technology works: we move fast and break things. Then we pick up the pieces.

Stay updated on all of the latest news by subscribing to the ITP Live newsletter below or by enabling push notifications.