
An Artificial Intelligence Researcher Confronts Election Deepfakes



For nearly 30 years, Oren Etzioni was among the most optimistic of artificial intelligence researchers. But in 2019, Dr. Etzioni, a University of Washington professor and founding chief executive of the Allen Institute for A.I., became one of the first researchers to warn that a new breed of A.I. would accelerate the spread of disinformation online. By the middle of last year, he said he was distressed that A.I.-generated deepfakes could swing a major election. In response to that growing threat, he founded a nonprofit, TrueMedia.org, in January.

On Tuesday, TrueMedia.org released free tools designed to identify digital disinformation, with the aim of putting them in the hands of journalists, fact checkers, and anyone else trying to discern what is real online. The tools detect fake and doctored images, audio, and video: users submit links to media files, and the service quickly assesses whether they should be trusted.

Dr. Etzioni views these tools as an improvement over the current patchwork defense against misleading or deceptive A.I. content. However, he remains deeply concerned about the potential for a “tsunami of misinformation” as billions of people worldwide prepare to vote in elections this year.

The threat of A.I.-generated deepfakes is increasingly alarming: fake voice calls impersonating President Biden, fake images and audio of celebrities such as Taylor Swift, and even fabricated interviews designed to manipulate public opinion. Detecting such disinformation is already challenging, and it will only get harder as the tech industry releases ever more powerful A.I. systems capable of generating convincing deepfakes.

Many artificial intelligence researchers are sounding the alarm and calling for laws that would hold the developers and distributors of A.I. audio and visual services accountable for harmful deepfakes. The tech industry is also taking steps: companies including Anthropic, Google, Meta, and OpenAI have announced plans to limit or label election-related uses of their A.I. services.

Despite efforts to combat deepfakes, Dr. Etzioni acknowledges the limitations of detection tools in the face of rapidly advancing generative A.I. technologies. He emphasizes the need for cooperation among government regulators, A.I. developers, and tech giants to address the spread of disinformation online.

As the threat of deepfakes continues to grow, the battle against misinformation remains an ongoing challenge that requires a multi-faceted approach to safeguard the integrity of information online.
