OpenAI, a leading artificial intelligence company, has found itself at the center of controversy after revealing that its tools were used in influence campaigns by both Russian and Chinese entities. The company, known for advanced A.I. technology that can write social media posts, generate photorealistic images, and even produce computer code, disclosed in a recent report that its tools were used in campaigns aimed at spreading misinformation and propaganda.
One of the campaigns, dubbed Doppelganger, used OpenAI’s technology to generate anti-Ukraine comments in multiple languages, including English, French, German, Italian, and Polish. These comments were then posted on various platforms to influence public opinion on the conflict in Ukraine. Additionally, the tools were used to translate and edit articles supporting Russia’s stance in the war into English and French, as well as convert anti-Ukraine news articles into Facebook posts.
In another instance, OpenAI revealed that its tools were employed in a previously unknown Russian campaign targeting individuals in Ukraine, Moldova, the Baltic States, and the United States through the Telegram messaging service. The campaign generated comments in Russian and English about the war in Ukraine, Moldovan politics, and American politics. Furthermore, the A.I. technology was used to debug computer code designed to automatically post information to Telegram.
Despite the use of A.I. technology in these campaigns, OpenAI noted that the political comments generated minimal engagement and were at times unsophisticated. Some posts were clearly identifiable as A.I.-generated, while others displayed poor grammar, leading OpenAI to dub the Telegram effort "Bad Grammar."
Meanwhile, the Chinese campaign known as Spamouflage used OpenAI's technology to debug code, analyze social media, and generate posts disparaging critics of the Chinese government. The revelation that OpenAI's tools were exploited in these influence campaigns raises concerns about the misuse of advanced A.I. technology for malicious purposes and underscores the need for greater oversight and regulation in the field.