OpenAI’s latest AI model, GPT-4o, has been assessed for its potential to influence political opinions through generated text, with the results showing a “medium risk” in this area. The company’s “System Card” report, released on August 8, detailed its safety evaluations of GPT-4o, rating the model low risk in cybersecurity, biological threats, and model autonomy.
The model’s ability to persuade through text, however, was rated a medium risk, raising concerns about its potential impact on political opinions. While persuasion through GPT-4o’s voice output was judged low risk, its written output proved notably more persuasive.
Notably, the evaluation measured the model’s persuasive abilities rather than political bias in its output. In OpenAI’s tests, text generated by GPT-4o was more persuasive than that of professional human writers in three out of twelve cases.
As OpenAI continues to develop GPT-4o’s capacity for generating engaging content, its potential for political influence calls for careful monitoring and regulation. In related news, co-founder John Schulman has left the company to join rival Anthropic, leaving only three of OpenAI’s eleven founders at the firm.
The assessment of GPT-4o’s potential for political influence underscores the importance of understanding and managing AI’s impact on society. As these systems continue to advance, weighing their ethical implications and ensuring their responsible use remains essential.