OpenAI, the company behind ChatGPT, announced that it has begun taking action against accounts linked to Iran that were using its tools to generate content about the US presidential election. The development has drawn significant attention, particularly because Donald Trump's presidential campaign recently alleged that Iranian hackers had breached its security. According to the company, the Iran-linked accounts were using its generative AI capabilities to produce and spread false material.
OpenAI Bans Iran-Linked Accounts For Spreading False Information
In a recent statement, OpenAI said it is working to counter foreign influence operations that misuse its artificial intelligence tools. The company reported that it had detected and banned a cluster of accounts tied to an Iranian influence campaign known as "Storm-2035."
According to the report, the accounts used ChatGPT to create and spread false information on a range of topics, with the US election and its campaigns among the most prominent. The company said the operation focused on producing long-form articles and short social media comments that were then distributed across multiple channels.
The content generated by these accounts covered a variety of subjects, including the upcoming US election, Israel's participation in the Olympics, and the conflict in Gaza.
OpenAI said the operation had little impact, however, as most of the posts received little to no engagement. The company rated it as a low-level threat on the Breakout Scale, a framework published by the Brookings Institution for assessing the impact of covert influence operations.
Notably, OpenAI's research found that the operation posed as both news outlets and ordinary social media users in order to reach conservative and progressive audiences alike. To appear more authentic, some of the accounts even copied comments from real users.
OpenAI Remains Committed To Fighting Misinformation
OpenAI's latest move underscores the growing importance of AI safety, particularly in tracking and countering foreign interference in political processes. The company has stepped up its efforts to detect such operations, using its own AI models to help identify and disrupt potential threats.
The crackdown also appears to fit into the company's broader commitment to transparency and the ethical use of AI. The timing of the announcement is particularly notable, coming just one week after Donald Trump's campaign disclosed a security breach that it blamed on Iranian hackers.