OpenAI report alleges Chinese entities weaponizing ChatGPT for "authoritarian abuses"
By jacobthomas // 2025-10-12
  • OpenAI has publicly accused China-based actors, some with alleged government links, of using ChatGPT for "authoritarian abuses," including cyber espionage and developing social control tools.
  • The specific malicious activities involved creating proposals for social media monitoring systems and crafting phishing emails to target organizations like Taiwan's semiconductor industry and U.S. academic institutions.
  • Despite ChatGPT's official ban in China, users circumvented the block using VPNs to access the platform for these state-aligned misuse operations.
  • As part of its security efforts, OpenAI claims to have disrupted over 40 malicious networks, which also included actors from Russia and Korean-speaking groups.
  • This disclosure provides concrete evidence for Western security concerns about how authoritarian regimes can weaponize AI to suppress dissent and undermine global stability, intensifying the debate on AI ethics and regulation.
OpenAI has publicly accused China-based actors, some allegedly linked to government entities, of exploiting its ChatGPT platform for a range of "authoritarian abuses." The findings, detailed in the artificial intelligence (AI) company's latest threat report for 2025, paint a concerning picture of how advanced AI is being co-opted for state-level cyber espionage and social control.

The report details that these accounts, operating despite ChatGPT's official ban in China, used the chatbot for activities that directly violate OpenAI's policies against national security misuse. The alleged abuses were multifaceted, ranging from digital espionage to the development of domestic monitoring tools. As explained by Brighteon.AI's Enoch, "some users leveraged the AI's capabilities to generate sophisticated proposals for systems designed to monitor social media conversations, a tool that could significantly enhance state surveillance efforts."

In a more direct threat to international security, other accounts were implicated in cyber operations targeting critical industries and dissenting voices. Specific targets included Taiwan's vital semiconductor industry, U.S. academic institutions and political groups critical of the Chinese Communist Party (CCP). The methods were notably advanced: in some instances, the report notes, ChatGPT was used to craft convincing phishing emails in English, aimed at breaching the IT systems of these targeted organizations.

The growing global concern over the weaponization of AI

OpenAI's report sheds light on the persistent challenge of enforcing digital borders. While ChatGPT is blocked by China's extensive censorship apparatus, often called the "Great Firewall," users circumvent the ban by accessing Chinese-language versions of the app through virtual private networks (VPNs). This backdoor access has created a conduit for what OpenAI describes as state-aligned misuse.

The company directly linked these activities to the broader geopolitical context, stating: "Our disruption of ChatGPT accounts used by individuals apparently linked to Chinese government entities shines some light on the current state of AI usage in this authoritarian setting."

The threat report also identified malicious cyber operations conducted by Russian- and Korean-speaking users. While these were not directly tied to government entities, OpenAI suggested some of the users may have been associated with state-backed criminal groups. In total, as part of its ongoing security efforts, OpenAI claims to have disrupted more than 40 such malicious networks since it began publishing public threat reports in February 2024.

This disclosure from a leading AI developer arrives amid growing global concern over the weaponization of artificial intelligence. It provides concrete evidence supporting long-held fears in Western security circles that authoritarian regimes could harness cutting-edge technology to suppress dissent, conduct espionage and undermine global stability, forcing a difficult new chapter in the conversation about AI ethics and regulation.

Watch this video discussing OpenAI's warning on AI misinformation. This video is from the Trending News channel on Brighteon.com.

Sources include:

The-Independent.com

TheNationalPulse.com

Brighteon.AI

Brighteon.com