Thursday, November 14, 2024

OpenAI, Microsoft close malicious accounts associated with China, Russia, and more

FILE - In this photo taken on Nov. 21, 2023 in New York, the OpenAI logo appears on a mobile phone in front of a screen showing part of the company website. Negotiators are expected to meet this week to hammer out the details of the European Union's artificial intelligence rules, a process that will affect how the systems that power general-purpose AI services such as OpenAI's ChatGPT and Google's Bard chatbot will be managed. It has been bogged down in an escalating last-minute battle over the issue.  (AP Photo/Peter Morgan, File)


(NewsNation) — OpenAI and Microsoft Threat Intelligence have shut down accounts associated with five state-affiliated threat actors linked to China, Iran, North Korea, and Russia that attempted to use AI for malicious purposes, the companies announced Wednesday.

OpenAI, the creator of ChatGPT, said in an official statement that it had “suspended accounts associated with state-affiliated threat actors,” adding, “Our findings indicate that our model provides only limited and incremental capabilities for malicious cybersecurity tasks.”

According to the OpenAI statement, the suspended accounts include the China-linked Charcoal Typhoon and Salmon Typhoon, the Iran-linked Crimson Sandstorm, the North Korea-linked Emerald Sleet, and the Russia-linked Forest Blizzard.

Microsoft Threat Intelligence tracks more than 300 unique threat actors, including 160 nation-state actors and 50 ransomware groups.

Online attackers’ motivations vary, but their efforts often involve similar activities: researching a potential victim’s industry, location, and relationships; improving software scripting and malware development; and, Microsoft says, getting assistance with learning and using a target’s native language.

The groups at the center of Wednesday’s announcement used OpenAI services to research companies and cybersecurity tools, debug code, and possibly create content for phishing campaigns.

“Microsoft and OpenAI have not yet observed any particularly novel or unique AI-based attacks or exploitation techniques resulting from the use of AI by threat actors,” Microsoft said in a blog post published Wednesday. “Microsoft and our partners continue to study this situation closely.”

OpenAI and the FBI declined further comment.

NewsNation has reached out to Microsoft for further comment, but has not yet received a response. The Department of Homeland Security did not immediately respond to an email requesting information.

In June, seven major tech companies, including Microsoft and OpenAI, agreed to follow a set of White House AI safety guidelines.

These voluntary commitments include conducting external security testing of AI systems before release and sharing information about managing AI risks across industry, as well as with government, academia, and the public.

The companies also pledged to report vulnerabilities in their products and invest in cybersecurity insider threat protection.


