Microsoft accuses group of developing tool to abuse its AI service in new lawsuit | TechCrunch

Microsoft Files Lawsuit Against Group for Abusing AI Service

Microsoft has sued a group for allegedly creating tools to exploit its AI services


Virginia: Microsoft is taking legal action against a group it claims has been misusing its AI services, alleging the group developed tools to bypass the safety guardrails in its cloud AI products.

In a lawsuit filed in December, Microsoft named ten unidentified defendants who allegedly used stolen customer credentials and custom-built software to access the Azure OpenAI Service, Microsoft's managed offering powered by OpenAI's technology.

According to the complaint, the defendants violated several laws, including the Computer Fraud and Abuse Act, by accessing Microsoft's software and servers to create harmful content. Microsoft did not specify what that content was.

Microsoft is seeking damages and other relief. The company says it discovered in July 2024 that API keys belonging to some of its customers were being used to generate content that violated the service's acceptable use policy.

Those API keys were stolen from paying customers. Microsoft's complaint states that exactly how the defendants obtained the keys is unknown, but that they appear to have engaged in a pattern of systematic key theft.
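To see why a stolen API key is all an attacker needs, consider how a client authenticates to Azure OpenAI's REST interface: requests carry the key in an `api-key` header, so the service has no way to distinguish the legitimate customer from anyone else holding the key. The sketch below is purely illustrative; the resource name, deployment name, and API version are hypothetical placeholders, not details from the case, and the request is built but never sent.

```python
import json
import urllib.request

# Hypothetical placeholder values, for illustration only.
ENDPOINT = "https://example-resource.openai.azure.com"
DEPLOYMENT = "dall-e-3"
API_VERSION = "2024-02-01"
API_KEY = "stolen-or-legitimate-key"  # the service cannot tell which it is


def build_image_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) an Azure OpenAI image-generation request.

    Authentication is a bare `api-key` header: whoever holds a
    customer's key can submit requests billed to that customer.
    """
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/images/generations?api-version={API_VERSION}")
    body = json.dumps({"prompt": prompt, "n": 1}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )


req = build_image_request("a watercolor of a lighthouse")
# urllib capitalizes stored header names, hence "Api-key" here.
print(req.get_header("Api-key"))
```

Nothing in the request itself identifies the caller beyond the key, which is why Microsoft's complaint centers on the theft of the keys rather than on any break-in to its infrastructure.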

The lawsuit claims the defendants used stolen API keys belonging to U.S. customers to run a "hacking-as-a-service" scheme. They built a tool called de3u that let users generate images with DALL-E without writing any code.

De3u was also designed to help users bypass Microsoft's content filters. A screenshot of the tool was included in the complaint.

The code for de3u was hosted on GitHub but is no longer available. Microsoft has since been authorized to seize a website that was central to the defendants' operation, which should allow the company to gather further evidence.

Microsoft also says it has implemented countermeasures to protect the Azure OpenAI Service from similar activity in the future.
