ChatGPT maker OpenAI will add an ID verification layer for access to certain AI models in its API to prevent misuse.
OpenAI plans to require organizations to complete an ID verification process before they can access certain AI models in the API. The company announced the process, called Verified Organization, last week via its support page. With it, OpenAI aims to curb unsafe use of its AI. Recently, it was revealed that OpenAI's models were being used to compose spam messages.
Verification
“We’re adding the verification process to limit unsafe use of AI while continuing to make advanced models available to the broader developer community,” the company stated.
The verification requires a valid, government-issued ID from one of the 200 countries supported by OpenAI's API, and an ID can verify only one organization every 90 days. According to the company, the process takes only a few minutes and carries no spending requirements.
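For developers, the practical effect is that API calls to gated models may start failing for organizations that have not completed verification. The sketch below, written against the official openai Python library, shows one way such a failure could surface in code; the model name used here is purely illustrative, and the assumption that unverified organizations receive a permission error is an assumption for illustration, not documented OpenAI behavior.

    # Minimal sketch using the official "openai" Python library (v1.x).
    # The model name "o3" and the assumption that unverified organizations
    # get a 403 Permission Denied response are illustrative only.
    from openai import OpenAI, PermissionDeniedError

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    try:
        response = client.chat.completions.create(
            model="o3",  # hypothetical gated model
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(response.choices[0].message.content)
    except PermissionDeniedError as err:
        # A 403 here could mean the organization has not completed
        # the Verified Organization process for this model.
        print("Access denied; verification may be required:", err)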
AI Spambot
“At OpenAI, we take our responsibility seriously to ensure that AI is both widely accessible and used safely. Unfortunately, a small minority of developers are intentionally using OpenAI APIs in violation of our usage policies,” OpenAI stated.
This new security step did not come out of nowhere. It was recently revealed that a spam bot used OpenAI's API to generate messages intended to flood websites with spam comments. According to cybersecurity company SentinelOne, the so-called AkiraBot has successfully spammed at least 80,000 websites, basing its messages on content generated with OpenAI's models.
“The use of LLM-generated content likely helps these messages bypass spam filters, as the spam content is different each time a message is generated. The framework also rotates which attacker-controlled domain is specified in the messages, further complicating spam filtering,” according to SentinelOne.
OpenAI has since disabled the API key used by AkiraBot. “We are continuing to investigate and will disable all associated assets. We take abuse seriously and are continuously improving our systems to detect misuse,” OpenAI said in a statement to SentinelOne.