
Free AI Programs Prone to Security Risks, Researchers Say

AI technology may include flaws that hackers can exploit, according to Robust Intelligence. The company released a free tool for scanning AI models for vulnerabilities.

(Bloomberg) -- Companies rushing to adopt hot new types of artificial intelligence should exercise caution when using open-source versions of the technology, some of which may not work as advertised or include flaws that hackers can exploit, security researchers say.

There are few ways to know in advance if a particular AI model — a program made up of algorithms that can do such things as generate text, images and predictions — is safe, said Hyrum Anderson, distinguished engineer at Robust Intelligence Inc., a machine learning security company that lists the US Defense Department as a client.

Anderson said he found that half the publicly available models for classifying images failed 40% of his tests. The goal was to determine whether a malicious actor could alter the outputs of AI programs in a manner that could constitute a security risk or provide incorrect information.
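Tests like these typically probe whether tiny, deliberately crafted changes to an input can flip a model’s prediction. The sketch below shows one common version of that probe, a gradient-based adversarial perturbation; it is an illustration only, not Robust Intelligence’s actual test suite, and the model and input are stand-ins.

```python
# Illustrative sketch of an adversarial-robustness probe (FGSM-style),
# not Robust Intelligence's test suite. Uses a public pretrained model
# and a random stand-in image; a real test would use labeled domain data.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Stand-in "image": random pixels in [0, 1], gradient tracking enabled.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
original_label = model(x).argmax(dim=1)

# Gradient of the loss with respect to the input pixels...
loss = F.cross_entropy(model(x), original_label)
loss.backward()

# ...then nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

adversarial_label = model(x_adv).argmax(dim=1)
print("prediction changed by small perturbation:",
      (original_label != adversarial_label).item())
```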

Often, models use file types that are particularly prone to security flaws, Anderson said. That’s an issue because so many companies, rather than creating their own models, are grabbing them from publicly available sources without fully understanding the underlying technology. Ninety percent of the companies Robust Intelligence works with download models from Hugging Face, a repository of AI models, he said.

“Everybody is using somebody else’s model,” Anderson said.

Robust Intelligence is announcing a new, free tool on Wednesday that scans AI models for security flaws, in addition to testing whether they’re as effective as advertised and whether they have issues around bias. The tool uses information from an AI risk database that Robust Intelligence has collected. The idea is that companies that want to use a publicly available AI program can check it with the tool to assess whether it is safe and effective, and use that data to help them select the best options.

The tool is modeled on VirusTotal, a product owned by Alphabet Inc.'s Google that combines a myriad of different virus scanning products and lets users check for problems that their own antivirus software might have missed. Robust Intelligence is hoping its tool will also enable it to crowdsource other reports of bugs from the broader security community, Anderson said.

With the explosive popularity of OpenAI’s ChatGPT chatbot and its Dall-E program for generating images, corporate customers and internet app developers are rushing to add so-called generative AI capabilities into their business processes and products. One option is paying OpenAI for access to its tools, but many customers are opting for free open-source versions available on the internet. Open-source tools also let users download the model itself, whereas OpenAI keeps its models confidential and instead grants access to use them.

“This is democratizing AI like nothing else,” Anderson said of the open-source tools. But he added that there’s a “scary” element to it, which he demonstrated by showing an AI model he put on Hugging Face. When the model is downloaded, it runs code on the user’s machine in the background without permission. That’s a security risk because a bad actor could use such code execution to run something dangerous or to take over a machine.
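The article does not say exactly how Anderson’s demonstration model triggers that code, but one widely known path involves Python’s pickle serialization format, which many model files are built on. The sketch below is a generic illustration of that class of attack under that assumption, not a reconstruction of his demo; the file name is hypothetical.

```python
# Illustrative sketch only: how a pickle-based model file can run
# attacker-controlled code when it is loaded (assumption; the article
# does not specify the mechanism of Anderson's demo).
import os
import pickle


class MaliciousModel:
    # pickle calls __reduce__ during serialization; whatever callable it
    # returns is executed later, when the file is unpickled on the
    # victim's machine.
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code just ran on your machine'",))


# The "model author" writes the booby-trapped file (hypothetical name)...
with open("model.bin", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# ...and merely loading it executes the embedded command.
with open("model.bin", "rb") as f:
    pickle.load(f)
```

Scanning tools of the kind described in this article aim to flag such embedded payloads before a downloaded model file is ever loaded.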

Hugging Face is catching many of the issues, he said, but not all of them, and “people should know that when you download stuff on the internet, stuff like that can happen.”

There can also be flaws in OpenAI’s products. Last week, OpenAI temporarily shut down ChatGPT to fix a bug that allowed some users to see the titles of others’ chat histories. The AI research lab later said the security hole had exposed some payment data and personal information, too.

Some of the biggest companies, such as Microsoft Corp. and Google, post models on Hugging Face, and large companies also use the AI programs hosted there. Hugging Face has been working with Microsoft’s threat intelligence team on a tool that scans AI programs for a particular kind of security threat, and Anderson has been sharing his findings with Hugging Face.

In addition, Hugging Face is running an antivirus program to look for issues in the AI programs it hosts, said Luc Georges, a machine learning and software engineer working on security at Hugging Face. But there are other threats that are harder to catch — like whether a model does what it’s supposed to do, or advertises a legitimate use but turns out to be nefarious.

So far, Hugging Face hasn’t seen any attacks, Georges said.

“What we value at Hugging Face is open software, open collaboration and transparency,” he said. “There’s still a lot of work to do.”

Microsoft reached out to Hugging Face after its AI red team found some models on the site were vulnerable to hackers, said Ram Shankar Siva Kumar, who leads the team. The idea was to look for ways that a hacker with conventional tools could exploit AI programs, he said.

“When we shared this with Hugging Face we were all like, ‘Thank God this attack has not been seen in the wild,’ ” he said. Still, companies that might use these models aren’t paying that much attention, he said. “When we speak to customers they are not even thinking about attacks on machine learning systems.”

For conventional software, there’s a whole ecosystem of government, academia and companies that’s developed over the past two decades to find, report and share bugs. That system is in its infancy in dealing with AI models, and conventional tools that scan for security issues in regular software aren’t capable of finding them buried in an AI program, Anderson said.

New companies will form to handle some of these challenges, like cleaning the data used in models for any security issues, while model-creators will have to get more sophisticated, said Sam Crowther, founder and chief executive officer of Kasada Pty Ltd., which has been tracking misuse of OpenAI’s ChatGPT. 

“This will spawn a generation of companies that specialize in cleaning input that goes into models. And it’s also going to spur people who are building the models to get better at understanding how they may be abused, and ‘How can I teach them to not be abused in that way?’” he said. “Right now it’s just too easy to trick them.”
