Popular AI Tools Contain Critical, Sometimes Unpatched, Bugs
Hackers Can Target Vulnerable Infrastructure to Take Over AI Models, Host Systems
Nearly a dozen critical vulnerabilities in the technical infrastructure that companies use to build artificial intelligence models could allow hackers to access the tools and use them as gateways into the systems in which they are housed.
Six of the 15 disclosed vulnerabilities remain unpatched.
These vulnerabilities are in tools that are downloaded hundreds of thousands to millions of times per month and are used to host, deploy and share large language models and machine learning platforms, said machine learning security firm Protect AI.
"Many OSS tools, frameworks and artifacts come out of the box with vulnerabilities that can lead directly to complete system takeovers such as unauthenticated remote code execution or local file inclusion vulnerabilities. What does this mean for you? You are likely at risk of theft of models, data and credentials," the company said on Thursday.
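The vulnerability classes the researchers name are well understood. As a generic illustration only - this is a hypothetical sketch, not code from any of the affected projects - a local file inclusion flaw typically arises when a server joins user-supplied input into a filesystem path without validation, and the fix is to resolve the path and reject anything that escapes the intended directory:

```python
# Hypothetical example of a local file inclusion (LFI) flaw and its
# mitigation. BASE_DIR and the function names are illustrative only.
import os

BASE_DIR = "/srv/models"  # assumed directory of hosted model artifacts


def serve_artifact_unsafe(requested_path: str) -> str:
    # Vulnerable: user input is joined directly into the path, so a
    # request like "../../etc/passwd" escapes BASE_DIR and exposes
    # arbitrary files on the host.
    return os.path.join(BASE_DIR, requested_path)


def serve_artifact_safe(requested_path: str) -> str:
    # Mitigation: resolve the final path and refuse anything that
    # falls outside the artifact directory.
    root = os.path.realpath(BASE_DIR)
    full = os.path.realpath(os.path.join(BASE_DIR, requested_path))
    if not full.startswith(root + os.sep):
        raise PermissionError("path traversal blocked")
    return full
```

In the unsafe version, the traversal sequence resolves to a file outside the model directory; the safe version raises an error instead of serving it.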
Among the affected platforms are Ray, which is used to train machine learning models; MLflow, a machine learning lifecycle platform; ModelDB, a machine learning management platform; and H2O-3, an open-source machine learning platform. These platforms contain 12 of the 15 disclosed bugs.
The vulnerabilities allow attackers to gain unauthorized access to the AI models, steal credentials and data, and take over the servers on which the models are hosted.
In the past year, organizations and their adversaries have scrambled to deploy AI, especially generative AI, into their operations.
"The AI industry has a security problem, and it's not in the prompts you type into chatbots," Protect AI researchers Dan McInerney and Marcello Salvati said.
Protect AI said it disclosed the vulnerabilities to vendors 45 days before publishing Thursday's advisory and shared workarounds for the six unpatched bugs, four of which are critical.