
Microsoft: Look to Supply Chains, Zero Trust for AI Security

Tech Giant Shares Major Threats, Potential Safeguards for Firms Using AI

The rapid rise of artificial intelligence technologies poses new risks. Enterprises using AI must regularly scan for prompt injection attacks, implement transparency in the supply chain and reinforce built-in software controls to meet their security needs, Microsoft said.


The technology giant, which is incorporating generative AI tools into its software products for home and office, advised enterprises in a recent report to plan for AI threats by defining specific use cases and employee access controls.

Using conditional access policies will automatically protect tenants based on risk signals, licensing and usage, the company said. "Risk leaders and CISOs should regularly determine whether use cases and policies are adequate, or if they must change as objectives and learnings evolve," the report says.
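The report does not include configuration details, but as an illustration, risk-based conditional access in a Microsoft Entra tenant can be created through the Microsoft Graph API. The sketch below is a minimal, hypothetical example: the access token, group ID and application ID are placeholders, and the policy shown (requiring MFA on medium- or high-risk sign-ins to AI tools) is one possible reading of "protecting tenants based on risk signals."

```python
import requests

# Hypothetical illustration: create a risk-based conditional access policy
# via the Microsoft Graph API. The token, group and app IDs are placeholders.
GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
ACCESS_TOKEN = "<token-with-Policy.ReadWrite.ConditionalAccess-scope>"

policy = {
    "displayName": "Require MFA for risky sign-ins to AI tools",
    "state": "enabledForReportingButNotEnforced",  # audit mode before enforcing
    "conditions": {
        # Trigger on identity protection risk signals
        "signInRiskLevels": ["medium", "high"],
        "users": {"includeGroups": ["<ai-users-group-id>"]},
        "applications": {"includeApplications": ["<ai-app-id>"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa"],  # require multifactor authentication
    },
}

resp = requests.post(
    GRAPH_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```

Starting in report-only mode lets risk leaders confirm the policy matches real usage before enforcing it, consistent with Microsoft's advice to revisit policies as objectives and learnings evolve.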

AI can help organizations defend against cyberattacks faster and more efficiently in the areas of threat detection and incident response and can potentially help address the massive cybersecurity talent shortage, Microsoft said. Adversaries use the technology as part of their exploits. "It's never been more critical for us to design, deploy and use AI securely," the report says.

Fraud exemplifies the dual nature of AI: attackers can train a model on a voice sample as short as three seconds to make it sound like anyone. "Even something as innocuous as your voicemail greeting can be used to get a sufficient sampling," Microsoft said, adding that this poses a threat to identity proofing.

"It's crucial that we understand how malicious actors use AI to undermine long-standing identity proofing systems so we can tackle complex fraud cases and other emerging social engineering threats that obscure identities," the company said.

Fine-tuning large language models to regularly update their understanding of malicious inputs and edge cases can help prevent prompt injection attacks, which OWASP lists among the most critical vulnerabilities in large language models. Companies must also use context-aware filtering and output encoding to prevent prompt manipulation, and log employee interactions with LLMs to analyze potential threats.
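Microsoft describes these controls only at a high level; the sketch below is one illustrative way to combine them in Python. The filtering heuristics and log format are assumptions for demonstration, not a complete defense against prompt injection.

```python
import html
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(filename="llm_audit.log", level=logging.INFO)

# Assumed deny-list heuristics for context-aware input filtering;
# real deployments layer classifiers and allow-lists on top of this.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (the )?(system|hidden) prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def filter_prompt(user_id: str, prompt: str) -> str:
    """Reject prompts matching known injection patterns; log every attempt."""
    suspicious = any(p.search(prompt) for p in INJECTION_PATTERNS)
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "suspicious": suspicious,
        "prompt": prompt,
    }))
    if suspicious:
        raise ValueError("Prompt blocked: possible injection attempt")
    return prompt

def encode_output(llm_response: str) -> str:
    """Output encoding: escape HTML so model output cannot inject markup."""
    return html.escape(llm_response)

# Usage: every employee interaction is filtered, logged and encoded.
safe_prompt = filter_prompt("alice@example.com", "Summarize this contract.")
rendered = encode_output("<b>Summary</b>: ...")  # -> &lt;b&gt;Summary&lt;/b&gt;...
```

The audit log gives security teams the interaction history needed to analyze attempted manipulation over time, while output encoding ensures that even a successfully manipulated response cannot execute in a downstream web page.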

Companies using AI should assess the data touchpoints of LLMs, including those of third-party suppliers; set up cross-functional cyber risk teams to share learnings; and close gaps. That includes policies spelling out the uses and risks of AI, which designated AI tools are approved for enterprise use, and points of contact for access and information.
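The report does not prescribe a format for such policies. As one hedged illustration, an approved-tool registry could be kept as structured data that both governance teams and automated controls can read; the fields, tool names and classification tiers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ApprovedAITool:
    """One entry in a hypothetical enterprise AI tool registry."""
    name: str
    approved_use_cases: list[str]
    data_classification_limit: str  # highest data class the tool may touch
    contact: str                    # point of contact for access questions

# Hypothetical registry entries; names and limits are illustrative only.
REGISTRY = [
    ApprovedAITool(
        name="internal-copilot",
        approved_use_cases=["code review", "documentation drafts"],
        data_classification_limit="confidential",
        contact="ai-governance@example.com",
    ),
    ApprovedAITool(
        name="public-chatbot",
        approved_use_cases=["general research"],
        data_classification_limit="public",  # no internal data allowed
        contact="ai-governance@example.com",
    ),
]

def is_permitted(tool_name: str, use_case: str, data_class: str) -> bool:
    """Check a request against the registry before granting access."""
    order = ["public", "internal", "confidential"]  # assumed tiers, low to high
    for tool in REGISTRY:
        if tool.name == tool_name and use_case in tool.approved_use_cases:
            return order.index(data_class) <= order.index(tool.data_classification_limit)
    return False  # anything not explicitly approved is denied
```

Encoding the policy this way defaults to denial for unlisted tools and use cases, and gives employees a clear point of contact when they need access.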

Advanced persistent threat groups from North Korea, China and Russia are using LLMs to augment cyber operations, but Microsoft and OpenAI have not yet observed any significant attacks (see: OpenAI and Microsoft Terminate State-Backed Hacker Accounts).


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



