Securing AI in App Development: Balancing Speed With Safety
Snyk's Liqian Lim Discusses AI Risks and Best Practices in Software Development

AI has revolutionized app development, but it has also introduced new security challenges. Liqian Lim, senior product marketing manager for AI/ML at Snyk, said that while AI accelerates innovation, it also amplifies the speed and scale at which insecure code is introduced, warning that these vulnerabilities are a "ticking time bomb" that could lead to a major breach.
A recent study showed that 36% of code generated by AI tools such as GitHub Copilot contains vulnerabilities, including critical weaknesses that appear on industry rankings such as the CWE Top 25. The key to managing these risks, Lim said, lies in adopting a proactive approach in which security is integrated into the earliest stages of the development process.
"The face of the problem hasn't changed. It's still all about insecure code and how to protect against this. Because of this, modern security has to be as far left as possible," she said. "It has to be as fast as AI to catch unsafe code created by AI early and quickly. We advocate for security guardrails in the form of SAST, self-scanning and auto-fixing for all use of AI coding tools."
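The shift-left guardrail Lim describes is commonly wired into a pull request pipeline, so AI-generated code is scanned before it merges. The following is a minimal illustrative sketch, assuming a GitHub Actions setup and the Snyk CLI's `snyk code test` static-analysis command; the workflow name, job layout and secret name are hypothetical choices, not details from the interview.

```yaml
# Hypothetical CI guardrail: run static analysis on every pull request
# so insecure code (human- or AI-authored) is flagged before merge.
name: security-guardrail

on: [pull_request]

jobs:
  static-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Snyk CLI
        run: npm install -g snyk

      # `snyk code test` performs SAST on the checked-out source;
      # a nonzero exit code fails the pipeline and blocks the merge.
      - name: Scan source for vulnerabilities
        run: snyk code test
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}  # assumed repo secret
```

Failing the build on findings, rather than merely reporting them, is what makes this a guardrail in the sense Lim describes: the check runs at the same speed as the AI-assisted workflow it protects.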
In this video interview with Information Security Media Group at the AI Virtual Summit, Lim also discussed:
- The importance of AI fitness and wellness;
- Findings from Snyk's Organizational AI Readiness Report;
- Foundational steps for building an AI protection strategy for the software development life cycle.
Lim has 18 years of experience across business functions, including AI/ML, blockchain and end-to-end go-to-market campaigns. Prior to Snyk, she worked at ConsenSys and ThoughtRiver, and she completed her master of business administration degree at INSEAD.