The adoption of GPT-4 and other generative AI (GenAI) models in the software development community has been rapid. They offer impressive benefits, but their appeal can distract developers from the reality that this technology is not foolproof. Without due diligence, AI-generated code produced from innocent developer prompts can inadvertently introduce security vulnerabilities into an application. For that reason, it is critical to highlight the limitations of GenAI models as coding tools, why they create a false sense of trust, and the dangers of skipping due diligence on AI-generated code.
The double-edged sword of coding with generative AI
Generative AI can dramatically speed up code development, offering developers unprecedented efficiency and capability, but it also carries significant security risks.
To understand how unintended security vulnerabilities can find their way into a developer’s code, we need to cover typical GenAI use cases in software development. For everyday tasks, developers query GenAI models to identify code libraries and receive recommendations for open-source software (OSS) packages to solve coding challenges.
For such queries, whether in Java, Python, or JavaScript/TypeScript, a common thread emerges: the results of GenAI queries are inconsistent. That inconsistency breeds a false sense of security, because sooner or later one of those varied results chosen by a developer will contain unsafe code.
This risk is further magnified by recently published research from Stanford University suggesting that long-term use of GenAI can gradually erode a developer's habit of thoroughly validating code, without the developer realizing how often recommendations contain embedded risks. This misplaced trust can lead to the integration of unsafe code snippets, ultimately compromising the overall security of the application.
How generative AI can introduce code vulnerabilities
Warning signs of potentially unsafe code in AI-generated recommendations come in several forms. The most common symptoms include:
● Legacy OSS packages: Due diligence on suspect OSS packages recommended by GPT-4 often reveals that they are out of date, meaning the recommended versions carry known vulnerabilities. The static datasets used to train LLMs are often the culprit: the model suggests whatever version was current when its training data was collected (see the version-check sketch after this list).
● Unclear guidelines for package validation: This can manifest in several ways, such as a lack of instructions to check outdated packages for updates, or guidelines that treat using the current version of a package as a nice-to-have rather than a necessity. Without explicit instructions to verify the latest version of each package, developers may give in over time and use the recommended packages without question.
● Risks of phantom packages: Guidelines produced by GPT-4 can lead developers to use indirect (transitive) OSS packages directly without declaring them in the manifest. These “phantom” scenarios occur when GPT-4 lacks the full context of the codebase: the vulnerable package was pulled in by a transitive dependency rather than declared by a developer, so it never appears in the manifest. As a result, developers may be unaware of these hidden dependencies, which can introduce security vulnerabilities that evade conventional manifest-based dependency checks and significantly complicate vulnerability management and remediation (see the phantom-dependency sketch after this list).
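As a minimal illustration of the due diligence the first symptom calls for, the sketch below compares an AI-recommended package pin against the latest release on PyPI. The package name and pinned version here are hypothetical placeholders, not recommendations from this article:

```python
# Sketch: sanity-check an AI-recommended package pin against PyPI.
# "requests==2.19.1" is a hypothetical recommendation used for illustration.
import json
import urllib.request

def latest_pypi_version(package: str) -> str:
    """Return the latest released version of a package via the PyPI JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["info"]["version"]

recommended = {"requests": "2.19.1"}  # pins suggested by the model
for name, pinned in recommended.items():
    latest = latest_pypi_version(name)
    if pinned != latest:
        print(f"{name}: model pinned {pinned}, latest on PyPI is {latest}; "
              "review the changelog and security advisories before accepting")
```

A version mismatch is not proof of a vulnerability, but it is exactly the prompt to check advisories that a static training dataset will never give you.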
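For the phantom-package symptom, one rough way to surface undeclared direct dependencies is to diff the modules a codebase imports against its manifest. The sketch below does this for Python against requirements.txt; the src/ layout and the simplistic name matching are assumptions for illustration:

```python
# Sketch: flag "phantom" dependencies -- modules imported directly in source
# files but never declared in the manifest (requirements.txt here).
# Caveat: import names do not always match distribution names (e.g. "yaml"
# is distributed as "PyYAML"), so treat hits as leads, not verdicts.
import ast
import pathlib
import re
import sys

def direct_imports(py_file: pathlib.Path) -> set[str]:
    """Collect top-level module names imported by a single Python file."""
    tree = ast.parse(py_file.read_text())
    names: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return names

# Parse declared dependency names, stripping version specifiers.
declared = set()
for line in pathlib.Path("requirements.txt").read_text().splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        declared.add(re.split(r"[=<>!~\[; ]", line, maxsplit=1)[0].lower())

for source in pathlib.Path("src").rglob("*.py"):  # "src" is an assumed layout
    phantoms = {m for m in direct_imports(source)
                if m.lower() not in declared
                and m not in sys.stdlib_module_names}  # skip the standard library
    if phantoms:
        print(f"{source}: imported but not in manifest: {sorted(phantoms)}")
```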
As new programming tools reach developers, the number of ways to inadvertently write unsafe code grows. History has also shown that secure coding practices evolve to address new shortcomings, and the same will eventually happen with GenAI coding tools. In the meantime, here are some basic technical and managerial secure coding practices we can adopt now:
Build a mature DevSecOps program
A well-developed DevSecOps program creates a secure foundation on which developers can build their AI-assisted coding practice. The hallmark of a mature program is security controls embedded throughout the software development life cycle (SDLC), including threat modeling, static code scanning, and test automation. Combining these controls with the fast feedback loops that define DevSecOps lets you safely manage the increased risk that AI-generated code brings while your organization becomes familiar with the new development tools; a small CI gate like the sketch below is one way to anchor them.
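As one concrete, minimal example of embedding such controls, the sketch below wires two open-source scanners into a CI gate. pip-audit and bandit are real OSS tools, but the repository layout (requirements.txt, src/) and the idea of gating on this exact script are assumptions for illustration:

```python
# Sketch: a minimal CI gate that fails the build when a dependency audit or a
# static-analysis scan reports findings.
import subprocess
import sys

checks = [
    ["pip-audit", "-r", "requirements.txt"],  # known-CVE scan of pinned dependencies
    ["bandit", "-r", "src"],                  # static analysis of first-party code
]

failed = False
for cmd in checks:
    if subprocess.run(cmd).returncode != 0:   # non-zero exit means findings or errors
        print(f"security gate failed: {' '.join(cmd)}", file=sys.stderr)
        failed = True

sys.exit(1 if failed else 0)
```

Running a gate like this on every pull request means AI-generated code gets the same automated scrutiny as human-written code before it merges.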
Awareness and training
Before GenAI is widely adopted by the development team, both developers and security teams must be trained to recognize potentially unsafe code recommendations and the common code-writing pitfalls GenAI can introduce. This training helps both groups understand how GenAI results are produced and where the technology's limitations lie.
Establishing secure coding practices aligned with AI-assisted programming should be a given, but establishing a corporate-approved GenAI toolset is discussed less often. These measures avoid the risks of unvetted tools and make it easier to investigate and diagnose security vulnerabilities before code reaches production. Likewise, identifying the right use cases for particular GenAI tools helps developers work within their limitations. For example, GenAI is well suited to automating repetitive, manual tasks such as auto-populating code functions; when more complex code and code dependencies come into play, developers should rely less on GenAI tools.
Future outlook: navigating the AI-driven development landscape
Integrating generative AI into software development is inevitable and brings both opportunities and challenges. As GenAI continues to reshape coding practices, developers will grow increasingly dependent on these tools. A practice shift of this magnitude requires a parallel evolution in security practices tailored to the new coding challenges it introduces, and third-party research already highlights the critical role of vigilance and proactive security measures against the unintended risks of AI-generated code. Yet we should not be afraid to use the best tools and strategies to unleash GenAI's potential. We have lived through technological revolutions before, such as cloud computing, where application security had to catch up. This time we have the opportunity to prepare and stay ahead of the expected security challenges while reaping the enormous benefits AI can bring to the world of coding.