Anyone can code with AI. But it might come with a hidden cost.
In recent years, advancements in artificial intelligence (AI) have transformed the coding landscape, enabling individuals with little to no programming experience to create websites and applications. By simply providing instructions to a chatbot, users can generate complex code. However, this newfound accessibility comes with significant risks and challenges that warrant careful consideration.
The Rise of AI-Assisted Coding
AI systems have evolved to a point where they can produce code at an unprecedented rate. This phenomenon, often referred to as “vibe coding,” allows both novices and experienced developers to enhance their productivity. David Loker, head of AI for CodeRabbit, explains that while AI can streamline the coding process, it also introduces potential pitfalls. “AI systems don’t make typos in the way we make typos,” he notes, “but they make a lot of mistakes across the board, with readability and maintainability of the code chief among them.”
Quality vs. Quantity
The primary motivation behind AI-assisted coding is to increase developer productivity. However, experts are concerned that the emphasis on quantity may compromise quality. AI systems often struggle to grasp the entirety of existing codebases, leading to redundancy and inefficiency. For instance, Loker points out that AI coding systems might duplicate functionality across different locations, resulting in a sprawling codebase that is difficult to manage.
“If you update a function in one spot and you don’t update it in the other, you have different business logic in different areas that don’t line up,” Loker explains. This can create confusion and potential errors, making the codebase more complex and harder to maintain.
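The duplication problem Loker describes can be sketched in a few lines. In this hypothetical illustration (the module names, function names, and discount rule are all invented), an AI assistant regenerates a pricing rule in a second file instead of reusing the original, and a later change to the rule reaches only one copy:

```python
# Hypothetical illustration: the same business rule, generated twice.

# checkout.py -- original rule, later updated to a 15% member discount
def checkout_price(price: float, is_member: bool) -> float:
    """Apply the current member discount at checkout."""
    return price * 0.85 if is_member else price

# invoices.py -- AI-generated duplicate still using the old 10% rule
def invoice_price(price: float, is_member: bool) -> float:
    """Stale copy of the same rule, never updated."""
    return price * 0.90 if is_member else price

# The two code paths now disagree for the same customer:
print(checkout_price(100.0, True))  # 85.0
print(invoice_price(100.0, True))   # 90.0
```

Neither function is wrong in isolation, which is exactly why this kind of drift survives review: each copy looks plausible on its own.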
The Concept of AI Slop
The term “AI slop” was coined in 2024 to describe the influx of low-quality outputs generated by AI systems. In the context of coding, this refers to the vast amounts of serviceable but imperfect code that AI systems produce. While these systems are improving their ability to review code and identify security vulnerabilities, the sheer volume of code generated can overwhelm developers.
Daniel Stenberg, founder and lead developer of the open-source curl project, recently expressed frustration over the challenges posed by AI-generated submissions. He noted that triaging the influx of low-quality reports can take a serious mental toll. He also observed a shift in the nature of submissions, stating, “The flood has transitioned from an AI slop tsunami into more of a plain security report tsunami.”
Security Risks in AI-Coded Software
As AI coding systems become more prevalent, the potential for security vulnerabilities increases. Jack Cable, CEO of cybersecurity consulting firm Corridor, emphasizes that while AI may excel at writing code, the volume of code produced can lead to significant security challenges. “Even if a large language model is better at writing code line by line, if it’s writing 20 times as much code as a human would be, there is significantly more code to be reviewed,” Cable warns.
This explosion in code complexity creates a larger attack surface for potential vulnerabilities. The more complex the code, the more difficult it becomes to secure it effectively. Professor Daniel Kang from the University of Illinois Urbana-Champaign echoes these concerns, stating that the rise of AI coding agents could give inexperienced users a false sense of security. “Even if you assume that the rate of security vulnerabilities in any given chunk of code is constant, the number of vulnerabilities will go up dramatically,” he explains.
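Kang's point is back-of-the-envelope arithmetic. Assuming, purely for illustration, a fixed defect rate per thousand lines of code (the rate below is an invented placeholder, not a measured figure), multiplying the volume of code multiplies the expected number of vulnerabilities by the same factor:

```python
# Back-of-the-envelope sketch; the defect rate is an assumed placeholder.
VULNS_PER_KLOC = 0.5          # assumed constant rate per 1,000 lines
human_loc = 2_000             # lines a developer might write for a feature
ai_loc = human_loc * 20       # "20 times as much code", per Cable

expected_human = VULNS_PER_KLOC * human_loc / 1_000
expected_ai = VULNS_PER_KLOC * ai_loc / 1_000

print(expected_human)  # 1.0 expected vulnerability
print(expected_ai)     # 20.0 -- same rate, twenty times the exposure
```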
Case Studies: Real-World Implications
One notable example of these risks is Moltbook, a social network for AI systems that was itself built with AI coding tools. Security researchers soon identified critical vulnerabilities in its software, flaws attributed to the AI-generated code. Ethical hacker Jamieson O’Reilly highlighted the dangers of inexperienced developers relying on AI coding agents, stating, “People often believe that AI coding agents will build things per the best security standards. That’s just not the case.”
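The kind of flaw O’Reilly warns about frequently takes the form of a missing authorization check, sometimes called an insecure direct object reference. The sketch below is hypothetical (all names and data are invented, not drawn from Moltbook’s code): one accessor returns any record by ID and trusts the caller, while a safer version verifies ownership first.

```python
# Hypothetical sketch of a common AI-generated flaw: an insecure
# direct object reference (IDOR). All names are invented for illustration.
MESSAGES = {
    1: {"owner": "alice", "text": "alice's private note"},
    2: {"owner": "bob", "text": "bob's private note"},
}

def get_message_insecure(message_id: int) -> str:
    # Trusts the caller: any user can read any message by guessing IDs.
    return MESSAGES[message_id]["text"]

def get_message_checked(message_id: int, requester: str) -> str:
    # Verifies ownership before returning the record.
    record = MESSAGES[message_id]
    if record["owner"] != requester:
        raise PermissionError("not your message")
    return record["text"]

print(get_message_insecure(2))          # leaks bob's note to any caller
print(get_message_checked(1, "alice"))  # ok: requester owns the record
```

Generated code often resembles the first function because it is the shortest path to something that visibly works; the missing check only shows up when someone goes looking for it.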
As AI systems continue to evolve, the balance between convenience and security becomes increasingly precarious. The traditional security measures that have been established over decades are being challenged by the rapid pace of AI development.
The Future of AI Coding
As we move forward, it is essential for developers and organizations to recognize the potential drawbacks of AI-assisted coding. While these systems can enhance productivity and creativity, they also introduce significant risks that must be managed. Companies need to prioritize code reviews from a functionality, quality, and security perspective to mitigate potential vulnerabilities.
Ultimately, the integration of AI into the coding process represents a double-edged sword. It has the potential to democratize coding and empower individuals to bring their ideas to life, but it also requires a heightened awareness of the associated risks and challenges.
Conclusion
AI-assisted coding is revolutionizing the way we approach software development, making it accessible to a broader audience. However, as the coding landscape evolves, it is crucial to remain vigilant about the quality and security of the code being produced. The balance between leveraging AI for efficiency and ensuring robust security measures will be key to navigating the future of coding in an AI-driven world.

