
    Common Code Security Challenges with Vibe Coding

Creating a site or an app with AI tools, without having the slightest clue of what you are doing, is popularly known as “vibe coding”. This way of developing is also a great way to create critical security vulnerabilities and bugs.

The root cause of most of these issues is that the AI is not trained to reason about code security and is not aware of the context, so it fails to implement permission checks and often sends far too much data.

    Here is a list of some of the common code security issues with AI that we encountered.

    Lack of input validation

    One of the most common issues in vibe coding is the lack of input validation.

AI models cannot plan ahead, and they are not trained to reason about their code’s security or its quality, so they routinely accept user input as-is without checking its type, length, or format.
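As a minimal sketch of what such validation looks like, the hypothetical helper below parses an incoming “age” field instead of trusting it blindly, rejecting anything that is not a plausible integer (the function name and bounds are illustrative assumptions, not from the original article):

```typescript
// Hypothetical example: validate an untrusted "age" value from a request
// body instead of passing it straight through, as AI-generated handlers
// often do.
function parseAge(raw: unknown): number {
  if (typeof raw !== "string" && typeof raw !== "number") {
    throw new Error("age must be a string or number");
  }
  const age = Number(raw);
  // Reject NaN, fractions, negatives, and absurd values in one check.
  if (!Number.isInteger(age) || age < 0 || age > 150) {
    throw new Error("age must be an integer between 0 and 150");
  }
  return age;
}
```

The same pattern — narrow an `unknown` value down to exactly what the handler needs, throwing on anything else — applies to every field that crosses a trust boundary.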

Error messages that leak sensitive information

AI-generated API routes often return raw error messages to the attacker, exposing sensitive information such as, but not limited to, file paths, filenames, and sometimes values from the database or details of the stack used in the application.

    Remote code execution vulnerability

AI tools often produce remote code execution vulnerabilities: backend code that runs an external binary without proper input validation.

This can be prevented with proper input validation, or better yet, by not running external binaries in the first place.
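If an external binary genuinely must be run, a defensible pattern is to validate the user-supplied pieces against a strict allowlist and pass them as discrete arguments (e.g. via `execFile`, which uses no shell) rather than interpolating them into a command string. The following sketch assumes a hypothetical image-conversion endpoint:

```typescript
// Hypothetical sketch: build arguments for an external converter safely.
// The allowlist and filename rule are assumptions for illustration.
const ALLOWED_FORMATS = new Set(["png", "jpeg", "webp"]);

function buildConvertArgs(input: string, format: string): string[] {
  if (!ALLOWED_FORMATS.has(format)) {
    throw new Error(`unsupported format: ${format}`);
  }
  // Reject path traversal and shell metacharacters outright.
  if (!/^[\w.-]+$/.test(input)) {
    throw new Error("invalid input filename");
  }
  // Safe to hand to execFile("convert", args) — no shell is involved.
  return [input, `out.${format}`];
}
```

Because the arguments never pass through a shell and the values are allowlisted, classic injections like `photo.jpg; rm -rf /` are rejected before anything executes.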

Enumeration vulnerabilities – lots of them

AI tools also produce lots of enumeration vulnerabilities that allow anyone to request data about a user, including personal information such as email addresses, phone numbers, GitHub and Google access tokens, full names, and more.
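A common defense is to check authorization before returning profile data and to make “unauthorized” indistinguishable from “not found”, so probing ids reveals nothing. The sketch below assumes a simple in-memory user store and role model for illustration:

```typescript
// Hypothetical sketch: only the user themself (or an admin) may read a
// profile, and missing vs. forbidden ids produce the same null result so
// valid account ids cannot be enumerated.
type User = { id: string; email: string; role: "user" | "admin" };

function getProfile(
  db: Map<string, User>,
  requester: User,
  targetId: string
): User | null {
  const target = db.get(targetId);
  if (!target) return null;
  if (requester.id !== target.id && requester.role !== "admin") {
    return null; // indistinguishable from "no such user"
  }
  return target;
}
```

Returning the identical response for both failure modes costs nothing and removes the oracle an attacker would otherwise use to map out valid accounts.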

    AI responses that send everything

The vibe-coded backend code had no access controls at all and sent entire user records to the client.

This likely happened because the AI tool generated code that selected the whole user record and did not strip the password and other sensitive fields from the response.

    AI fails to implement permission checks and often selects too much data, simply because it is not aware of the context.
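The usual remedy is to serialize an explicit allowlist of fields rather than spreading the whole database record into the response. The sketch below assumes a hypothetical user schema to show the pattern:

```typescript
// Hypothetical sketch: expose only an explicit allowlist of fields.
// The DbUser shape is an assumption for illustration.
type DbUser = {
  id: string;
  email: string;
  name: string;
  passwordHash: string;
  githubToken: string;
};

function toPublicUser(u: DbUser): { id: string; name: string } {
  // Pick fields explicitly; never spread the full record into the response.
  return { id: u.id, name: u.name };
}
```

With an allowlist, newly added sensitive columns stay private by default; with a denylist (the “select everything, then delete password” style AI tends to generate), every new column leaks until someone remembers to exclude it.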

    To wrap things up

It is critical that you understand your own code, including the code you get from your AI assistants.

    Always have a second look at your code and make sure that you understand what it is doing.