Industry Analysis

Making Vibe Coded Apps Production Ready: A Reality Check

Alexandre De Groodt
2026-02-11

AI writes code fast. Too fast. You barely have time to think before it's done. The issue is that it will do exactly what you ask: if you don't consistently think about security vulnerabilities, neither will the AI.

Yes, you can tell AI to "write secure code," and it does help. Some tools like Claude Code even have AI security reviews now. Maybe you even have vulnerability scanning flagging loads of potential vulnerabilities. But it's not enough. AI review is only as good as what you ask it to look for, and vulnerability scanners produce a lot of noise.

The Reality

AI-generated code contains 1.7x more issues than human-written code. businesswire

When building with AI ourselves, we noticed consistent patterns. AI is excellent for prototypes - it helps you move fast and validate ideas. But moving from prototype to production requires catching what AI doesn't think about by default.

We found concerning patterns, both in our own applications and in small client projects we reviewed. We had to direct the AI with instruction files, and even then we needed to keep a close watch. Configuration and logic issues, regressions in features that used to work, input validation gaps, dependency vulnerabilities, leaked secrets, to name a few.
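To make "input validation gaps" concrete, here is a minimal sketch of the kind of check AI-generated handlers often skip. `parseSignup` and its fields are hypothetical, not taken from any project mentioned here:

```typescript
// Hypothetical request-body validator: the kind of explicit checking
// that AI-generated handlers often omit unless you ask for it.
type Signup = { email: string; age: number };

function parseSignup(body: unknown): Signup {
  if (typeof body !== "object" || body === null) {
    throw new Error("invalid body");
  }
  const { email, age } = body as Record<string, unknown>;
  // Reject anything that isn't a plausible email string.
  if (typeof email !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Error("invalid email");
  }
  // Reject non-integer or out-of-range ages instead of trusting the client.
  if (typeof age !== "number" || !Number.isInteger(age) || age < 0 || age > 150) {
    throw new Error("invalid age");
  }
  return { email, age };
}
```

In practice a schema library does this more robustly, but the point stands: every field crossing a trust boundary needs an explicit rule, and AI won't add those rules unless instructed.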

A good example is CVE-2025-66478 in Next.js React Server Components, a CVSS 10.0 remote code execution vulnerability. nextjs

This all means that if you're a small team building with AI, you're in a particularly vulnerable position. Ironically, the worst that can happen is that your product works too well: success brings scaling issues and the increased attacker attention that comes with them.

A client's small app was defaced the day before release for lack of security headers and a WAF. AI will not update packages by itself, refactor your code, or check for security issues. You need to instruct it to do so.
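As a sketch of the missing piece in that story, a baseline set of security headers might look like the following. The values are illustrative defaults, not a complete policy; a real CSP in particular has to be tuned to your app's actual script and asset sources:

```typescript
// Illustrative baseline of response security headers (assumed defaults).
// Wire these into your framework's middleware or reverse proxy.
function securityHeaders(): Record<string, string> {
  return {
    "Content-Security-Policy": "default-src 'self'",          // restrict sources
    "X-Frame-Options": "DENY",                                 // block clickjacking
    "X-Content-Type-Options": "nosniff",                       // no MIME sniffing
    "Referrer-Policy": "strict-origin-when-cross-origin",      // limit referrer leaks
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains", // force HTTPS
  };
}
```

None of this replaces a WAF, but it is a cheap layer that vibe-coded apps routinely ship without.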

Real Stories

Tea, a women's dating safety app, left its Firebase backend effectively open, exposing verification selfies, ID photos, and over a million highly sensitive private messages because default access rules were never secured. bbc

The Vibe‑coding platform Base44 had an auth flow where a public app_id was enough to register and verify accounts, letting attackers bypass SSO and logins to access "private" internal apps built by small and mid‑sized companies. wiz

Replit + SaaStr. A Replit AI coding agent, used by the SaaStr team, connected to production during a code freeze and deleted the live database for a mature SaaS business, briefly putting critical customer records at risk until backups were restored. fortune

Moltbook. The new reddit for AI agents. Its database was wide open, leaking 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents, alongside other issues like a prompt injection vulnerability. techzine

Openclaw. A popular personal AI assistant with a critical vulnerability discovered just a few days ago. ccb.belgium

Lovable apps. escape.tech found "over 2k high-impact vulnerabilities in apps built with vibe coding platforms".

The Theory Problem: Beyond Just Security

Peter Naur said coding isn't just about the code; it's about building the theory behind it. This is the hidden cost of moving too fast: you can end up with AI code nobody on the team truly understands. When something breaks in production, you end up relying on AI to fix AI issues, until the codebase grows too large and it can't.

It's not just security that suffers:

Quality Assurance: AI writes tests that pass but mostly confirm the code behaves the way the AI expects. Write the critical tests yourself, or at least do some manual QA.

Onboarding: how do you explain a codebase where 60% was generated by prompts nobody documented? When a new hire runs into an issue, you can't really help them.

The knowledge debt compounds faster than the technical debt.

The Real Problem: Unknown AI Risks

Think about it: even assuming AI can find and fix issues, who's asking it to look? Someone needs to evaluate whether the app is safe. Never let an app with a payment system run without a security test first. Don't forget the AI was trained on faulty human code.

False positives are a real problem. Tools can use AI to estimate if a vulnerability is actually exploitable. AI pentesting can help determine if that SQL injection is real, but you still need someone who knows what they're looking at.
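To illustrate what separates a real SQL injection from scanner noise, here is a hedged sketch. `unsafeQuery` and `safeQuery` are hypothetical names, not from any library; the difference that matters is string interpolation versus bound parameters:

```typescript
// Exploitable: user input is interpolated directly into the SQL text,
// so a crafted value rewrites the query itself.
function unsafeQuery(userId: string): string {
  return `SELECT * FROM users WHERE id = '${userId}'`;
}

// Not exploitable the same way: the SQL text is fixed and the input
// travels separately as a bound parameter (placeholder syntax varies by driver).
function safeQuery(userId: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE id = $1", values: [userId] };
}

// A classic payload: in the unsafe version it becomes part of the WHERE
// clause and matches every row; in the safe version it stays plain data.
const payload = "' OR '1'='1";
```

A scanner may flag both functions as "SQL usage with user input"; a reviewer who understands the difference clears one finding in seconds and escalates the other.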

Ever heard of "never build your own crypto"? The same applies to authentication, payment processing, and databases, unless you have engineers who know what they're doing. If you use an existing solution, make sure you configure it properly.

We don't know what we don't know. We need to find someone who does.

And be careful what AI writes for you:

Legal documents, terms of service, privacy policies: AI will generate convincing-sounding legal text that might not actually protect you or comply with regulations. It will claim you're compliant with everything on the main page when you're not. This can cause real issues. reddit

Security Tips

AI lowers the entry barrier for hacking, automating the finding and even the exploitation of vulnerabilities. We cannot rely on developers "thinking about it." We need to automate AI properly on the defensive side as well.

The cybersecurity team - if you have one - needs to be briefed on AI issues, and they need to set up the right methods to deal with the sheer amount of code to verify. The checklist from Aikido for securing vibe-coded apps is a good start.

We also need to set up a safe pipeline when using AI, at minimum:

  • A tool that forces thinking more before AI starts coding (Openspec/Claude Superpowers)
  • Static analysis that checks code for vulnerabilities (SAST) like Snyk
  • Dependency scanning for known CVEs with Dependabot
  • Directed AI security reviews (but don't trust them blindly)

This would significantly improve code quality, but it's still not enough.

Self-Assessment Questions

Before launching or raising money, ask yourself:

  • Have you scanned your dependencies for known vulnerabilities this month?
  • Do you or your team know how authentication and the database are secured in your app?
  • Can your team explain key logic flows?
  • Do you have AI usage policies for handling sensitive data?
  • Are you safe against prompt injection if your AI is connected?
  • Do you know how your team is using vibe coding, and for which products?
  • Do you have guidelines for what can be used?
  • Beyond application security, do you know how marketing, support, and sales teams use AI?

If you answered "no" or "I'm not sure" to any of these, you need help.

What Ingram Does

We're engineers who understand AI, not salespeople. We found critical security issues in our own applications and helped other developers identify major vulnerabilities in theirs. Most small teams don't have a permanent cybersecurity specialist. But you don't have to hire one to know if your app is safe. The risk is real; to give you an idea of the numbers: 43% of cyber attacks target small businesses, and 60% of those businesses close within six months of a successful attack. The average cost of a data breach at an SMB is $254,445. coladv ridgeit

Our approach:

We run a security audit combining automated scanning tools (SAST, SCA, secrets detection) with black-box and white-box AI pentesting to confirm vulnerabilities, plus an AI-assisted review of the core aspects (database, authentication, payments, key logic, AI usage) by engineers who understand AI-generated code patterns. You get a clear report of what actually matters and the next steps, and you choose which fixes you want us to implement. We can also provide guidance and suggest AI usage policies, and recommend pipelines to set up so the issues don't come back.

We'll start with a quick audit and remediation plan in one week.

Ready to know where you stand?

Get in Touch →


Reality check: AI can be pointed at the right things to check, but you still need to think. Your AI vibe coding tool is not specialized for safe code. Are you prompting it right? It's not good at looking at the security of the whole app at once.

If it's worth building, it's worth securing.

Sources