sendhelp.dev
  • Jan 29, 2025
  • 8 min read

I Fixed 50 Broken Vibe-Coded Projects. Here's What I Learned.

Somewhere around audit number twelve, I stopped being surprised. The symptoms were different — sometimes a crashed deployment, sometimes a security hole, sometimes a database that had quietly become inconsistent — but the underlying cause was almost always the same. The AI had built something that worked in the context it was tested in and failed in the context it was actually used in.

The projects ranged from simple landing pages with broken contact forms to full SaaS applications with payment processing, user management, and API integrations. The tools varied: Cursor, Bolt.new, Lovable, ChatGPT, Claude, Gemini. The common thread wasn't the tool. It was the workflow. Someone described what they wanted. The AI built it. Nobody audited it before shipping it to users.


Here's what I found in the first fifty projects, with rough frequencies:

  • 43 of 50 (86%) had security vulnerabilities. Of those, 31 had broken or incomplete authentication, 28 had exposed environment variables or hardcoded secrets, and 19 had SQL injection or unsafe query construction.
  • 38 had performance issues that were invisible in development but showed up immediately under real load.
  • 44 had deployment configuration problems: the app worked locally but needed significant changes to run in production.

The most expensive fix was for an e-commerce site that had been processing payments through a misconfigured Stripe integration for six weeks. The AI had copied a Stripe example from an old tutorial, and the webhook handling was wrong: every webhook event was accepted without signature verification. The attacker didn't steal money directly. They triggered fake fulfillment events. The cost was about $8,000 in undelivered goods before anyone noticed the pattern.
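The fix was to verify signatures before trusting anything in the payload. In production you would normally just call `stripe.Webhook.construct_event()` from the official SDK, but the scheme itself is simple enough to sketch with the standard library: Stripe sends a `Stripe-Signature: t=...,v1=...` header, where `v1` is an HMAC-SHA256 of `"{timestamp}.{raw_body}"` keyed with your webhook signing secret. A minimal sketch (function name and tolerance value are my own, not Stripe's):

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str,
                            secret: str, tolerance: int = 300) -> bool:
    """Check a Stripe-Signature header (t=...,v1=...) against the raw body.

    Sketch of the documented scheme; real handlers should use
    stripe.Webhook.construct_event() from the official SDK instead.
    """
    # Parse "t=1492774577,v1=5257a869..." into a dict. (Real headers can
    # carry multiple v1 entries during secret rotation; this sketch keeps
    # only the last one.)
    parts = dict(item.split("=", 1) for item in sig_header.split(","))
    timestamp = int(parts["t"])
    if abs(time.time() - timestamp) > tolerance:
        return False  # reject stale events: basic replay protection
    signed_payload = f"{timestamp}.".encode() + payload
    expected = hmac.new(secret.encode(), signed_payload,
                        hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, parts["v1"])
```

The broken site skipped this entirely, so anyone who found the webhook URL could POST a convincing-looking fulfillment event and have it honored.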


The cheapest and most common fix is adding security headers. It takes fifteen minutes and dramatically reduces attack surface, yet the headers are almost never present in an AI-generated app. I added them in a single commit to 41 of the 50 projects. It's now the first thing I check.
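The exact header set varies by app (the Content-Security-Policy in particular usually needs per-site tuning), but a reasonable baseline looks like this. Here's a sketch as a tiny WSGI wrapper; the header values are typical defaults, not a universal prescription:

```python
# Baseline security headers; tune per app, especially the CSP.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Content-Security-Policy": "default-src 'self'",
    "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
}

def apply_security_headers(app):
    """Wrap a WSGI app so every response carries the baseline headers."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            # Don't clobber headers the app already set.
            existing = {name.lower() for name, _ in headers}
            extra = [(k, v) for k, v in SECURITY_HEADERS.items()
                     if k.lower() not in existing]
            return start_response(status, headers + extra, exc_info)
        return app(environ, sr)
    return wrapped
```

Most frameworks have a one-liner equivalent (middleware in Express via helmet, `after_request` in Flask, headers blocks in nginx or Vercel config); the point is that none of the 50 projects had any of it.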

The most interesting pattern: AI-generated code is often architecturally correct at the macro level and wrong at the implementation level. The separation of concerns is reasonable. The component structure makes sense. The database relationships are logical. But the specific implementation of auth, the specific query construction, the specific handling of environment variables — these are where the hallucinations live.
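Query construction is the easiest of those to illustrate. The structural decision (a users table, a lookup by email) is fine; the hallucination is at the implementation level, where string interpolation sneaks into the SQL. A minimal sketch using sqlite3 (the table and helper are hypothetical, for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

def find_user(conn, email: str):
    # The pattern AI tools often emit, vulnerable to injection:
    #   conn.execute(f"SELECT * FROM users WHERE email = '{email}'")
    # Parameterized version: the driver treats the value as data, not SQL,
    # so input like "' OR '1'='1" matches nothing instead of everything.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```

Same schema, same query shape, one character of difference in risk.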

What this means practically: AI coding tools are good at structure and bad at security-critical details. If you're vibe-coding something that will handle real user data, real money, or real authentication, those specific areas need human review. Everything else? The AI is probably fine. But those three areas will burn you every time if you don't look.

The recommendation I give to every founder who comes to me after a vibe-coded disaster: don't rebuild. Audit. The structure is usually salvageable. The code is often good. It just needs the security-critical paths reviewed by someone who knows what the failure modes look like — because they've seen all of them before.

Tags: Security · Horror Stories · Industry · Audit


Does your app have these problems?

Scan it for free — or send us the details and we'll dig in.
