7 Hidden Security Risks in AI-Generated Code (And How to Catch Them)
Every AI-generated codebase we analyze shares the same DNA: it works beautifully on the surface but hides security patterns that would make any experienced developer wince.
This isn’t a knock on AI tools. Cursor, Lovable, Bolt, Claude Code, and Replit are extraordinary. But they optimize for functional correctness, not security posture. That gap is where the risk lives.
Here are the 7 patterns we see most often — and why they matter.
1. The “looks-like-auth” problem
AI generates auth flows that appear complete. There’s a login page, a signup form, session tokens, maybe even a password reset flow. But look closer:
- No rate limiting on the login endpoint — unlimited brute force attempts allowed
- Sessions that never expire or rotate
- Password reset tokens that work forever
- isAdmin checked in the React component but not on the API route
This is the most dangerous pattern because it creates a false sense of security. The team believes auth is handled. It’s not.
Why it matters: Auth is the front door to your entire application. A broken auth flow means every user’s data is one exploit away from exposure.
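The rate-limiting gap in particular is cheap to close. Here is a minimal in-memory sketch of the idea, assuming nothing beyond plain Node; the names (MAX_ATTEMPTS, WINDOW_MS, isRateLimited) are illustrative, and a real deployment should use a store-backed library such as express-rate-limit with Redis so limits survive restarts and scale across instances:

```javascript
// Track recent login attempts per IP and reject once a threshold is crossed.
const attempts = new Map();
const MAX_ATTEMPTS = 5;
const WINDOW_MS = 15 * 60 * 1000; // 15-minute sliding window

function isRateLimited(ip, now = Date.now()) {
  // Keep only attempts inside the window, then record this one.
  const recent = (attempts.get(ip) || []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  attempts.set(ip, recent);
  return recent.length > MAX_ATTEMPTS;
}

// Express-style usage: reject before credentials are even checked.
// app.post('/login', (req, res) => {
//   if (isRateLimited(req.ip)) {
//     return res.status(429).json({ error: 'Too many attempts' });
//   }
//   // ...verify credentials...
// });
```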
2. The NEXT_PUBLIC_ secret leak
This one is epidemic in Next.js apps built with AI. The AI needs an API key to work, so it creates:
```
NEXT_PUBLIC_SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
NEXT_PUBLIC_STRIPE_SECRET_KEY=sk_live_51ABC...
```
The NEXT_PUBLIC_ prefix means these values are bundled into the client-side JavaScript. Anyone who opens your app can extract them from the browser.
Why it matters: Your Supabase service key bypasses Row Level Security. Your Stripe secret key controls your entire billing infrastructure. Both are now public.
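One cheap guardrail is a preflight check over your environment variable names. The sketch below is not part of Next.js; the function name and the list of "secret-looking" hints are our own assumptions, so adjust them to your naming conventions:

```javascript
// Flag any NEXT_PUBLIC_ variable whose name suggests a server-side secret.
// Anything matching these hints will be bundled into client JavaScript.
const SECRET_HINTS = /SECRET|SERVICE|PRIVATE|PASSWORD|_KEY$/;

function findLeakyEnvNames(envNames) {
  return envNames.filter(
    (name) => name.startsWith('NEXT_PUBLIC_') && SECRET_HINTS.test(name)
  );
}

// Usage in a CI step or pre-deploy script:
// const leaks = findLeakyEnvNames(Object.keys(process.env));
// if (leaks.length > 0) { console.error('Leaky env vars:', leaks); process.exit(1); }
```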
3. The stub security pattern
AI assistants often generate security functions that look correct but do nothing:
```javascript
function validateInput(data) {
  // TODO: implement validation
  return true;
}

function checkPermission(user, resource) {
  // Basic permission check
  return true;
}
```
These stubs pass every test, satisfy every type checker, and look fine in a code review — but they provide zero protection. We call this fake security: functions that exist to satisfy the contract without fulfilling the promise.
Why it matters: Your team believes input validation exists. It doesn’t. Your permission system always says “yes.” Every user is effectively an admin.
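For contrast, here is what non-stub versions might look like. This is a hedged sketch, not a complete implementation: the field names (email, role, ownerId) are assumptions about your schema, and in practice a schema library like zod or Joi is a better fit for validation. The point is that both functions have failure paths and deny by default:

```javascript
// Real validation: reject anything that is not the shape we expect.
function validateInput(data) {
  if (typeof data !== 'object' || data === null) return false;
  if (typeof data.email !== 'string') return false;
  if (!data.email.includes('@') || data.email.length > 254) return false;
  return true;
}

// Real permission check: deny unless an explicit rule grants access.
function checkPermission(user, resource) {
  if (!user) return false;
  if (user.role === 'admin') return true;
  return resource.ownerId === user.id; // owners only; everyone else is denied
}
```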
4. Hallucinated imports and phantom packages
AI models sometimes reference packages that don’t exist:
```javascript
import { sanitize } from 'express-sanitizer-pro';
import { validateSchema } from 'json-schema-validator-next';
```
These packages might not exist on npm — or worse, they could be typosquatted packages planted by attackers who know AI hallucinates these names.
Why it matters: If the import silently fails, the sanitization or validation never happens. If someone registers the package name with malicious code, you’ve just introduced a supply chain attack.
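A simple defense is to cross-check import specifiers against what package.json actually declares, so a hallucinated package fails loudly before anyone runs npm install. The sketch below assumes you have already extracted the import specifiers (via a linter rule or AST pass); the function name is our own:

```javascript
// Return import specifiers that refer to packages not declared in package.json.
function findUnknownPackages(importSpecifiers, packageJson) {
  const declared = new Set([
    ...Object.keys(packageJson.dependencies || {}),
    ...Object.keys(packageJson.devDependencies || {}),
  ]);
  return importSpecifiers
    // Skip relative imports and Node built-ins.
    .filter((spec) => !spec.startsWith('.') && !spec.startsWith('node:'))
    // Reduce subpath imports to the package name ('@scope/pkg/x' -> '@scope/pkg').
    .map((spec) =>
      spec.startsWith('@')
        ? spec.split('/').slice(0, 2).join('/')
        : spec.split('/')[0]
    )
    .filter((pkg) => !declared.has(pkg));
}
```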
5. The “works in development” configuration
AI-generated configs are tuned for development:
```
// CORS — allow everything
app.use(cors({ origin: '*' }));

// Cookie settings
{ httpOnly: false, secure: false, sameSite: 'none' }

// Debug mode
DEBUG=true
NODE_ENV=development
```
These settings make development easier. They also make production exploitation trivial. A wildcard CORS origin lets any website call your API and read the responses. Non-httpOnly cookies can be stolen via XSS. Debug mode leaks stack traces and internal paths.
Why it matters: What works in localhost:3000 becomes a security vulnerability the moment you deploy. And AI-generated code rarely includes a “production hardening” step.
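A production counterpart to the defaults above can be as small as this. It is a sketch only: the allowed origin, cookie lifetime, and sameSite choice are placeholder values for you to replace, not recommendations from any framework:

```javascript
// Branch configuration on environment instead of hardcoding dev settings.
const isProd = process.env.NODE_ENV === 'production';

const corsOptions = {
  // Explicit allowlist in production; wildcard only ever in dev.
  origin: isProd ? ['https://app.example.com'] : ['http://localhost:3000'],
  credentials: true,
};

const cookieOptions = {
  httpOnly: true,          // not readable by client-side JavaScript
  secure: isProd,          // sent over HTTPS only in production
  sameSite: 'lax',         // blocks most cross-site request forgery
  maxAge: 60 * 60 * 1000,  // explicit 1-hour expiry instead of forever
};

// Usage: app.use(cors(corsOptions)); res.cookie('session', token, cookieOptions);
```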
6. Missing webhook verification
Every AI-generated Stripe integration we’ve analyzed handles the happy path correctly: create checkout session, listen for checkout.session.completed, provision access.
Almost none verify the webhook signature:
// AI-generated webhook handler
```javascript
app.post('/api/webhooks/stripe', async (req, res) => {
  const event = req.body; // ← No signature verification
  if (event.type === 'checkout.session.completed') {
    await grantAccess(event.data.object.customer);
  }
  res.json({ received: true });
});
```
Without signature verification, anyone can send a fake webhook to your endpoint and grant themselves paid access for free.
Why it matters: Your entire payment model can be bypassed with a single curl command.
7. Dependency bloat with known vulnerabilities
AI tools pull in packages liberally. A simple app might have 200+ dependencies. Within those:
- jsonwebtoken versions with known signature bypass vulnerabilities
- axios versions with SSRF issues
- multer versions with path traversal bugs
- Deprecated packages like request that no longer receive security patches
The AI doesn’t check advisories. It uses whatever version it was trained on — or whatever resolves first.
Why it matters: Each vulnerable dependency is a potential entry point. And with 200+ packages, the attack surface is large.
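This is one place a generic tool does most of the work: npm audit already knows the advisories. A release gate only needs to read its JSON output and decide. The sketch below assumes the metadata.vulnerabilities counts that npm's JSON audit format reports; the function name and zero-tolerance thresholds are our own choices:

```javascript
// Decide whether an `npm audit --json` report should block a release.
function auditBlocksRelease(auditJson, { maxHigh = 0, maxCritical = 0 } = {}) {
  const counts =
    (auditJson.metadata && auditJson.metadata.vulnerabilities) || {};
  return (counts.high || 0) > maxHigh || (counts.critical || 0) > maxCritical;
}

// Usage in a pre-deploy script:
//   npm audit --json > audit.json
//   then load audit.json and fail the build when auditBlocksRelease(report) is true.
```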
The pattern behind the patterns
All 7 risks share a root cause: AI optimizes for “it works,” not “it’s safe.”
This isn’t a bug in the AI — it’s a feature mismatch. Code generation tools are designed to produce functional software. Security is a constraint that requires adversarial thinking, context awareness, and understanding of what can go wrong. That’s a different skill entirely.
What to do about it
You don’t need to stop using AI tools. You need to add a checkpoint between generation and deployment:
- Assume every AI-generated app has these patterns. Not because the AI is bad, but because security isn’t its job.
- Run automated checks before shipping. Not generic scanners — checks that understand AI-specific antipatterns like stubs, hallucinated imports, and fake security functions.
- Get a release verdict, not a finding list. You need to know: is this safe enough to ship for its purpose?
- Fix the blockers, accept the rest. Not every finding needs immediate action. Prioritize by exploitability and impact.
The AI builds fast. A security gate makes sure it ships safe.