When we’re first learning to code, it’s easy to get swept up in the possibilities. We could build anything! There’s a real thrill in writing your first function, running it, and seeing that one perfect word on the screen: true.
So we build and build, stacking function on function, and everything more or less works the way we expect. Unfortunately, there’s a problem we don’t realize until later - our code stinks.
We inevitably make rookie mistakes because, well, we’re rookies. And eventually, an uneasy feeling starts to creep in. Something doesn’t feel quite right. Even though things are working, we know we’ve probably cut corners. In development, that feeling - when something seems off, even if you can’t fully explain why - is called a “code smell”.
What Is a Code Smell?
The term “code smell” was coined by Kent Beck in the late 90s, and became more widely known after appearing in Martin Fowler’s “Refactoring: Improving the Design of Existing Code”. At its core, a code smell is a sign of underlying technical debt - often introduced through inexperience or rushed decisions. The code works, but something about it feels off. It’s messy, brittle, or harder to change than it should be. Eventually, it starts to stink.
Traditionally, code smells include things like duplicated logic, poor naming conventions, deeply nested conditionals, and overly long functions. If the code is technically “working,” there’s often a tendency to take an “if it ain’t broke, don’t fix it” approach - especially when time or budget is tight. The problem is that code smells often hint at deeper design issues that can become much harder (and more expensive) to fix later on.
Now, with the rise of AI-assisted coding, we’re all rookies again - and unfortunately, it’s producing a lot of stinky code. Just not the kind we’re used to.
The New Code Smell
AI coding tools are trained on best practices across programming languages, so they tend to avoid the obvious mistakes a junior developer might make. You’re unlikely to see poor indentation or a missing semicolon. But that doesn’t mean AI-written code is clean. Without experienced supervision, it can still rack up technical debt - just in subtler, less familiar ways.
While not an exhaustive list, here are some of the common new code smells I see in AI-generated code:
Hard-coded API keys
I’m not sure why an AI thinks it’s a good idea to hard-code a private API key into client-side code, but it happens. Needless to say, this is a major security risk and should never make it into production.
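Here’s a rough sketch of what this smell looks like in practice - the key name and config shape are hypothetical, but the pattern is the same one I keep seeing:

```typescript
// Smell: a private key baked directly into client-side code, visible
// to anyone who opens dev tools. (Key value is made up for illustration.)
const API_KEY = "sk-live-abc123"; // shipped to every visitor's browser

// Better: resolve the key from server-side configuration (shown here as
// a plain lookup; in practice this would be an environment variable or
// a secrets manager) so it never reaches the client at all.
function getApiKey(config: Record<string, string | undefined>): string {
  const key = config["PAYMENT_API_KEY"]; // hypothetical variable name
  if (!key) {
    throw new Error("PAYMENT_API_KEY is not set");
  }
  return key;
}
```

The fix itself is boring and well known - the point is that you have to look for it, because the AI won’t always flag it for you.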
Fake results
I’ve run into situations where I believed a feature was working, only to find that no integration had been created. Instead, a hard-coded test value was being returned to simulate success. Maybe the AI planned to build the integration later, but either way, don’t fall for it.
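A simplified sketch of the pattern (names are hypothetical) - everything looks wired up until you notice the return value never changes:

```typescript
interface PaymentStatus {
  paid: boolean;
  transactionId: string;
}

// Smell: the function claims to check payment status, but no real
// integration exists - it just returns a canned success object.
async function getPaymentStatus(orderId: string): Promise<PaymentStatus> {
  // TODO: integrate with payment provider
  return { paid: true, transactionId: "TEST-12345" }; // fake result!
}
```

The giveaway is usually a suspiciously static value - every “successful” call returns the same transaction ID.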
Red herrings
AI often begins down one path, hits a roadblock, then pivots to a new approach - leaving behind unused or half-finished code. This creates confusion, especially when leftover variables and functions have similar names to the ones that are actually in use.
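A minimal illustration of the kind of wreckage this leaves behind (hypothetical names, but the near-identical naming is the realistic part):

```typescript
// Smell: an abandoned first attempt, never called anywhere, with a name
// one capital letter away from the function actually in use.
function formatUserName(user: { first: string; last: string }): string {
  return user.first + " " + user.last;
}

// The version the app actually uses after the AI pivoted:
function formatUsername(user: { first: string; last: string }): string {
  return `${user.last}, ${user.first}`;
}
```

Months later, nobody remembers which one is live, and the next AI session may happily “fix” the dead one.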
Ignoring conventions
If your codebase has existing helper functions or utilities, the AI won’t automatically use them unless explicitly told to. Instead, it may recreate similar logic from scratch for each new task, leading to duplication and inconsistencies.
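A small sketch of the duplication (hypothetical names): the helper already exists, but the new feature re-derives the same logic inline.

```typescript
// Existing helper the codebase already uses everywhere:
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// Smell: instead of calling formatPrice, the AI rebuilt the same logic
// from scratch - now there are two places to update if the currency
// format ever changes.
function renderCartTotal(totalCents: number): string {
  const dollars = (totalCents / 100).toFixed(2); // duplicated logic
  return `Total: $${dollars}`;
}
```

The two produce matching output today, which is exactly why the duplication goes unnoticed - until one of them changes.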
Round and round
One of the more frustrating smells: the AI hits a problem, suggests a fix, then creates a new issue, and eventually re-suggests the original approach as the solution. If you’re not paying attention, you can get caught in a loop - chasing your tail and getting nowhere.
Keeping the Stink Out
With so much code now being generated by AI, it’s easy for quality to slip through the cracks. Features get built fast, but shortcuts, inconsistencies, and blind spots can pile up just as quickly. So what can you do to keep your AI-generated code smelling fresh?
Context is king
Context, context, context. The more you provide, the better your results will be. All major AI coding tools offer a way to inject context into every session - whether that’s a markdown file, settings in the app, or features like Cursor Rules. I’ve seen some context files that are absurdly long, but that might be what it takes to inject enough determinism into the process to get consistent, high-quality output.
Proper planning
I heard a great tip from Ras Mic this week: start your project by asking a deep reasoning model to create an architecture plan. These models are slower, so you don’t want to use them for every step of the process, but for setting the structure and rules that other models will follow during implementation, they’re your best bet. Once the high-level plan is in place, you can delegate the actual coding tasks to faster, lower-level models.
Get a second opinion
If your current AI gets stuck in a loop or just can’t crack a problem, don’t hesitate to get a second opinion - from another AI. I currently use a mix of Cursor (still my go-to for coding), Claude Code (quickly catching up), and ChatGPT (a reliable all-rounder). Between them, there’s not much that can’t be solved.
I’ve also started using CodeRabbit for PR reviews, and it’s been a game changer. It catches issues the others miss, creates GitHub issues automatically, and even suggests prompts you can paste back into your AI environment to fix them. Love this feature!
Trust your gut
Last but not least, trust your intuition. If your Spidey sense is tingling and it feels like things are going off the rails, they probably are. Slow down, check your diffs, and commit regularly so you’ve got clean restore points if something needs to be undone.
After all, a code smell is just that - a sense that something’s not right. The more you code with AI tools, the sharper your instincts will get, and the easier it becomes to sniff out problems before they grow.