AI vibe coding productivity and limits


AI agents and vibe coding are everywhere lately. It's catchy to say developers are going to be replaced, and while that makes for a good headline, I wanted to test it myself before concluding.

As a product engineer who enjoys trying new things, I’ve been experimenting with all the recent tools: Claude, Cursor, Copilot, V0 by Vercel, Bolt by Stackblitz—you name it. Probably by the time you’re reading this, a few of them are already outdated. That’s how fast this AI wave is moving.

What I’ve been building

To explore this properly, I decided to build two small apps, one for personal use and one for work-related problems:

A tax calculator for Spain. As a foreigner, I understand what I owe, but not exactly why; that part was never clear to me.

A planning poker tool to estimate tasks with t-shirt sizes. It’s something I needed at work for quick estimations.

You can check them out here:

I built the apps using prompt instructions alone, describing both the functional and non-functional requirements and focusing on my expectations and the problem I was trying to solve.

I only targeted the web, not mobile, and asked the tools to use React and the libraries I work with daily, so that I could easily assess the outcome, look for patterns and anti-patterns, and provide a deeper analysis.

The tax calculator was way more complex and needed more iteration. That was mainly due to my own lack of clarity on how the data should be presented: defining what was relevant and what wasn’t, and whether to use charts, tables, etc.

The poker planning tool was easier. It’s a common tool among developers, so my prompts didn’t need to be verbose for the result to match my expectations. Both the AI and I shared the same assumptions about how it should look and behave.

What I noticed technically

From a technical point of view, these are my top observations. For simplicity, the results below focus on v0.

Over-engineering: It felt like a large company wrote the code. Layers of complexity, unnecessary abstraction, and a ton of prop drilling. Things that usually happen when many people touch the same codebase without a shared vision. This became a pattern after I iterated the project multiple times.
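To make the prop-drilling complaint concrete, here is an illustrative sketch (not the actual generated code, and simplified to plain functions rather than React components): a value gets threaded through layers that never use it themselves.

```typescript
// "Prop drilling": intermediate layers accept `theme` only to forward it.
type Theme = "light" | "dark";

function page(theme: Theme): string {
  return section(theme); // doesn't use theme, just passes it down
}

function section(theme: Theme): string {
  return card(theme); // same here
}

function card(theme: Theme): string {
  // Only this innermost layer actually reads the value.
  return `card rendered with ${theme} theme`;
}

console.log(page("dark")); // every layer above had to accept `theme`
```

In React, this is typically where context (or composition) would remove the intermediate plumbing; the generated code tended to drill instead.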

Dead code: There were several unused components, components returning null, and UI pieces that never got called. These could have been avoided with better prompting (maybe) or if the agent had asked questions instead of assuming things.

Component bloat: Possibly because of how shadcn/ui is integrated in some tools, I ended up with too many components for simple things.

Weak error handling: Most flows had minimal to no fallback. Probably my fault here. I didn’t specify edge cases or validation needs in the prompt. But I expected more "smart defaults."
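As a hypothetical example of the "smart defaults" I was expecting, a tax-calculator input parser could validate and fall back instead of letting NaN flow through the math (this is my own sketch, not what the tools produced):

```typescript
// Parse a user-typed income figure; reject garbage instead of propagating NaN.
function parseIncome(raw: string, fallback = 0): number {
  const value = Number(raw.replace(/[,\s]/g, "")); // tolerate "45,000"
  // Non-finite or negative input falls back to a safe default.
  if (!Number.isFinite(value) || value < 0) return fallback;
  return value;
}

console.log(parseIncome("45,000")); // 45000
console.log(parseIncome("oops"));   // 0 (fallback instead of NaN)
```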

All in all, the output felt functional but also bloated and hard to scale. It got the job done, but not in a way I’d feel proud of long-term.

What this is good for

Despite my critiques, I don’t want to sound like I’m ranting. This kind of AI-assisted dev has real potential:

Fast prototyping: When I wanted to sketch out an idea for the tax calculator quickly, using AI gave me a solid base to start from. Even though it wasn’t perfect, it gave me a structure that I could improve.

Idea validation: The poker planning tool was great for this. I had a rough idea and wanted to see how it could work in the browser. AI helped me get something up and running without spending days on it.

Quick MVPs for demos or internal tools: Both apps I built could easily be used to demo a concept or shared with a small team to test. Especially for internal tooling, where polish isn't the priority, this workflow saves tons of time.

Generating boilerplate or utility code: For things like form validation, layout structure, or reusable UI blocks, Copilot and the other tools were helpful. They took care of the repetitive bits and let me focus more on logic and flow.
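The form-validation boilerplate I mean looks roughly like this (an illustrative sketch, not the generated output): small rule functions plus a runner that returns the first error.

```typescript
// A validation rule returns an error message, or null when the value is valid.
type Rule = (value: string) => string | null;

const required: Rule = (v) => (v.trim() ? null : "This field is required");
const maxLen = (n: number): Rule => (v) =>
  v.length <= n ? null : `Must be at most ${n} characters`;

// Apply rules in order and report the first failure.
function validate(value: string, rules: Rule[]): string | null {
  for (const rule of rules) {
    const error = rule(value);
    if (error) return error;
  }
  return null;
}

console.log(validate("", [required]));              // "This field is required"
console.log(validate("XL", [required, maxLen(3)])); // null (valid)
```

Writing this kind of thing by hand is tedious but mechanical, which is exactly where the tools shine.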

It reminded me of how I felt the first time I used Tailwind: less low-level control but more speed. It helps you ship faster if you’re OK with some trade-offs.

Copilot still feels like magic when I’m refining components, writing tests, or just too lazy to type repetitive stuff. But using these agents to build complete apps? That’s another story.

The bottom line

These tools aren’t ready to replace developers; they’re here to assist. And that’s OK. There must be a reason Copilot was named that way.

If you give them no context, they’ll make assumptions. They’ll bloat your project fast if you don’t set rules or guide their logic, and scaling or refactoring that kind of code isn’t easy. So, in its current state, I wouldn’t recommend this approach for big tech, where codebases are large and complex.

Some platforms, like Cursor, let you define conventions and behaviors, like teaching your assistant. But that still requires effort, and the knowledge to define those rules.
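For a sense of what "teaching your assistant" looks like, a project rules file might contain something like this (illustrative; the exact file name and mechanism vary by tool and version — Cursor, for instance, has read plain-text project rules files such as .cursorrules):

```text
# Project conventions for the AI assistant (hypothetical example)
- Use React function components with TypeScript; no class components.
- Prefer composition or context over prop drilling.
- Do not create components or abstractions that are not used.
- Every user-facing flow needs input validation and an error fallback.
```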

So no, I don’t think we’re close to the end of the dev role. But I do think the job will change. AI can help you move faster, but it’s up to you to sail the ship.

I’m still exploring. Still curious. But for now? I’ll keep my editor open, Copilot on, and prompts ready, but I’m not handing over the wheel just yet.


Written by Manu

I am a product-driven JavaScript developer, passionate about sharing experiences in the IT world, from a human-centric perspective.
