The dystopian illusion of AI productivity


The other day, a friend told me a story that really stuck with me.

At his job, he asked another team a simple question. One AI created a ticket. Another AI responded. Another AI opened a pull request. Another AI told him the issue was resolved.

Plot twist: it wasn’t.

By the time a human actually stepped in, it was already too late, and the cost was way higher than it should have been.

That story kept looping in my head.

Not because AI failed. But because no one really understood what was happening.

So the question is:

Are we actually improving how we work… or just delegating without thinking?

The problem isn’t AI, it’s blind delegation

I’m not an AI hater. Not at all. I’ve been through all the phases, and at this point I can’t see myself going back to working without an LLM.

That said, we’re in an experimental phase. We need to fail, iterate, and refine. That’s part of the process.

But what worries me is the trend. We are delegating too much, too fast.

People can’t write a simple email without assistance. Teams automate processes they don’t fully understand. Decisions flow through layers of abstraction with no real ownership.

It reminds me of the classic argument: “Why learn math if calculators exist?”

But now we’re taking it to an extreme level.

This is where it starts to feel dystopian, but not for the reasons we usually think.

The dystopian feeling isn’t about AI

It’s easy to jump to extremes and say, “This is dystopian.”

But that’s not really it. AI can become dystopian, but that’s not the default outcome. It’s a direction we drift into if we’re not intentional.

What actually creates that feeling is something simpler:

The loss of understanding and ownership.

When:

  • Decisions are made by systems no one fully understands
  • Outputs are trusted without verification
  • Humans become passive reviewers (or are completely absent)

That’s when things start to break in subtle but expensive ways. Not dramatic failures. Quiet ones. Like the story I shared at the beginning.

AI as the next abstraction layer

I see AI as just another abstraction layer.

Like frameworks once were. Like no-code tools. Like visual builders such as Dreamweaver (yes, I am that old!).

We’ve seen this fear before.

People said:

  • Calculators → “We won’t know math anymore.”
  • Excel → “We don’t need accountants anymore.”
  • IDEs → “We won’t understand memory or compilation.”
  • Frameworks → “We’re abstracting too much.”

And yes, some skills faded. But overall, we didn’t become worse. We shifted focus.

Accountants still exist. Developers still exist.

The heavy lifting has always been abstracted away. That’s not new.

What is new is how quickly we’re willing to give up control.

What’s actually different this time

AI is not just a tool. It’s an agent-like abstraction.

It doesn’t just help you do things: it can act for you.

That’s a big jump.

Until now, you needed some level of understanding before you could use an abstraction.

Now, you can bypass that almost completely. And that’s where the real risk starts.

We’re delegating before building intuition.

The real bottleneck was never writing code

Let’s be honest.

Writing code was never the biggest bottleneck. If typing speed were the issue, we could just hire more developers.

The real challenge has always been understanding the product. Knowing what to build, why it matters, and how to execute.

We used to measure work by hours. Then commits. Then the number of PRs. But output without quality is just noise.

AI makes this even more dangerous. Because now we can produce a lot of output that looks correct.

The illusion of productivity

AI can generate tickets, PRs, comments, summaries…

It looks like progress.

But underneath, something more subtle is happening:

  • A false sense of productivity
  • Shallow understanding at scale
  • Human intervention that comes too late (and costs too much)
  • Systems that look like they work… until they don’t

That’s actually worse than obvious failure.

Because it hides.

Abstraction requires responsibility

AI is just another layer. And every abstraction has a trade-off:

You gain speed. You lose visibility.

The mistake isn’t using AI. The mistake is using it without keeping the ability to step in and understand what’s happening underneath.

If you can’t debug it, you probably shouldn’t fully delegate it.

The construction analogy

I like to think of this as a construction site. Why can’t everyone just be the director?

Because execution is not trivial.

It requires:

  • context
  • sequencing
  • trade-offs
  • experience

AI can execute tasks.

But deciding what should be done, when, and whether it makes sense at all: that’s still human work.

That hasn’t changed.

Conclusion

We’re not heading into dystopia by default.

We’re standing at a fork.

One path: delegate everything → lose skills, lose control.

The other path: use AI intentionally → amplify judgment and impact.

Same tool. Different outcomes. If anything, AI is exposing something uncomfortable:

Writing code was never the hard part. Thinking clearly always was.

And now there’s nowhere to hide.

