Building software that is meant to disappear
Technology moves faster than ever. New frameworks emerge every year, tools change constantly, and deprecation has become an integral part of everyday work. In many ways, this speed is beneficial. It enables innovation, removes old constraints, and helps us build better products.
At the same time, it raises a quiet question. When we know that what we build today may be replaced very soon, how does that affect the way we design, code, and care about our work?
This is not an argument against change. It is an attempt to look at how expectations about lifespan shape the systems we build.
A simple hardware example
One of the clearest examples of planned obsolescence in hardware is the sealed battery.
Most modern devices use lithium batteries with a known lifespan. After a few years, the battery loses capacity. The device still works, the screen is fine, the processor is fast enough, but using it becomes frustrating. The product is not broken; it is just no longer practical.
This happens because the battery cannot be easily replaced. That is not an accident; it is a design decision made from the beginning. Of course, some of that is driven by technical constraints like waterproofing or space, but the result is still the same: the product is built with a limited useful life in mind.
This example is simple, but it helps explain what is happening in software today.
Software did not always work this way
Many older systems were built to last. Some air traffic control software, written in COBOL or Fortran decades ago, still runs today. Replacing these systems was expensive and risky, so engineers focused on stability. Backward compatibility mattered. Changes were careful and incremental. Longevity was not a nice idea; it was a requirement.
The assumptions were different. Software was something you maintained, not something you replaced every year.
The modern software equivalent
Today, many software systems are built with a different expectation. We assume frameworks will change. APIs will be deprecated. Internal tools will be rewritten. Sometimes this is even planned from the start.
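Deprecation is so routine that our tools have first-class support for it. As a small illustration (the function names here are hypothetical), TypeScript-aware editors read the `@deprecated` JSDoc tag and warn anyone still calling the old API:

```ts
/**
 * @deprecated Since v2.0. Use fetchUserProfile() instead; removal is planned for v3.
 * Editors that understand TypeScript will strike through calls to this function.
 */
export function getUser(id: string): Promise<unknown> {
  return fetchUserProfile(id);
}

/** The replacement that the deprecated function forwards to. */
export function fetchUserProfile(id: string): Promise<unknown> {
  return fetch(`/api/users/${id}`).then((res) => res.json());
}
```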
The system still works, but keeping it alive becomes harder over time. At some point, replacing it feels cheaper than maintaining it.
This is very similar to the sealed battery. The product does not fail. It simply reaches a point where continuing no longer feels worth it.
How lifespan changes quality
This leads to an uncomfortable question.
If we believe something will not last long, how much effort do we put into making it good?
When a short lifespan is assumed, incentives change. Maintainability becomes less important. A clear design is easier to postpone. Small shortcuts feel acceptable because the code will be replaced anyway.
I’ve worked on projects where we consciously skipped writing tests or documentation, telling ourselves the code would be replaced in six months. Most of the time, it stayed around much longer, only harder to work with.
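A hypothetical sketch of what those shortcuts tend to look like in practice: a "temporary" workaround, written for a migration that never quite finishes.

```ts
// Hypothetical example of a "temporary" fix that quietly becomes permanent.
// The legacy feed sends European-formatted prices like "1.234,56".
export function parsePrice(raw: string): number {
  // TODO(2021): remove once the new feed ships. (It never did.)
  const normalized = raw.replace(/\./g, "").replace(",", ".");
  return Number(normalized);
}
```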
This does not mean people do bad work on purpose. It means they are reacting to the environment they are in. If replacement is expected, longevity stops being the main goal.
The cost does not disappear. It comes back later as rewrites, migrations, accidental complexity, or teams that are afraid to touch old code.
Shipping fast and its consequences
How much of the code we shipped to production is still running today?
Modern teams often repeat the idea that done is better than perfect. In many situations, this is beneficial. Shipping fast can help teams find product-market fit earlier, learn from real users, and move at the same pace as competitors. Speed can reduce risk when you are still exploring and unsure what really matters.
Problems appear when speed stops being a phase and becomes a permanent mode. Code written to move fast is often harder to maintain. Small shortcuts turn into technical debt, and that debt grows over time. What started as a temporary decision becomes a system that is difficult to scale, change, or even understand. Teams then spend more time maintaining workarounds than improving the product, and there never seems to be enough time to “fix it properly”.
Some people now go one step further and argue that shipping rough or low-quality code is fine because AI will fix it later. Refactoring, documentation, and even redesign are seen as problems we can postpone indefinitely. This idea is tempting, but risky.
AI can help improve code, but it still works within the constraints we create today.
If the architecture is unclear or the original intent is lost, even AI ends up rearranging the mess instead of fixing it. You can't refactor what you don't understand.
Shipping fast is not wrong by itself. The real challenge is knowing when speed is helping you learn, and when it is quietly locking you into a system that cannot grow.
Deprecation is not the problem
This is not an argument against deprecation.
Deprecation is necessary to keep technology moving forward. Without it, systems become heavy and hard to evolve.
A good example is Apple removing the headphone jack. Keeping it had real costs. It took space, added weight, and limited other design choices. Removing it allowed new features and improvements.
Deprecation can enable innovation. The problem starts when short lifespan is no longer a choice, but the default.
A human question
When everything is temporary, responsibility changes.
If code is expected to live for only a short time, how much care does it deserve? How much documentation? How much thought about the next person who will work on it?
This is not only technical debt. It is also human debt. When everything feels temporary, people stop investing. Knowledge fades. Burnout grows. Teams lose their sense of ownership.
Sustainable software isn’t just about code quality; it’s about making the work feel worth doing.
Fast iteration and good quality can exist together. But only if we are intentional about which parts of our systems change quickly, and which parts should remain stable.
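One way to be intentional about that split, sketched below with hypothetical names: treat the interface as the long-lived part and let the implementations behind it churn.

```ts
// A minimal sketch of "stable contract, replaceable implementation".
// The interface is the part we commit to keeping alive; what sits behind it
// can be rewritten or replaced without touching any caller.
export interface ReceiptStore {
  save(id: string, amountCents: number): Promise<void>;
  find(id: string): Promise<number | undefined>;
}

// Today's implementation: in-memory, good enough to ship and learn with.
export class InMemoryReceiptStore implements ReceiptStore {
  private receipts = new Map<string, number>();

  async save(id: string, amountCents: number): Promise<void> {
    this.receipts.set(id, amountCents);
  }

  async find(id: string): Promise<number | undefined> {
    return this.receipts.get(id);
  }
}
```

Because callers depend only on the interface, a database-backed version can replace the in-memory one later without the rewrite rippling through the codebase.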
An open question
Technology will keep moving fast. That is not going to change.
The real question is where we should slow down. Which parts of our systems deserve to last longer? Which layers should resist constant replacement?
When we design everything to be replaced, we do not just move faster. We quietly change what responsibility and care mean in software.
That is a question worth thinking about.