A few weeks ago, I was talking with a group of close friends about how long we’ve been programming when I realized it has been more than 17 years since I started developing in .NET. I’ve been working with it since the very first beta, in the early 2000s.
During the conversation, I mentioned that nowadays I already have legacy .NET code to maintain. That was when one of my friends asked me: “Isn’t it all legacy now?”
That comment stayed with me for days. Even though I disagree with him, and I’m sure it was much more sarcasm than anything else, the question of what makes something legacy kept lingering in my mind. What makes software legacy — and why does it feel like something bad?
When we look at the definition of “legacy”, considering the context of the conversation, we can observe the following:
So, apart from the first item, all the other definitions are not so pleasant. But I assure you that I still enjoy working with .NET apps. Given that, can we assume it is not legacy, but something else?
Leaving .NET aside for a moment, how many other languages more than 15 years old are still in use out there? I can quickly list plenty of them: C, C++, Java, JavaScript, Python, Lua, Ruby, Perl, and so on. Just because these languages are old does not make them legacy. Most of them still receive updates and modern features, and remain able to support almost any use case you throw at them. And I bet that many developers out there are still having fun with them.
What really makes software legacy is not the language used to create it. Most of the legacy software I see around became legacy due to a lack of investment to keep it growing, a lack of diligent developers taking care of it, or simply because a better product was invented. Don’t be ashamed to develop in a programming language that is not “in the hype”. Part of your success lies in working with something that makes you happy.