Sacrificing software quality to “maximize” profits

 

I recently read a blog post by a fellow software developer that captures the state of modern software development: its standards and its inadequacies. He describes the grotesque way software products have become Frankenstein’s monsters, stitched together by all the Frankensteins who have written them. It really stuck with me, and I started thinking about why things are the way they are. If you are interested, the original post is here: http://tonsky.me/blog/disenchantment/

Onto the topic at hand.

I have worked on a lot of different projects, and they have plenty in common, but the one thing that stands out is that all of them were built for a paying customer, which means they had to bring monetary value to the project owner. At the beginning of my career the customers were distant to me; I didn’t necessarily have direct contact with them, but that changed after we founded our own services company. Along the way, I started to see patterns that heavily influence software quality and performance.

Let’s go over them one by one.

Deadlines and ever-evolving requirements

 

I started my career in a company focused on developing… developer tools, and I was molded, trained and mentored into a professional aiming to produce something that would work for a vast number of customers, with a vast number of use cases, all of which had to be covered by our software. That automatically means lots and lots of corner cases, and lots and lots of feature requests that had to be completed by yesterday. The only possible outcome: large code libraries with a ton of “anti-pattern” code that would never have ended up in its current state had we known in advance about the exact corner cases it covers. Here’s the thing, though:

 

No one can ever know all the cases the user is going to want to use your software for.

… and when that software is a developer tool, a code library, a UI component, there is no way in hell you can design it to be as optimal and as clean as you’d like it to be from the start.

We have tried it. We did a complete redesign of major parts of our libraries, only to find them in a similar state a few months later. Of course, we could have taken the time to integrate features and fixes much more carefully, but… who has the time? I know how that sounds, believe me, I do. But when you are pushed against the wall with a pile of features you have to integrate by the end of the planning period, and bugs and feature requests are dropped out of nowhere every sprint, there isn’t much you can do. You can blame that on product management, and a big part of that may actually be true, but a product manager’s day job actually is talking to customers and doing their best to address their needs and problems, which sometimes means pushing the developers to… well, do stupid things.

Even now, when we run our own software development services company, the driving force is the client’s profit, or rather the hurry to get to it. While planning a project’s execution we frequently reach a point where we can do something the right way, or we can do it fast. That is almost always the client’s decision, and nine times out of ten we have to do it fast. Even then, we try our best to execute in the best possible way, but still.

Investor interests

… and that dreaded hockey-stick graph

 

When I left the company to join a single-product-oriented one, I thought things would be a lot better. We wouldn’t have to think of all the ways developers might want to use our libraries, because, well, we didn’t target developers. We designed software for the end user, and the product had well-structured requirements and design.

[Image: hockeystick.png]

Boy, was I wrong…

It wasn’t long before we found ourselves yet again pushed against the wall to write faster, integrate quicker, add this, then add that, release sooner, and so on. Sooner rather than later, the illusion that we could do something right from the ground up became more and more brittle, until, at one point, it collapsed altogether.

I was a bit puzzled. We weren’t live yet during the development process. The product was scheduled to be released almost a year after we started, with a bare minimum of features, almost like an MVP. We didn’t have customers to take care of yet. So… what went wrong?

Well, as it turns out, we did have a customer. Whether it is the board, upper management or investors, even end-user-oriented product companies have a “customer”, one that keeps changing requirements, adding features and pushing deadlines.

[Image: Dilbert.png]

But… why? After all, it was their product; wouldn’t it be better for it to be in the best possible state when sold to customers?

Short answer - no…

Long answer - as long as projected revenue is met, investors are happy and the stock price goes up, it doesn’t really matter how fast, pretty, user-friendly or maintainable a product is. All that matters is that customers will buy it. And they did. And so the next product was developed the same way, and the dreaded cycle of bad software development practices started once more in yet another company.

The thing is, most new software products are built around revenue rather than excellence, performance and innovation. 90% of product-oriented startups aim at an exit within the first two to three years of their existence. Of course, very few make it, but the ones that do don’t really set the bar any higher for the ones that follow, with a few exceptions. Don’t get me wrong, I am only referring to the software side of these companies. Their ideas and product concepts are almost always revolutionary, but software-wise…

All of that eventually leads us to,

An abundance of libraries

[Image: Libraries.png]

Nowadays, it seems we are drowning in an ever-growing sea of libraries, each solving a different problem… or at least that’s what they claim. I remember when I first encountered this. It had been only four years since my last brush with web development and JavaScript when I started working on a new project. Imagine my surprise when I found out that good old JavaScript wasn’t restricted to browsers anymore… We were writing mobile apps. With JavaScript. Well, not exactly JavaScript… It was the era of CoffeeScript, TypeScript, YataYataScript. It seemed no one was using plain old JavaScript anymore. I even had an interview at a company where they asked me what I wrote JavaScript on. Hint: the right answer wasn’t “a keyboard?”. Neither was the IDE of your choice for the task…

Maybe that was the point where something had to be done. But it wasn’t...

Oh well, too late now. Skip another four years and you get React.js, Angular.js, Vue.js, Meteor.js, Backbone, Node, Aurelia, Polymer… .js. It seems you can select a random word, slap a .js after it and you are bound for success. Of course, all of the above-mentioned startups aim to adopt something modern, new, “improved”, which keeps fueling all those numerous frameworks. Maybe every one of them fixes something fundamentally wrong in our modern development paradigms, but I will probably never know. Why? Because as soon as I have learned any single one of them, it will be outdated, marked as a framework used in “legacy” applications, making way for the next… .js (hmmm… NEXT.js… sounds catchy… aaaaand it’s taken).

[Image: libraries (1).png]

All of those frameworks have something in common: they all aim to speed up the development process, to “simplify” the codebase, to maximize profits for the companies that decide to use them in their products.

Of course, all of that comes at a cost: large dependencies that are rarely, if ever, fully used by most applications (take a look at the first point in this post), sluggish performance in many cases, and a nightmare for the DevOps engineers. Of course, there are Docker and Kubernetes for that.
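To make the dependency cost a bit more concrete, here is a small, hypothetical TypeScript sketch; lodash and kebabCase are just my example, not something from the original post, and the point is only the pattern: one careless import can drag an entire utility library into a bundle when a single function is all that’s ever used.

```typescript
// Hypothetical illustration of dependency weight, not a dig at any particular
// library. Assumes a TypeScript/bundler setup with esModuleInterop enabled.

// The convenient version: a single import that can pull all of lodash into
// the bundle if the bundler cannot tree-shake it away.
import _ from "lodash";

// The slightly more deliberate version: import only the module you use.
import kebabCase from "lodash/kebabCase";

const title = "Sacrificing software quality to maximize profits";

const slugHeavy = _.kebabCase(title); // same output...
const slugLight = kebabCase(title);   // ...far smaller dependency footprint

console.log(slugHeavy === slugLight); // true
```

Multiply that by a few dozen dependencies and “large dependencies that are rarely, if ever, used” stops being an abstraction.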

All of that unavoidably takes its toll on software development and leads to large applications with poor performance, because the one who sells the software doesn’t need it to be memory-optimized or to have astounding performance. They need it to generate revenue.

That’s why the development of all those frameworks is focused on making them easy to adopt and able to produce an end product as fast as possible; and when the codebase becomes completely unmaintainable, it’s time to try the next big app development framework.

Another reason we don’t spend much time optimizing our code is

Hardware limitations

… or rather, the lack thereof

There was a time when you had to count the bytes your code would actually take up on the computer it would run on. You had to reuse variables, keep track of line counts because line terminators were “expensive”, and optimize algorithms to save CPU time. All of that basically went out the window. Today we have no regard for saving RAM or optimizing CPU performance. We rely on third-party libraries that someone else wrote. We use built-in algorithms without stopping to think whether they’re the optimal solution in our case (a small sketch of what I mean follows below). We use unoptimized render engines rather than writing our own. And that’s fine (in most cases). The reason is very, very simple:

We don’t have to.

We have huge amounts of RAM and a ridiculous amount of computing power in each of our devices’ CPUs. There is always more memory, there is always another core to squeeze some power from.
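Here is the small sketch I hinted at above: a trivial, entirely made-up TypeScript example of “using the built-in thing without thinking”. Both versions produce the same result, but the first does an O(n·m) scan that only stays tolerable because the hardware underneath is so forgiving.

```typescript
// A deliberately simplified, hypothetical example; the data is invented.
const users: string[] = Array.from({ length: 20_000 }, (_, i) => `user-${i}`);
const banned: string[] = Array.from({ length: 10_000 }, (_, i) => `user-${i * 2}`);

// The "it works, ship it" version: every .includes() call re-scans the whole
// banned array, roughly 20,000 x 10,000 string comparisons in total.
const activeSlow = users.filter(u => !banned.includes(u));

// One extra line of thought: build a Set once and do constant-time lookups.
const bannedSet = new Set(banned);
const activeFast = users.filter(u => !bannedSet.has(u));

console.log(activeSlow.length === activeFast.length); // true, at a very different cost
```

Nobody notices the slow version on a modern laptop, which is exactly the point.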

 
[Image: Chrome_RAM.png]

Chrome, in which I’m currently writing this blog post, claims to take around 800 MB of RAM… with only one tab open… running a simple text editor… And that’s fine, because I have another 15 GB of RAM left for other stuff. Could it be better? Of course it could. Will it be? I doubt it. A new rendering engine for modern browsers may seem like a good idea, but trust me, all of the corner cases the current ones handle will resurface and slap us in the face, because we can’t think of everything from the start.

 

Don’t get me wrong, there are companies out there that do aim for that. Off the top of my head, most of them are in the entertainment industry, but hey, at least there’s a precedent, right?

Concluding thoughts

Maybe all of the above is complete and utter nonsense, but I don’t think software developers have actually forgotten what it’s like to write quality software. I think we have adapted to today’s status quo and are playing by the rules of the shot callers, doing our best with what we’ve got. We still solve numerous problems, even if they aren’t of the “software quality” kind. Sacrificing some of the would-be greatness of the software we write, we aim to solve real-world problems as fast as we can and create software that will generate revenue.

The only way we can change that is by solving 3 major problems:

  1. Feature requests dropped in the middle of a planned period

  2. Rushed execution

  3. Tests, tests, tests (or rather, the lack of them)

To do that, we have to become more cynical users of the software we buy and demand quality instead of fast rollouts. We have to reject the mediocre products being sold to us, because this philosophy has spread its roots into almost all aspects of today’s society, starting with software development, passing through civil engineering and even reaching deep inside government structures.

It all comes down to the end user to change the status quo.