What is your special sauce?
What is the churn rate?
How much does a client cost you?
Those are some of the questions an investor might ask a startup. Founders could ask themselves the same questions when preparing a pitch or getting ready for due diligence. They are all important, and there are resources available to help founders calculate or answer most, if not all, of them. I would, however, like to look at just one, the last one. So, how much does a client cost you?
Some companies can’t be scaled easily, or at all. If you are running a small business, scalability and digital sustainability might not concern you much. In a startup, however, the ability to scale is a key factor. Being able to ramp up your user count, the number of installations, or the volume of data points is crucial to continued operation and to the company’s attractiveness to investors.
The vast majority of startups use some sort of cloud computing, be it AWS, Google, or Azure. The pricing models are almost identical, so I will base my calculations on Google’s, since theirs is the easiest to read.
Imagine that your business collects heart rate values from your users’ wearables and delivers them to your backend for analysis and processing. The data is delivered and then stored in a MySQL database as an integer (INT) with a fixed size of 4 bytes. To have enough data points for noise filtering, you are looking at an average of 2 readings per minute. Here is a little table showing the overall costs of that.
[Table: per-person yearly storage figures (number of data points, size of data points per person per year in GB, average cost per 1 GB of storage per year, and per-year storage cost) for people aged 40-65 with hypertension using smart wearables in the Czech Republic, and for the subset of them receiving health benefits]
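The per-person storage figure is easy to sanity-check. Here is a minimal sketch of the arithmetic, counting only the raw 4-byte values (real tables add row overhead, timestamps, and indexes on top):

```python
# Rough per-person storage math for the heart-rate scenario:
# 2 readings per minute, each stored as a 4-byte MySQL INT.
READINGS_PER_MINUTE = 2
BYTES_PER_READING = 4

minutes_per_year = 60 * 24 * 365  # 525,600
points_per_year = READINGS_PER_MINUTE * minutes_per_year
raw_bytes_per_year = points_per_year * BYTES_PER_READING

print(f"{points_per_year:,} data points per person per year")
print(f"{raw_bytes_per_year / 1e6:.1f} MB of raw values per person per year")
```

That works out to roughly a million data points and about 4 MB per person per year before any database or backup overhead.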
The numbers don’t look that scary, do they? Remember, however, that this is just one dataset, the tip of the data transfer and processing iceberg. It doesn’t take into account the traffic used, the CPU-hungry encryption-decryption cycle, backups, etc.
I would like to draw attention to a significant difference between optimized and non-optimized data. Even at this level, the difference is more than eightfold. What optimizations do I keep yammering on about, you might ask? To answer that question, I would need to ask another one. What are the downsides of Low-Code/No-Code development?
Some might say that it’s the difficulty in code control. Others, that it creates a mess in versioning.
In my opinion, their biggest flaw is a lack of optimization.
Imagine you’ve got a small package to deliver to a warehouse two blocks away. The package is the size of a shoebox and it’s filled with candy. Let’s say it weighs 5 lbs. But the only means of transportation you’ve got available is an 18-wheeler. That wouldn’t be the best tool to use, would you agree? You’d prefer a bicycle or a scooter in this case. That huge truck is perfectly fine when you’ve got a load full of pallets to deliver to another town, though.
That is Low-Code/No-Code in essence. Sure, there are some optimizations. My truck analogy, like most analogies, doesn’t paint a full picture. But you got the gist of it, I reckon.
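To make that gap concrete, here is an illustrative comparison between the packed 4-byte value from earlier and the kind of verbose text payload a generic connector might emit. The JSON shape below is a made-up example, not any real platform’s format, so treat the exact ratio as illustrative:

```python
import json
import struct

# A single heart-rate reading, stored two ways.
# The JSON field names here are hypothetical.
reading = {"sensor": "hr", "value": 72, "unit": "bpm"}

verbose = json.dumps(reading).encode("utf-8")  # text payload
packed = struct.pack("<i", reading["value"])   # the 4-byte INT from earlier

print(f"{len(verbose)} bytes as JSON text")  # 44 bytes
print(f"{len(packed)} bytes packed")         # 4 bytes
print(f"~{len(verbose) // len(packed)}x size difference")
```

Even this toy payload lands well past an eightfold difference, and that’s before compression, indexing, and transfer overhead enter the picture.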
With chip shortages, data costs are going up. Storage, computation, transfer – all of it is getting more expensive, while datasets grow ever larger.
Now, think of a time when a carbon footprint will be printed not only on food but on software products too. In my opinion, we aren’t too far off. There’s another way to look at it as well. How much is the health of your loved ones worth to you? Is it more or less expensive than hiring a developer? I doubt there is an MSRP on that.
Low-Code/No-Code seems like such a good idea until you start doing the math. We got corrupted by the abundance of processing power. Why spend time and money on optimization? Let’s just chuck more RAM at it, or a couple more CPU cores.
There was a time when writing lean code was a badge of honor for developers. There was a time when we were fighting tooth and nail for every CPU cycle. With the amount of data available, with big data being used in many products we use daily, I believe it’s time to bring it back.
I am not saying, “Don’t use Low-Code/No-Code systems”, not at all! Just keep these questions in the back of your mind:
How much would it cost me in resources in two years?
How much would that be when my dataset hits 2TB?
How much CO2 would my application produce when I get 100,000 clients?
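A back-of-the-envelope projection for the first two questions might look like this. The $0.02 per GB per month rate is an assumption in the ballpark of public cloud storage pricing, not a quoted figure, and the ~4.2 MB per person per year comes from the 2-readings-per-minute scenario above:

```python
# Hypothetical storage-cost projection; plug in your provider's real rate.
PRICE_PER_GB_MONTH = 0.02  # assumed, not a quoted cloud price

def yearly_storage_cost(dataset_gb: float) -> float:
    """Yearly storage bill for a dataset of the given size."""
    return dataset_gb * PRICE_PER_GB_MONTH * 12

mb_per_client_year = 4.2  # raw heart-rate values per person per year
clients = 100_000
dataset_gb = clients * mb_per_client_year / 1024  # after one year

print(f"{dataset_gb:.0f} GB after one year across {clients:,} clients")
print(f"${yearly_storage_cost(dataset_gb):,.2f} per year at the assumed rate")
print(f"${yearly_storage_cost(2048):,.2f} per year once the dataset hits 2 TB")
```

This covers storage alone; egress traffic, compute, and backups would sit on top, which is exactly why the unoptimized-payload multiplier matters.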