Code performance vs system performance

Just a quick thought: as non-volatile storage becomes faster and more affordable, I/O will cease to be the bottleneck it currently is, especially for database servers. Granted, some applications and web sites will always have to shard their database layer because they deal with a volume of writes well above what a single DB server can handle (I'm talking about mammoth social media sites such as Facebook, Twitter and Tumblr). By database in this context I mean relational databases. NoSQL databases worth their salt are distributed from the get-go, so I am not referring to them in this discussion.

For people who are hoping not to have to shard their RDBMS, things like memcached for reads and super-fast storage such as FusionIO for writes give them a chance to scale a single database server up for a much longer time. By a single database server I mostly mean the server where the writes go, since reads can be scaled more easily by sending them to slaves of the master server, in the MySQL world for example.
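The read path described above is the classic cache-aside pattern: check memcached first, fall back to the database only on a miss. Here is a minimal sketch of that pattern; a plain dict stands in for memcached, and `db_lookup` is a made-up placeholder for a real query against the master/slave tier.

```python
# Cache-aside read path: check the cache first, go to the database
# only on a miss, then populate the cache for future reads.
# A plain dict stands in for a real memcached client here.

cache = {}

def db_lookup(user_id):
    # Placeholder for a real query against the database tier.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:              # cache hit: no DB round trip
        return cache[key]
    row = db_lookup(user_id)      # cache miss: read from the database
    cache[key] = row              # warm the cache for subsequent reads
    return row

first = get_user(42)   # miss: goes to the database
second = get_user(42)  # hit: served straight from the cache
```

In production you would add an expiration time on each cache entry and invalidate (or update) the key on writes, so the cache does not serve stale rows.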

In this new world, the bottleneck at the database server layer becomes not the I/O subsystem but the CPU. Hence the need to squeeze every ounce of performance out of your code and your SQL queries. Good DBAs will become more important, and good developers writing efficient code will be at a premium. Performance testing will gain a greater place in the overall testing strategy, as developers and DBAs will need to test their code and their SQL queries against in-memory databases to make sure there are no inefficiencies in the code.
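As one concrete way to do that kind of query testing, here is a sketch using SQLite's in-memory mode and `EXPLAIN QUERY PLAN` to check that a query actually uses an index instead of scanning the whole table (the table and index names are invented for the example):

```python
import sqlite3

# Test a query against an in-memory database and inspect its plan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(100)],
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
    ("user42@example.com",),
).fetchall()

# The plan's detail column should mention idx_users_email; a plain
# SCAN of the users table would signal an inefficient query.
print(plan)
```

A performance test suite can assert on the plan text, so a query that silently regresses to a full table scan fails the build before it ever reaches a production-sized dataset.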

I am using the future tense here, but the future is upon us already, and it's exciting!


Simon said…
Hi Grig. Nice article.

I'm interested to know, from a tester's rather than a developer's perspective: what tools do you think are useful for measuring code performance?
Koen said…
I disagree. Even if the I/O subsystem becomes an order of magnitude faster, that will not be enough to keep the CPU busy all the time: I/O is still multiple orders of magnitude slower than RAM, which in turn is orders of magnitude slower than the CPU cache.
Grig Gheorghiu said…
Koen -- you're referring to raw speeds here. I'm referring to poorly written SQL queries, and to poorly written code that uses those queries. That combination can tie up the CPU, and potentially cause deadlocks as well, no matter how fast your CPU is.
Grig Gheorghiu said…
Simon -- thanks for the kind words. I'll follow up on your question with another post soon.
Benjamin said…
Poorly written SQL queries are poorly written because they issue redundant queries that hit the DB, not because they use so much CPU to process DB results. Databases that are an order of magnitude faster are going to lessen this problem, not increase it.

Profiling is always the best way to see where to focus optimization efforts. SSDs don't change that.
Grig Gheorghiu said…
Benjamin -- I disagree. I think slow I/O masks a lot of issues that come to the forefront once I/O is no longer the problem. Once you eliminate your #1 problem, #2 becomes #1. And for most people #2 is poorly written queries, and poorly written code that issues too many redundant queries.
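The "redundant queries" both commenters mention is usually the N+1 pattern: one extra round trip per row instead of a single join. A minimal sketch of both versions, using SQLite in memory (the tables and data are made up for the illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 2, 'second'), (3, 1, 'third');
""")

# Redundant (N+1): one extra query per post just to fetch the author.
slow = []
for post_id, author_id, title in conn.execute("SELECT * FROM posts"):
    name = conn.execute(
        "SELECT name FROM authors WHERE id = ?", (author_id,)
    ).fetchone()[0]
    slow.append((title, name))

# Efficient: one query, letting the database do the join.
fast = conn.execute("""
    SELECT p.title, a.name
    FROM posts p JOIN authors a ON a.id = p.author_id
    ORDER BY p.id
""").fetchall()
```

Both produce the same result, but the first version issues N+1 queries for N posts. With slow I/O the extra round trips hide among disk waits; with fast storage, the per-query parsing and execution overhead is exactly the kind of CPU cost that floats to the top.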
