One nanosecond is to one second as one second is to 31.7 years


Peter Burns wrote a great post last week about timescales as they might be “perceived” by a computer’s CPU… “your CPU lives by the nanosecond” [and humans live by the second]. His post seems to be loosely based on this article.

The comparison really resonated with me, and I think it provides a useful way to get an intuitive handle on the tradeoffs we make when designing software systems…

A nanosecond is one billionth of a second.

Moderately fast modern CPUs can process a few instructions (e.g. comparing a couple of numbers) every nanosecond, much as humans can “process” a few basic facts every second (e.g. comparing a couple of numbers!). This might blow your mind: A nanosecond is to one second as one second is to 31.7 years!
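If you want to check that arithmetic yourself, it’s a couple of lines of Python:

```python
# Back-of-the-envelope check: stretch 1 ns out to 1 s, and one real
# second becomes 10^9 "perceived" seconds. How many years is that?
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25  # ~31,557,600 seconds

print(1e9 / SECONDS_PER_YEAR)  # -> 31.688..., i.e. about 31.7 years
```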

Peter’s comparisons only covered the timescales involved in shuffling data back and forth within a single computer (CPU, main memory, disk). But many software systems nowadays consist of a collection of computers connected by a fast network (within a datacenter), often co-operating with services running on the other side of the globe, to deliver the kinds of applications and services we’re used to using on the web. So I thought it would be interesting to extend the analogy and work through some of the Numbers Everyone Should Know (due to Jeff Dean) as if a nanosecond were a second (there’s a short script after the lists below that recomputes all of these).

L1 cache reference - 0.5 ns -> half a second.
Branch mispredict - 5 ns -> 5 seconds.
L2 cache reference - 7 ns -> 7 seconds.
Main memory reference - 100 ns -> 1 minute 40 seconds.

Now it gets interesting:

Send 2K bytes over 1 Gbps network - 20,000 ns -> 5 and a half hours.
Read 1 MB sequentially from memory - 250,000 ns -> nearly 3 days.
Round trip within same datacenter - 500,000 ns -> nearly 6 days.
Disk seek - 10,000,000 ns -> 4 months.
Read 1 MB sequentially from disk - 20,000,000 ns -> 8 months.
Send packet California->Europe->California - 150,000,000 ns -> 4.75 years.
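Here’s a minimal Python sketch that recomputes the conversions above. The latency figures and the 1 ns -> 1 s scaling are exactly those quoted in the lists; the unit sizes I use for pretty-printing (365.25-day years, and months as a twelfth of that) are my own rounding choices, which is why a couple of results land slightly off the hand-rounded values above:

```python
# Rescale Jeff Dean's latency numbers so that 1 ns reads as 1 s,
# then print each one in the largest sensible human unit.
LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Branch mispredict": 5,
    "L2 cache reference": 7,
    "Main memory reference": 100,
    "Send 2K bytes over 1 Gbps network": 20_000,
    "Read 1 MB sequentially from memory": 250_000,
    "Round trip within same datacenter": 500_000,
    "Disk seek": 10_000_000,
    "Read 1 MB sequentially from disk": 20_000_000,
    "Send packet California->Europe->California": 150_000_000,
    "Round trip across a 3G mobile network": 250_000_000,
}

def humanize(seconds: float) -> str:
    """Render a duration using the largest unit it exceeds."""
    for unit, size in [("years", 31_557_600), ("months", 2_629_800),
                       ("days", 86_400), ("hours", 3_600),
                       ("minutes", 60)]:
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.1f} seconds"

for name, ns in LATENCIES_NS.items():
    # With 1 ns scaled to 1 s, the nanosecond count *is* the
    # number of perceived seconds.
    print(f"{name}: {humanize(ns)}")
```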

The most significant (and perhaps initially unintuitive) consequence is that it can be much faster to read data from RAM on another nearby machine over the network (a round trip of nearly 6 days at this scale) than to fetch it from local disk (4 months for the seek alone, plus 8 months to then read a megabyte).

I’ll throw one more in there: round trip across a 3G mobile network: 250,000,000 ns -> nearly 8 years!

What’s the point of thinking like this? Well, by putting timescales into units that humans can more intuitively understand and reason about, I hope this might help me (and you) make better choices as we design new systems.
