NOTE

Amdahl's law

#software-engineering #performance

If part of the work must run in order, that part sets the ceiling on how much faster the whole job can get. The rest of the machine waits.

According to the law, even with an infinite number of processors, the speedup is constrained by the unparallelizable portion.

That line is how Amdahl’s law on Wikipedia states the ceiling in plain language before it walks through the algebra and the usual “two parts of a program” picture (directory scan versus per-file workers is the stock example). Gene Amdahl presented the argument at the 1967 AFIPS Spring Joint Computer Conference in “Validity of the single processor approach to achieving large scale computing capabilities.”

In the usual parallel form, if a fraction p of runtime can scale across processors with ideal speedup s on that part, overall latency speedup is 1 / ((1 − p) + p/s). Push s toward infinity and you still land at 1 / (1 − p). That is the cap.
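The cap is easy to check numerically; a minimal sketch of the formula:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall latency speedup when a fraction p of runtime
    gets speedup s: 1 / ((1 - p) + p / s)."""
    return 1.0 / ((1.0 - p) + p / s)

# 95% parallelizable, 16-way parallelism:
print(amdahl_speedup(0.95, 16))   # ≈ 9.14

# Push s toward infinity: the serial 5% caps speedup at 1 / (1 - p) = 20.
print(amdahl_speedup(0.95, 1e9))  # ≈ 20.0
```

Even a millionfold s barely moves the needle past what s = 1000 already gave; the serial 5% owns the limit.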

The bottleneck is the fraction of time you cannot parallelize, not the peak FLOPS of the box you bought. Throwing hardware at a program with a fat serial tail buys little beyond bragging rights, and a slower Moore’s law cadence does not reopen that ceiling.

Software Engineering Advice from Building Large-Scale Distributed Systems (PDF) is Jeff Dean’s Stanford deck on large distributed systems. He treats back-of-the-envelope performance estimates as a core skill and contrasts parallel and serial designs (parallel disk reads next to a single seek-bound path) to show how the slow stage dominates wall time.

Important skill: ability to estimate performance of a system design - without actually having to build it!

Amdahl is the algebra behind that habit.
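A back-of-the-envelope estimate in that spirit, reading 1 GB in 64 MB chunks from one disk versus sixteen. The seek latency and throughput numbers here are illustrative assumptions, not measurements, and `read_time` is a hypothetical helper:

```python
SEEK_S = 0.010       # one disk seek: ~10 ms (assumed)
READ_MBPS = 100.0    # sequential read throughput in MB/s (assumed)

def read_time(chunks: int, chunk_mb: float, disks: int) -> float:
    """Wall time to read all chunks, spread evenly across disks."""
    per_chunk = SEEK_S + chunk_mb / READ_MBPS
    rounds = -(-chunks // disks)   # ceil division: chunks each disk handles
    return rounds * per_chunk

serial = read_time(16, 64, disks=1)     # single seek-bound path
parallel = read_time(16, 64, disks=16)  # 16 disks in parallel
print(serial, parallel)   # 10.4 s vs 0.65 s
```

The point is not the exact numbers but the habit: the slow stage's wall time falls out of a few multiplications, no prototype required.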

For product and architecture, the law is a budget question before it is a hardware question.

  • If onboarding, auth, or a single coordinator owns five percent of request time, even ideal parallelism on the other ninety-five percent cannot cut end-to-end latency below that five percent slice. You fix the coordinator or you accept the floor.
  • “We will scale out” is a strategy only after you know what share of user-visible work splits across workers without a single hot stage. Otherwise you ship a fleet that idles while one stage does the same sequential work as before.
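The coordinator floor in the first bullet can be sketched directly. The 200 ms request budget is an illustrative assumption:

```python
def request_latency(total_s: float, serial_fraction: float, workers: int) -> float:
    """End-to-end latency: the serial slice plus the parallel share
    divided across workers."""
    serial = total_s * serial_fraction
    parallel = total_s * (1 - serial_fraction) / workers
    return serial + parallel

base = 0.200  # 200 ms request, 5% owned by one coordinator (assumed)
for workers in (1, 8, 64, 10**6):
    print(workers, request_latency(base, 0.05, workers))
# latency approaches 0.010 s, the coordinator's 5% slice, as workers grow
```

Going from 64 workers to a million buys well under 3 ms; fixing the coordinator is the only move that lowers the floor.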

Measure where time goes, attack the serial share, then argue about core counts.