Thursday, April 18, 2013

How Pixar Used Moore’s Law to Predict the Future

Whether you call it a data-driven prediction or think of it as a self-fulfilling prophecy, Moore’s Law has been going strong. It’s approaching half a century despite frequent observations that it can’t continue forever (Gordon Moore himself only gave it a decade).
Moore’s formulation was that the density of transistors on an integrated circuit doubles every 18 months. (He actually first said 12 months, then 24 months — but the average stuck. It’s a “law,” not a law, after all.) But here’s my way of formulating Moore’s Law: Everything good about computers gets an order of magnitude better every five years.
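As a sanity check on that reformulation, the compounding is easy to verify. Here is a minimal sketch in Python, whose only input is the 18-month doubling period quoted above:

```python
# Steady doubling every 18 months compounds to roughly 10x every 5 years.
DOUBLING_PERIOD_YEARS = 1.5  # 18 months

def improvement_factor(years, doubling_period=DOUBLING_PERIOD_YEARS):
    """Total improvement after `years` of steady Moore's Law doubling."""
    return 2 ** (years / doubling_period)

print(improvement_factor(5))   # ~10.1  -> one order of magnitude in 5 years
print(improvement_factor(10))  # ~101.6 -> a factor of 100 in 10 years
```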
So why bother with the intervening steps? If we know that computers will improve by a factor of 100 in 10 years, why not go directly to the higher factor instead of just getting a factor of 10 in five years?
Because inventors, visionaries, engineers — whatever you want to call them — have to arrive at each level before they can even imagine a way to the next one … and then create it. That’s how Pixar and its first film Toy Story — the first feature-length computer-animated film — became a reality.
The secret was Moore’s Law, and not just in the technical way one would think. The enabling idea of our vision was computation, of course, but the idea of computation alone would not have gotten us far. Ed Catmull (who would cofound Pixar with me) and I also used the Law to anticipate the future and make good business decisions through the long years we waited for the computer animations we all envisioned to become reality.
Because we — Catmull (now president of Walt Disney Animation Studios), I, and our colleagues — conceived the notion of the first completely digital movie almost four decades ago. It took 20 years to realize that dream with Toy Story, but Moore’s Law is what gave us the confidence to hang on for those two decades.


Alvy Ray Smith

Alvy Ray Smith cofounded Pixar (later acquired by Disney) and Altamira (acquired by Microsoft). He received two technical Academy Awards (for the alpha channel concept and for digital paint systems); conceived and directed the Genesis Demo in Star Trek II: The Wrath of Khan and The Adventures of André & Wally B., with John Lasseter as animator; specified and negotiated the Academy Award-winning Disney animation production system CAPS; and more. Smith earned his Ph.D. from Stanford University. He is currently writing a book on the biography of the pixel.

Back then we were housed on a Long Island estate that, strangely, was also home to the New York Institute of Technology. The estate held a complete, traditional cel-animation studio as well, along with many of the top computer graphics minds of the day: minds at the intersection of computer science and art that would later become the heart of Pixar.
As early as the late 1970s, one of our colleagues, Lance Williams, proposed a computer-animated story starring a robot named Ipso Facto. But Ed and I whipped out the proverbial envelope and did some calculations: Given the computation rates of the time, we figured it would take billions of dollars and years of time to make the movie.
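To give a flavor of that envelope, here is a reconstruction with purely illustrative numbers; every constant below is an assumed placeholder, not one of the figures we actually used:

```python
# A hypothetical back-of-the-envelope in the spirit described above.
# Every constant is an assumed placeholder, not an original figure.
frames           = 24 * 60 * 80     # 24 fps for an ~80-minute feature
pixels_per_frame = 1500 * 900       # assumed film-resolution frame
ops_per_pixel    = 2e6              # assumed rendering work per pixel
machine_ops_sec  = 1e6              # ~1 MIPS minicomputer of the era
dollars_per_machine_year = 200_000  # crude all-in cost per machine-year

total_ops     = frames * pixels_per_frame * ops_per_pixel
machine_years = total_ops / machine_ops_sec / (3600 * 24 * 365)
print(f"{machine_years:,.0f} machine-years")                # ~9,900
print(f"${machine_years * dollars_per_machine_year:,.0f}")  # ~$2 billion
```

Dial the guesses however you like; almost any plausible set of late-1970s numbers lands in years of time and billions of dollars.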
While I’d like to claim that we used Moore’s Law right then and there to predict the actual time it would take to reach Toy Story, we didn’t yet have a strong enough hold on the details of feature filmmaking to nail that prediction down. But we did understand that it was only a matter of (a long) time, that Moore’s Law was chipping away, and that we could count on it to deliver eventually.
When the group moved to California to become part of Lucasfilm, we got close to making a computer-animated movie again in the mid-1980s — this time about a monkey with godlike powers but a missing prefrontal cortex. We had a sponsor, a story treatment, and a marketing survey. We were prepared to make a screen test: Our hot young animator John Lasseter had sketched numerous studies of the hero monkey and had the sponsor salivating over a glass-dragon protagonist.
But when it came time to harden the deal and run the numbers for the contracts, I discovered to my dismay that computers were still too slow: The projected production cost was too high and the computation time way too long. We had to back out of the deal. This time, we did know enough detail to correctly apply Moore’s Law — and it told us that we had to wait another five years to start making the first movie. And sure enough, five years later Disney approached us to make Toy Story.
Moore’s Law also told us that the new company we were starting, Pixar, had to bide its time — building hardware instead of making movies.
We know what Moore’s Law is and how it works, but not many people reflect on why it exists. Yes, there are often physical barriers to innovation. But there’s no imminent physical barrier to the realization of a bit: A bit is merely the presence or absence of something, say a voltage, which means its physical carrier can keep getting exponentially smaller. So with no physical limitation, Moore’s Law reflects the top rate at which humans can innovate. If we could proceed faster, we would.
The exponential improvement of a given technology — Moore’s Law in the case of computer chip technology — measures the ultimate speed at which a large group of creative humans can proceed to improve a technology, under competition, when there’s no physical barrier to its improvement and when the technology must pay its own way.
It’s like the evolution of living things. Tiny changes eventually aggregate into massive changes, even different species, but the living things must remain viable at every step. There couldn’t have been a direct one-step leap from a bacterium to a baby. In contrast to Darwinian evolution, however, Moore’s Law changes have a teleology: towards cheaper, larger, denser, and faster. We implement each step to see if it actually works, then gain the courage, the insight, and the engineering mastery to proceed to the next step.
It’s also how we can assess the financial gamble necessary to proceed to the next step, to know that this step is within imaginable bounds.
Which brings us back to the question of Moore’s Law and why we bother with the intervening steps. Why didn’t we just proceed to build the machines we had in 2010, skipping all the years and pokey machines in between, if we knew in 1965 that computers would be one billion times better by 2010? Because we were unable to imagine the computer of 2010, much less know how to build it, in 1965.
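That billion, by the way, is the same doubling arithmetic run out over 45 years; a quick check, again assuming the 18-month period:

```python
# 1965 to 2010 is 45 years: 45 / 1.5 = 30 doublings of capability.
print(2 ** ((2010 - 1965) / 1.5))  # 1073741824.0 -> about one billion
```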
That’s the reason for expressing Moore’s Law in orders of magnitude rather than factors of 10. The latter form is merely arithmetic, but the former implies an intellectual challenge. We use “order of magnitude” to imply a change so great that it requires new thought processes, new conceptualizations: It’s not simply more, it’s different.
Hardly anyone can see across even the next crank of the Moore’s Law clock. A surprisingly unremarked aspect of Moore’s Law is that it’s just as hard to see back along the curve as it is to see forward (unless you were there yourself). Those few who can see forward have made fortunes.
Imagine what we can do with 1,000 times more Moore’s Law horsepower. There’s almost certainly that much life left in it. Catmull and I and our colleagues did the movies, but I can’t even imagine what other innovators will do. But I do know that I’m guaranteed by Law to be surprised.
