In 1993, I graduated with a degree in Electrical Engineering, having studied semiconductor device physics at Santa Clara University, right in the heart of Silicon Valley. I remember fabricating my first integrated circuit in a small on-campus clean room outfitted with older-generation fab equipment from local semiconductor manufacturers. Those same companies, in turn, soaked up fresh graduates like us to don bunny suits and join the semiconductor workforce. The 1990s were a time when Moore's Law had been in full swing for many years, and a great deal of physics work was going into shrinking transistors further and exploring new lithography techniques. Although the early 1990s brought a recessionary climate, computers, consumer electronics, and new semiconductor technologies such as flash memory were going gangbusters.

Moore's Law is the prediction first articulated in 1965 by Gordon Moore, then at Fairchild Semiconductor and later a co-founder of Intel, that the number of components (i.e. transistors) on an integrated circuit doubles on a regular cadence, a period Moore later settled at two years. This law has driven the entire semiconductor industry, and set the clock cycle of all downstream computer and electronics industries, for the last half century. However, in recent years, as every computer enthusiast knows, we have seen a few changes. While component integration has continued roughly as Moore's Law would predict, we went from chasing faster clock frequencies to counting CPU cores. Chips could be made with more transistors, but power density stopped shrinking along with them, so we had to slow the clocks to avoid burning up the device. Are we approaching some fundamental limit that would challenge our ability to keep packing more transistors onto a device? As we close the book on 2015, 50 years after Gordon Moore's 1965 paper, is it fair to declare Moore's Law dead? After all, even Moore himself saw his law as unsustainable at some point. Eventually transistors approach the atomic scale, and, as far as we know, we can't shrink traditional semiconductor components smaller than nature's elements.
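To make that cadence concrete, here is a minimal back-of-the-envelope sketch in Python. The 1965 starting count and the fixed two-year doubling period are illustrative assumptions chosen for the arithmetic, not industry data:

```python
# Stylized Moore's Law: component count doubles every two years.
# base_count and the fixed cadence are illustrative assumptions.
def components(year, base_year=1965, base_count=64, period_years=2):
    """Project component count under a fixed doubling period."""
    doublings = (year - base_year) / period_years
    return base_count * 2 ** doublings

for year in (1965, 1975, 1995, 2015):
    print(year, f"{components(year):,.0f}")
# By this stylized curve, 2015 lands around two billion components,
# the right order of magnitude for a large chip of that era.
```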


However, in my view, there are a few aspects of the state of the art in semiconductor technology and the computer industry that we should be thinking about in the context of Moore's Law:

Moore’s Second Law: Rock’s Law

A lesser-known companion to Moore's Law, attributed to venture capitalist Arthur Rock, is the economic inverse of the first: the cost of a chip fabrication plant doubles every four years. This is extremely important. When I came out of school, almost every US semiconductor company ran its own wafer fab. However, over time, and as predicted by Rock's Law, the capital required to build next-generation wafer fabs grew substantially. Chip makers began to outsource manufacturing to foundries such as Taiwan Semiconductor Manufacturing Company (TSMC), which could amortize the massive cost of developing a new process across many different semiconductor companies. Alongside this trend, the number of complex Application-Specific Integrated Circuit (ASIC) design starts has continued to decline due to the high startup costs of manufacturing such devices.
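To see why that capital curve forces outsourcing, here is a rough sketch of Rock's Law; the 1993 base cost and the customer count are hypothetical placeholders chosen only to show the shape of the trend:

```python
# Rock's Law sketch: fab construction cost doubles every four years.
# base_cost_usd is a hypothetical placeholder, not an actual 1993 figure.
def fab_cost(year, base_year=1993, base_cost_usd=1e9, period_years=4):
    """Project fab cost under a fixed four-year doubling period."""
    return base_cost_usd * 2 ** ((year - base_year) / period_years)

for year in (1993, 2001, 2009, 2017):
    total = fab_cost(year)
    # A foundry amortizes the same capital across many customers' products;
    # 50 customers is an illustrative assumption.
    print(f"{year}: fab ${total / 1e9:5.1f}B, per customer ${total / 50 / 1e6:7.1f}M")
```

The per-customer column is the foundry argument in miniature: the fab gets exponentially more expensive, but each fabless customer pays for only a slice of it.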

So, if the cost of building state-of-the-art silicon continues to rise, it raises a question: do we create more diverse, application-specific components, or do we build billions of units of a smaller set of parts that can be programmed to serve different functions?


Reconfigurable vs. Application-Specific

There's a reason Intel acquired Altera, a maker of re-programmable chips (FPGAs). Intel has remained the largest chip maker in the world by leading in general-purpose processors that software can direct to perform many different functions. Now, in an era where Moore's first law is colliding with the second, the world's largest chip maker decided to make a $16.7B bet on devices that can be reprogrammed (in hardware) to perform specific functions. Think of it as software-defined silicon.

Our rate of innovation will not slow down even if Moore’s Law dies.  We will simply change the vector of innovation.  New micro-architectures will emerge.  Software development tool chains and frameworks like OpenCL will evolve at a dizzying pace.  Applications will evolve to fit new computing paradigms governed by both the laws of physics and the constraints of economics.
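As a concrete taste of that tool chain, here is a minimal PyOpenCL sketch (assuming the pyopencl package, NumPy, and an OpenCL runtime are installed). The point is the workflow: the same device, whether CPU, GPU, or FPGA, is "reconfigured" at run time simply by compiling a different kernel:

```python
# Minimal PyOpenCL example: compile a kernel at run time and execute it
# on whatever OpenCL device is available. Assumes pyopencl is installed.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

ctx = cl.create_some_context()                 # pick any available device
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, KERNEL_SRC).build()  # "reprogram" the silicon

a = np.arange(16, dtype=np.float32)
b = np.arange(16, dtype=np.float32)
out = np.empty_like(a)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
cl.enqueue_copy(queue, out, out_buf)
print(out)  # [ 0.  2.  4. ... 30.]
```

Swap in a different kernel string and the same hardware performs a different function; that, in spirit, is what an FPGA does one level down.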

Moore’s Law, HPC and Cloud Computing

So what does this have to do with HPC and Cloud Computing?

High Performance Computing defines a category of computing that pushes the envelope on performance, with a focus on optimizing applications for the underlying hardware. In many ways, Moore's Law has been intertwined with HPC. With each new processor generation, many applications got an inherent performance boost. Additionally, with more transistors on a device, chip builders could add hardware features that made software easier to program. However, with Moore's Law under attack, application developers have had to look for new ways to leverage silicon real estate. One need look no further than the explosion of applications that combine the power of NVIDIA GPUs with Intel CPUs to see how the demand for performance has moved beyond waiting for vendors to simply upgrade their chips.
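A small sketch of that CPU-plus-GPU pattern, assuming the CuPy package and a CUDA-capable NVIDIA GPU (and falling back to NumPy on the CPU otherwise):

```python
# Heterogeneous-computing sketch: run a data-parallel computation on an
# NVIDIA GPU via CuPy when available, otherwise on the CPU via NumPy.
import numpy as np

try:
    import cupy as xp   # GPU path; assumes CUDA and CuPy are installed
    ON_GPU = True
except ImportError:
    xp = np             # CPU fallback
    ON_GPU = False

x = xp.linspace(0.0, 1.0, 10_000_000)
y = xp.sqrt(x) * xp.sin(x)          # elementwise work a GPU excels at
total = float(y.sum())              # reduce, then pull the scalar to host
print(f"sum = {total:.6f} (computed on {'GPU' if ON_GPU else 'CPU'})")
```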

It is also no coincidence that the rise of cloud computing has come at a time when the certainty of Moore's Law is in question. Cloud is another expression of how economic forces, alongside physics, shape our reality. Cloud computing removes the barriers of machine ownership and focuses purely on agility, enabling application developers to harness compute power in as many different ways as there are ideas to be conceived.

Moore gave his industry 50 years of predictive comfort with his remarkable insight of 1965. Whether or not his law is dead matters less than our ability to understand the economic and physical forces we confront in this industry today.