Is there an upside to Moore's Law slowing down? Actually, there are many.

Harry Quackenboss, Principal, Q Associates

June 30, 2017

7 Min Read
How Startups Can Win as Moore's Law Ends

A few years ago, before people started talking about the end of Moore's Law, a venture capitalist on a panel at an investor conference was decrying how expensive it was to fund chip startups. "We can afford about one chip startup a year," he said.

Another venture capitalist in the audience spoke up, saying that his firm couldn't even do that. "I didn't mean our firm," the panelist replied. As he swept his arm over the audience, he continued, "I meant all of us combined."

This was partly a result of chicken-and-egg circumstances. To convince investors to fund a chip startup, the founders needed a big advantage over what the incumbent vendors' products would be in a couple of years, and one the incumbents couldn't easily match. That meant designing big, complex chips manufactured on the latest foundry process, which made everything expensive: the large design team, the EDA software, the lithographic masks needed to mass-produce the chips, and manufacturing the chips in small batches.

Not more than a couple of years later, many started worrying about the slowdown in Moore's Law and what it was going to do to the tech industry. Some were predicting gloom, as if it were going to bring a kind of high-tech Ice Age.

But for all of the negative consequences, there are positive implications of the slowdown. In recent months, several vendors have announced Ethernet switch chips in the 10 Gbit/s to 100 Gbit/s category to challenge the incumbents, including:

  • Barefoot Networks

  • Innovium

  • Cavium (XPliant acquisition)

  • Centec (China)

  • Nephos (MediaTek spinout)

Why, in the face of the end of Moore's Law, is this happening?

For the last half-century, the entire semiconductor industry has been marching to the beat of a concept first proposed by Gordon Moore, co-founder of Intel.

What Gordon first proposed in 1965 was not exactly what people often say it is. It wasn't about doubling the number of transistors on a single chip. His original paper, published in Electronics, was titled "Cramming More Components onto Integrated Circuits." And over the next decade, as the paper's concepts were distilled into a simple statement, the time frame for doubling wasn't so precise: the prediction was that the number of components (transistors) would double every year or two.
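Distilled into a formula (a paraphrase for illustration, not notation from the original paper), that prediction amounts to exponential growth in component count:

\[
N(t) \approx N_0 \cdot 2^{\,t/T}, \qquad T \approx 1\text{--}2 \text{ years}
\]

where N_0 is the component count at some starting point and N(t) is the count t years later.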

In 2005, Gordon spoke on the 40th anniversary of Moore's Law at an event at the Computer History Museum in Mountain View, Calif. What he said then was that there were three areas of improvement: (1) the spacing between components would shrink; (2) defect density would improve, so that, at economically viable yields, larger chips could be produced; and (3) incremental improvements in component placement and circuit routing would reduce the amount of wasted space on the chips. Eventually, Gordon explained, the routing optimization methods improved to the point where there wasn't any wasted space left to reclaim. It was then that the doubling of the number of transistors on a single chip slowed from every 12 months to every 18-24 months.

From the 1970s until about ten years ago, chip and systems vendors marched to this cadence.

Intel and other vendors evolved strategies for introducing new products at the Moore's Law rate, but in steps that allowed systems vendor customers, and in turn their end customers, to absorb new chip designs and produce new systems with new features and better performance at attractive prices.

On the chip design and manufacturing side of the industry, this involved massive investments, with each new generation requiring a larger investment than the previous one. This was a fact of life for the entire design and manufacturing supply chain: the companies that make the design and simulation software, the photolithographic machines that made the masks, the robotic handling equipment, the clean room-grade HVAC systems, and the shock-absorbing mounts for the manufacturing equipment, and even the purity of the silicon ingots that were sliced into wafers. It has driven industry consolidation, forcing smaller chip companies to outsource manufacturing to an ever-smaller set of chip manufacturers.

It impacted the companies that built systems around the chips, which needed ways to recoup their engineering investments before the next generation of chips forced a redesign. It also impacted systems users. Business PC and server buyers started depreciating their purchases over two to four years. Many vendors of larger systems built their business models, and made their engineering investments, around major new system designs about every four years, which at a roughly two-year doubling period works out to about a 4X step in Moore's Law terms.

If that is all in the past, where is the upside in Moore's Law slowing down? It starts with economics:

Chip designs: New chip designs won't become obsolete as fast. The systems that those chips go into will have longer useful lives.

Electronic design automation tools: The EDA tools that are tied to specific chip manufacturing processes will remain useful longer. This means the EDA software vendors can invest in improvements that reduce the cost and time to design new chips. It may even foster a new generation of EDA companies.

Chip manufacturing tooling: The cost of generating the photolithographic masks should come down, because the mask production machines will have longer useful lives.

Chip manufacturing equipment: The equipment used to print circuit designs and clean in-process wafers, and the robot arms that transfer wafers from one step to the next, won't become obsolete as quickly. With slower obsolescence, the fixed capital equipment cost can be spread over more production, lowering chip manufacturing cost (see the back-of-the-envelope sketch below). Chip designers can afford to spend more time creating and refining new architectural approaches. For smaller chip vendors, this may reduce the economies-of-scale advantage of the largest companies they compete with.

With longer lives and lower cost per chip, new designs can be economically justified with smaller addressable markets. Instead of focusing on large chips that can do many things, specialized designs with a narrower range of capabilities will become economically feasible.
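To make the equipment economics concrete, here is a minimal back-of-the-envelope sketch in Python. The capex and output figures are invented for illustration and aren't drawn from any real fab; the point is simply that spreading the same fixed equipment cost over more years of production lowers the depreciation carried by each wafer, and therefore by each chip.

```python
# Back-of-the-envelope sketch with invented numbers: how a longer useful
# life for fab equipment lowers the depreciation charge per wafer.

def depreciation_per_wafer(equipment_cost, wafers_per_year, useful_life_years):
    """Straight-line depreciation spread over the wafers produced in the tool's life."""
    total_wafers = wafers_per_year * useful_life_years
    return equipment_cost / total_wafers

EQUIPMENT_COST = 5_000_000_000   # hypothetical tool-set capex, in dollars
WAFERS_PER_YEAR = 600_000        # hypothetical fab output

for life_years in (4, 6, 8):
    per_wafer = depreciation_per_wafer(EQUIPMENT_COST, WAFERS_PER_YEAR, life_years)
    print(f"{life_years}-year life: ~${per_wafer:,.0f} of equipment cost per wafer")
```

Doubling the equipment's useful life halves the per-wafer depreciation, which is part of what gives smaller vendors a chance to close the cost gap with the largest players.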

This is key to enabling chip startups again. But startups aren't going to have an easy time, because the same factors will make it attractive for large incumbent vendors such as Broadcom to produce variations within product families.

Although a lot of the implications of the slowdown of Moore's Law apply to the broader chip industry, there are some factors specific to networking, such as:

Network programmability
Although the introduction of OpenFlow and other software-defined networking technologies didn't trigger the new, simpler chip designs some had predicted, several of the new designs feature programmability, both to ease dropping them into existing system designs whose software was written for Broadcom's features and SDK, and to support new control methods such as the P4 language.

Large available market
Data center networking, or more precisely server networking, has become a very big part of the market, with speeds increasing from 10 Gbit/s to 100 Gbit/s. The bigger the potential market, the more investment in R&D it will attract.

Narrower range of use cases
Over the last decade, the mid- to high-end of the market has been addressed with switch chip designs with increasingly complex feature sets, including larger Layer 2 MAC and VLAN table sizes, complex TCAMs, support for a large number of ACLs and so on. Some of the clean-sheet designs appear to trade away, for example, complex configuration options in exchange for more ports, faster on-chip data paths or bigger buffers. With the widespread adoption of BGP for server networking, that segment might be large enough to justify a design built just for this use case, leaving out Layer 2 protocol support.
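As a rough illustration of that trade-off, the sketch below allocates a fixed (and entirely invented) die-area budget two ways: a full-featured design with large Layer 2 tables and TCAMs, and a hypothetical Layer-3-focused, BGP-only design that reallocates most of that area to packet buffer. None of these figures describe any real switch ASIC; the sketch only shows how dropping features frees area for ports or buffering.

```python
# Illustrative die-area budget; all figures are invented, not taken from any
# real switch ASIC. Area not spent on Layer 2 tables, TCAMs and ACL logic
# can be reallocated to packet buffer (or more ports).
DIE_BUDGET_MM2 = 500

full_featured = {
    "serdes_and_ports": 200,
    "packet_buffer":    120,
    "l2_tables":         60,
    "tcam_and_acls":     80,
    "forwarding_logic":  40,
}

# A hypothetical BGP-only design shrinks the L2 and TCAM blocks...
layer3_focused = dict(full_featured, l2_tables=10, tcam_and_acls=30)
freed_area = DIE_BUDGET_MM2 - sum(layer3_focused.values())
layer3_focused["packet_buffer"] += freed_area   # ...and spends the freed area on buffer

for name, plan in (("full-featured", full_featured),
                   ("layer-3 focused", layer3_focused)):
    print(f"{name}: buffer {plan['packet_buffer']} mm^2, total {sum(plan.values())} mm^2")
```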

— Harry Quackenboss, Principal, Q Associates

