[Image: a laptop being taken apart, motherboard visible]

Moore's Law stems from Gordon Moore's 1965 observation that the number of transistors on an integrated circuit doubles at a regular cadence, a pace later codified as every two years. But while designers have continued to squeeze out more density in recent years, doing so on leading-edge process nodes has become far more expensive, and each new node has taken longer to arrive.

With the cost to manufacture an integrated circuit climbing steadily (and spiking in the latest generations, due in part to the growing number of mask layers), the industry can no longer rely on brute-force engineering of smaller transistors to make computers more powerful. Transistors are approaching the physical limit on how small they can be made.
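The economics behind this shift can be sketched with the standard first-order Poisson yield model, in which the fraction of defect-free dies falls exponentially with die area. The defect density and die areas below are illustrative assumptions, not figures from any particular fab:

```python
import math

def die_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D0 = 0.1  # defects per cm^2 -- an assumed, illustrative value

# One large monolithic die (~800 mm^2) vs. one small chiplet (~200 mm^2).
monolithic_yield = die_yield(D0, 8.0)  # roughly 45% of large dies are good
chiplet_yield = die_yield(D0, 2.0)     # roughly 82% of small dies are good

# Because chiplets are tested before assembly ("known good die"),
# a package built from four good chiplets wastes far less silicon
# than discarding every defective 800 mm^2 monolithic die.
```

The exponential penalty on area is why splitting one big die into several small ones pays off even before the packaging advantages are counted.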

Still, Intel CEO Pat Gelsinger promised at the company's online Innovation event that “Moore's law is alive and well,” adding, “we are predicting that we will maintain or even go faster than Moore's law for the next decade […] we expect to even bend the curve faster than a doubling every two years.”

Surprised? You shouldn’t be.

Unless you’ve spent the past five years in a cave, and possibly even if you have, you know that engineers have had time to consider an alternative path forward. Instead of designing monolithic chips that place every important element on a single silicon die, a newer approach subdivides a system into functional circuit blocks called chiplets: multiple smaller, independent dies that are connected and combined in new ways to make up the complete chip.

The result is that unique intellectual property (IP) blocks can be split across multiple chiplets, enabling products to be built in different configurations: if a product needs more of a given function, it simply adds more of those chiplets. The same principle applies to memory channels, cores, media accelerators, AI accelerators, graphics, I/O or anything else. Because each IP block can be split out and scaled independently, the chiplets themselves stay small, can be built relatively quickly, and should have their faults ironed out quickly, too.

In the Olympics of chip design, then, chiplets are well poised to claim the gold medal. Chiplets can be mixed and matched based on what is needed: a workstation might need less graphics capability and more compute and AI, whereas a mobile version of the chip might be weighted toward I/O. The key is defining how those chiplets fit together, and at which points it makes sense to mix and match them.
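The mix-and-match idea is essentially composition over a small catalog of parts. A minimal sketch, with hypothetical function labels and chiplet counts chosen only for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chiplet:
    function: str  # e.g. "cpu", "gpu", "io", "ai" -- hypothetical labels
    node: str      # process node the chiplet is fabricated on

def build_package(config: dict) -> list:
    """Compose a package by instancing each chiplet type the requested number of times."""
    return [Chiplet(fn, node)
            for (fn, node), count in config.items()
            for _ in range(count)]

# A workstation weighted toward compute and AI; a mobile part weighted toward I/O.
workstation = build_package({("cpu", "7nm"): 4, ("ai", "7nm"): 2, ("io", "14nm"): 1})
mobile = build_package({("cpu", "7nm"): 1, ("io", "14nm"): 2})
```

Each configuration reuses the same validated chiplet designs; only the bill of materials changes between products.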

The goal is also to match each part of the chip to the process node that suits it best. With this manufacturing approach, high-performance ICs can be built using the best transistor technology for each function, fabrication cycles can be shorter, and chiplet-based designs can compete with monolithic circuits on performance.

I covered the basics of chiplets two years ago in a previous column, and today I offer an update as both Intel and AMD are pursuing chiplets in their next-generation chips – allowing them to use different manufacturing processes for different dies in the final package. For example, a chiplet design might link a 7nm or 10nm CPU with a 14nm or 22nm I/O element over a high-speed internal interconnect.

AMD: 3D Chiplet Technology

AMD's Ryzen, Ryzen Threadripper and Epyc CPUs, which are based on the company's Zen architecture, are examples of products that currently contain chiplets. The first-generation Epyc processor was built from four chiplets, each with eight Zen CPU cores, two DDR4 memory channels and 32 PCIe lanes to meet performance goals. AMD also had to work in extra room for the Infinity Fabric interconnect linking the four chiplets. Those lessons were put to use in the second-generation 7nm Epyc processor.

AMD’s 3D chiplet technology combines chiplet architecture with 3D stacking. Earlier this year AMD showcased a new 3D chiplet architecture that will be used in future high-performance computing products. It relies on a hybrid bonding approach that, according to AMD, provides more than 200 times the interconnect density of 2D chiplets and more than 15 times the density of existing 3D packaging solutions.

Pioneered in collaboration with TSMC, the technology also is said to consume less energy than current 3D solutions.

The first application of the 3D chiplet is called 3D vertical cache. To demonstrate the technology, AMD created a prototype by bonding a 3D vertical cache onto an AMD Ryzen 5000 series processor. AMD reported that the prototype Ryzen 5900X with 3D V-Cache attached delivered a 12 percent higher frame rate for Xbox Game Studios’ game “Gears 5.” In benchmarking on five other games, performance increased an average of 15 percent using the 3D V-Cache technology, according to AMD.

Intel Steps Up to Tiles

For Intel, success will mean that it catches up to rivals, a moment Gelsinger has pledged will happen in 2024. Intel has struggled to move from its 14-nanometer manufacturing process to the 10nm process, while fabs at TSMC and Samsung have better handled moving to the newest node.

Intel’s next-generation Xeon Scalable server processor, called Sapphire Rapids, will be Intel’s first effort to fully embrace a chiplet architecture (Intel calls its chiplets “tiles”). It will also be the company’s first mainstream processor to support DDR5 memory, PCIe 5.0 and Compute Express Link (CXL), an industry-supported interconnect for processors, memory expansion and accelerators.

To scale up its data-center Sapphire Rapids chip to more cores, Intel had to split the design into multiple dies. Set to launch in 2022, Sapphire Rapids’ compute tiles will have full access to cache, memory and input/output (I/O) functionality on all tiles. This means any one core has access to all of the resources on the chip and is not limited to what’s built into its own tile.

Intel’s 10nm Alder Lake gaming chips, which combine performance cores for speed with efficiency cores for better battery life, are its 12th-generation processors. Together with the company’s Raptor Lake (13th-generation) and Meteor Lake (14th-generation) parts, they represent Intel's answer to increasingly competitive AMD processors.

Meteor Lake, set for release in 2023, is made up of three tiles: a compute die, an SOC-LP die and a GPU die. This is comparable to the Zen-based chiplet designs AMD has been shipping.

At Intel, chiplets are expected to be combined using Intel's Foveros packaging technology, which handles how these dies are attached. Meteor Lake processors will be Intel's first client PC CPU to adopt a multi-tile design with Foveros, a 3D packaging technology that Intel plans to use to stack new processors on top of one another. Intel says it built Foveros upon the lessons it learned with Embedded Multi-die Interconnect Bridge, or EMIB, technology – a technique that provides high-speed communication between several chips. 

The only uncertainty here is that Intel hasn’t spoken much about the glue that binds it all together. A chiplet design requires more engineering work upfront to partition the SoC into the right number and kinds of chiplets.

For one thing, chiplet strategies rely on complex high-speed interconnect protocols. What is more, communication between chiplets consumes more power than it would in a monolithic implementation, and usually means higher latency. But the benefits of using the right process for the right function are significant, helping to deliver both performance and power efficiency.
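The power overhead of crossing a die boundary can be made concrete with a back-of-the-envelope model: link power is simply traffic multiplied by energy per bit. The bandwidth and picojoule-per-bit figures below are illustrative assumptions, not measurements of any specific interconnect:

```python
def link_power_watts(bandwidth_gbps: float, pj_per_bit: float) -> float:
    """Interconnect power: bits per second times energy per bit."""
    return bandwidth_gbps * 1e9 * pj_per_bit * 1e-12

BANDWIDTH = 512  # Gbit/s of die-to-die traffic -- an assumed workload

on_die = link_power_watts(BANDWIDTH, 0.1)      # on-die wires (illustrative pJ/bit)
die_to_die = link_power_watts(BANDWIDTH, 0.5)  # packaged die-to-die link (illustrative)
# The die-to-die link burns several times the power for the same traffic --
# overhead a chiplet design must recoup through better process choices elsewhere.
```

Advanced packaging such as EMIB and hybrid bonding exists largely to shrink that per-bit energy gap between on-die and die-to-die links.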


Statements of fact and opinions expressed in posts by contributors are the responsibility of the authors alone and do not imply an opinion of the officers or the representatives of TTI, Inc. or the TTI Family of Specialists.
 


Murray Slovick

Murray Slovick is Editorial Director of Intelligent TechContent, an editorial services company that produces technical articles, white papers and social media posts for clients in the semiconductor/electronic design industry. Trained as an engineer, he has more than 20 years of experience as chief editor of award-winning publications covering various aspects of consumer electronics and semiconductor technology. He previously was Editorial Director at Hearst Business Media where he was responsible for the online and print content of Electronic Products, among other properties in the U.S. and China. He has also served as Executive Editor at CMP’s eeProductCenter and spent a decade as editor-in-chief of the IEEE flagship publication Spectrum.

