At Hot Chips 2022, Intel lifted the curtain to show off some tantalizing details about its upcoming 14th-generation “Meteor Lake” processors. These cutting-edge CPUs aren’t likely to ship anytime soon. (We’ve yet to see the release of Intel’s 13th-generation “Raptor Lake” processors, though Intel insists Meteor Lake is on track to launch in the second half of 2023.) But the details we’ve seen so far are deeply fascinating, and they show a growing change in the way modern processors are designed.
Starting with Meteor Lake (and followed by “Arrow Lake,” its presumed 15th-generation core family), Intel will move to a “chiplet” design for its consumer processors, with many small chips handling different functions fused together into a single package. It’s a significant departure from the “monolithic” design of existing Intel processors, and it could lead to faster, and perhaps less expensive, processors in the years to come.
The birth of the monolithic CPU die
From the beginning, the computer industry has relentlessly pushed for closer integration. Way back in 1965, Intel co-founder Gordon Moore developed what became known as “Moore’s Law,” the much-hyped axiom predicting that the number of components in an integrated circuit would double every year. The prediction has not always held precisely, but it nevertheless shows the critical importance of integration to the chip industry.
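Moore's projection is just exponential growth, which makes it easy to sketch. The starting count of roughly 64 components comes from Moore's 1965 paper, which projected about 65,000 components per chip by 1975; the function below is an illustrative model, not anything Intel publishes:

```python
def projected_components(initial: int, years: int, doubling_period_years: float = 1.0) -> int:
    """Component count after `years`, assuming a fixed doubling cadence."""
    return int(initial * 2 ** (years / doubling_period_years))

# Starting from ~64 components in 1965, a one-year doubling cadence
# predicts ~65,536 components by 1975 (10 doublings), close to Moore's
# own 1965 projection of 65,000.
print(projected_components(64, 10))
```

Moore revised the cadence to a doubling every two years in 1975, which the `doubling_period_years` parameter can model.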
Over the past few decades, we’ve seen a variety of components integrated into CPUs: floating-point units, cache, memory controllers, PCI Express controllers, video controllers, display controllers, graphics processors, and a variety of other circuits. In general, this integration has brought many benefits, from lower production costs and power consumption to drastically better performance. But it has also created challenges that chipmakers have had to work to overcome.
How chiplets can solve monolithic problems
Three main problems stand out in building large monolithic chips. First and foremost is the problem of die yield.
No manufacturing process is perfect, and when it comes to silicon chips, even a seemingly small defect can render a chip inoperable. This tendency makes large chips significantly more expensive to build: when a defect strikes a big die, far more silicon, manufacturing time, and resources are wasted than when one strikes a small die.
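To see why die size matters so much, consider the classic Poisson yield model, which estimates the probability that a die of a given area is defect-free. The defect density and die areas below are illustrative assumptions, not Intel's actual figures:

```python
import math

def die_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson yield model: probability that a die has zero defects."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

# Hypothetical numbers: a 400 mm^2 monolithic die vs. 100 mm^2 chiplets,
# both built on a process with 0.002 defects per mm^2.
defect_density = 0.002
monolithic = die_yield(400, defect_density)   # ~0.45: over half the big dies are scrapped
per_chiplet = die_yield(100, defect_density)  # ~0.82: most small dies survive

# A defective chiplet wastes only its own 100 mm^2 of silicon, while a
# defect anywhere on the monolithic die scraps all 400 mm^2.
print(f"monolithic yield: {monolithic:.2f}, per-chiplet yield: {per_chiplet:.2f}")
```

Real foundry yield models are more elaborate (and actual defect densities are closely guarded), but the exponential dependence on area is the core reason small dies are cheaper to produce.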
Second, components such as GPUs and CPU cores each tend to perform best when built on their own optimal process technology. But when you integrate multiple types of components into a single chip, you are forced to use one manufacturing process for all of them. You either settle on a compromise process that hampers every component a little, or pick one that works well for some components but not as well for the others.
Last (but certainly not least), tight integration can hinder development. When everything is baked together, it is not easy to make changes to just one component, such as the memory controller or the video processor. You have to consider how everything is connected, and then the whole chip has to be run through a lengthy verification process to make sure everything still works as it should. After that, you still need to get the revised design into the factory, and work through existing inventory, before starting production on the new design.
A chiplet design, however, can solve or at least mitigate these problems. A certain amount of inspection and verification is still needed to make sure the chips work correctly, to be sure, but you get far more flexibility in designing and updating them. There are also far fewer restrictions on which manufacturing process you use for different parts of the processor. And a single defect wastes less money, since each die is smaller.
This change could ultimately speed up chip development, potentially reducing costs and improving performance as well. It’s also a proven approach: AMD has been using chiplet designs for its Ryzen processors for a while now.
The Chiplet Push: Why Now, and What Could Go Wrong?
If chiplets have such clear and obvious advantages, why hasn’t Intel used them before? Well, the short answer is that the company has, many times. Intel used a chiplet-like design with its Pentium D processors way back in 2005 to combine two CPU cores into a single processor. It did so again with its 1st-generation Core “Arrandale” processors, and it has been experimenting with chiplets in other products since.
A key difference between those designs and the upcoming Meteor Lake and Arrow Lake CPUs, however, is how closely the respective dies are connected. A disadvantage of a chiplet design is that the chiplets simply cannot be interconnected as densely as the components of a single monolithic chip, so bandwidth suffers. This has previously hampered performance on Intel’s products (and AMD’s as well), but it’s something that has improved over time.
Another downside: Chiplet designs tend to use more power, since signals must travel between dies. However, since power needs change from generation to generation, it is difficult to say how significant this problem will be in the future.
What the Meteor Lake Chiplet CPU will look like
Intel’s massive “Ponte Vecchio” chip combines 47 tiles (the company’s preferred term for these small dies) and more than 100 billion transistors. Intel plans to use the same interconnect technology to join the tiles in its upcoming Meteor Lake and Arrow Lake processors. As you can see from the image below, the Meteor Lake design consists of a total of six parts, including the package substrate, which is likely little more than a PCB for connecting to the LGA socket on the motherboard.
Among the remaining five tiles are the CPU tile, a GPU tile, and an I/O extender tile. (The latter will probably contain little more than the PCI Express controller.) The CPU tile will be made using the upcoming “Intel 4” manufacturing process, while other tiles may be made by Taiwan Semiconductor Manufacturing Company (TSMC).
There’s also a somewhat confusingly named “SOC” tile, which contains everything that doesn’t fit into the three aforementioned tiles, including the memory controller. It appears to be the largest tile and probably packs a lot of functionality, but be careful not to confuse it with the system-on-a-chip (SoC) you might find in your phone or TV, since the latter also contains a CPU and GPU.
The last tile, labeled “Base Tile,” will probably act as an interposer, serving to connect the other pieces together. This is done using 36-micrometer-pitch Foveros connections, the same technology used on Ponte Vecchio.
The graphic below helps illustrate the added flexibility this design provides: Intel can build these tiles with varying amounts of resources to create processors with different numbers of CPU cores, graphics cores, or other components.
The whole design marks a departure from tradition, but some of the ideas actually hark back a bit. Components such as the memory controller, which has long been integrated into the CPU, are now broken out again into separate dies, albeit ones closely linked to the CPU. Although it may seem contrary to prevailing wisdom, chiplet designs clearly have promise, and with both AMD and Intel adopting them, they seem to be the way things are going.
Unfortunately, it may be some time before we learn more details or see performance numbers from Meteor Lake processors. Intel is expected to launch its 13th-generation Raptor Lake processors sometime this fall, and Meteor Lake likely won’t launch until sometime in the fall of 2023. That means it will be even longer before we hear anything about Arrow Lake. In the meantime, we will see further developments and revisions of AMD’s already chiplet-based technology.