Don’t go looking for part numbers or performance yet — just a lot of firsts for Intel.
For many months, Intel has spared no opportunity to remind us that its Meteor Lake chips would be the ones to watch — its first CPU with different chiplets for each component; its first on its Intel 4 process node; its first with a dedicated AI coprocessor inside. Today, Intel is revealing a whole lot more.
Meteor Lake will “launch” on December 14th, Intel now says, as the most power-efficient client processor the company has ever made, with up to twice the graphics performance, a “low power island” that can run tasks independently, and hooks into Microsoft Windows to intelligently control the new chips.
In no particular order, here are the highlights of Intel’s Core Ultra — because yeah, this one’s not called a Core i7.
With Meteor Lake, Intel will join its peers in crafting computer chips out of Lego-like building blocks — where the “CPU” and “GPU” aren’t just separate components on the same chip, for example, but actually separate pieces of silicon printed at different sizes and grafted together. The likes of AMD and Qualcomm have been doing this for a while, but this kind of disaggregated, chiplet-based design is relatively new to Intel.
The downside for you is that only part of Intel’s chip is actually on the company’s bleeding-edge Intel 4 process: the graphics are on TSMC’s N5 (5nm) node, and the I/O and the new “SoC Tile” are on TSMC N6. But the upside is that chip designers can pick the best building block for each job, and selectively cut off power to the rest.
Right out of the gate, no pun intended, Intel is attempting to use chiplets to radically reduce how much power its processors draw. And it’s doing so by taking some of the “Central” out of CPU.
Instead of a single CPU or display region, each Meteor Lake chip has two: one sits on a “low power island” that can theoretically run all by itself, with its own low-power efficiency cores, NPU AI coprocessor, media engine, and memory controller.
The rest of the processor cores live on a “Compute Tile” built on Intel 4, which houses the P (Performance) and E (Efficiency) cores, dubbed Redwood Cove and Crestmont, respectively, while the GPU gets a separate Graphics Tile on TSMC N5.
Realistically, most people using a computer will invoke the Compute Tile, but Intel wants to heat up the chip as little as it can, with an enhanced “Thread Director” that pushes work to the higher-power cores only after it’s tried the lowest-power ones.
That goes for the AI coprocessor, too — which Windows can natively see and monitor in the Task Manager, by the way.
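Thread Director itself lives in the silicon and the Windows scheduler, so applications never call it directly, but Windows does let a program volunteer a thread for the low-power treatment. Here’s a minimal sketch using Windows’ documented power-throttling hint (EcoQoS), which on hybrid chips nudges the scheduler toward the efficiency cores; the surrounding function is just a hypothetical stand-in for background work:

```cpp
// Sketch: flag the current thread as efficiency-class work on Windows.
// EcoQoS is a documented Windows hint; on hybrid Intel chips, the
// scheduler (informed by Thread Director) tends to keep such threads
// on the lower-power E-cores.
#include <windows.h>

void RunBackgroundWork()  // hypothetical low-priority task
{
    THREAD_POWER_THROTTLING_STATE throttling = {};
    throttling.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    throttling.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    throttling.StateMask   = THREAD_POWER_THROTTLING_EXECUTION_SPEED;  // opt in to EcoQoS

    // Ask the OS to treat this thread as efficiency-class work.
    SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                         &throttling, sizeof(throttling));

    // ... do the low-priority work here ...
}
```

(Setting ControlMask while leaving StateMask clear does the opposite, telling the scheduler the thread should not be throttled.)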
Since “Media” is separated from “Graphics,” integrated encoding and decoding of video could theoretically happen without hitting the graphics tile at all. “An example of this is the addition of hardware support for the AV1 film grain feature, which was previously executed with GPU shaders,” writes Intel.
But hiving off all of those graphics-adjacent tasks doesn’t mean the GPU is just for show. Intel says Meteor Lake brings its Intel Arc graphics on board, now with dedicated ray-tracing units and up to 8 Xe cores on the chip itself.
The new integrated graphics also support Intel’s XeSS, the company’s intelligent upscaler akin to Nvidia’s DLSS and AMD’s FSR, for the first time, which could help boost frame rates even more.
While Intel didn’t provide any performance numbers, the company says the GPU can “run at a much lower minimum voltage and reach a much higher maximum clock speed” than previous iGPUs, at well over 2GHz.
And Intel has developed its own “patented low-cost vapor chamber” cooling solution, the company claims, to help get gaming and creator laptops with Meteor Lake out into the world.
But the NPU — that AI coprocessor — should theoretically be in every chip. “The NPU will be available across the full product stack of Meteor Lake,” says Intel’s Tim Wilson, VP of architecture.
(That would line up with what Intel CEO Pat Gelsinger said on a July earnings call: “We’re going to build AI into every platform we build.” But I haven’t confirmed if Meteor Lake will appear outside Core Ultra, so buying a non-Ultra chip may mean no NPU for now.)
Intel isn’t suggesting the tiny NPU will suddenly mean generative AI can stop running on giant cloud servers filled with Nvidia H100 chips, nor is the company saying this one NPU can do it all while you sleep. Instead, it’s suggesting that you now have options: it’s much more efficient to let the NPU run the image generator Stable Diffusion, and it’s faster and still decently more efficient to have your GPU and NPU run it together.
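Intel’s bridge for that choice is the inference engine in its OpenVINO toolkit, which comes up below. Here’s a rough illustration of what picking a target looks like in OpenVINO’s C++ API (the model file name is a placeholder, and the “NPU” and “MULTI” strings follow OpenVINO’s device-naming scheme):

```cpp
// Sketch: choose where an OpenVINO model runs. The file name is
// illustrative; "NPU" targets the AI coprocessor alone, while the
// "MULTI" plugin spreads inference requests across several devices.
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;

    // List the accelerators OpenVINO can see (e.g. CPU, GPU, NPU).
    for (const auto& device : core.get_available_devices())
        std::cout << device << "\n";

    auto model = core.read_model("image_generator.xml");  // placeholder model

    // Run entirely on the NPU for efficiency...
    auto on_npu = core.compile_model(model, "NPU");

    // ...or fan work out across the GPU and NPU together for speed.
    auto on_both = core.compile_model(model, "MULTI:GPU,NPU");
}
```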
But Intel does have the inference engine in its OpenVINO toolkit to talk to those applications and pipe their work directly to the NPU, and a lot of general ideas, including: