In July, a University of Michigan computer engineering professor put forward a new idea for measuring the efficiency of a processor design. Todd Austin’s LEAN metric drew both praise and skepticism, but even the critics understood the rationale: A lot of silicon is devoted to things that aren’t actually doing computing. For example, more than 95 percent of an Nvidia Blackwell GPU is devoted to other tasks, Austin told IEEE Spectrum. It’s not that those parts aren’t doing important things, such as choosing the next instruction to execute, but Austin believes processor architectures can and should move toward designs that maximize computing and minimize everything else.
Todd Austin
Todd Austin is a professor of electrical engineering and computer science at the University of Michigan in Ann Arbor.
What does the LEAN score measure?
Todd Austin: LEAN stands for Logic Executing Actual Numbers. A score of 100 percent, an admittedly unreachable goal, would mean that every transistor is computing a number that contributes to the final results of a program. Less than 100 percent means that the design devotes silicon and power to inefficient computing and to logic that doesn’t do computing.
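As a rough illustration of the idea, here is a minimal Python sketch of a LEAN-style score, assuming it is computed as the share of a chip’s silicon doing arithmetic that ends up in the program’s results; the function and the figures in it are hypothetical, not Austin’s formulation or data.

```python
# Hypothetical sketch of a LEAN-style score: the share of a chip's silicon
# spent on arithmetic that contributes to a program's final results.
# The breakdown below is invented for illustration; it is not Austin's data.

def lean_score(useful_compute_area: float, total_area: float) -> float:
    """Return the percentage of silicon doing useful computation."""
    return 100.0 * useful_compute_area / total_area

# Example: a chip where 5 of every 100 units of die area do useful arithmetic
# and the rest handles instruction control, caches, and discarded speculation.
print(f"LEAN-style score: {lean_score(5.0, 100.0):.2f}%")  # 5.00%
```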
What is this other logic doing?
Austin: If you look at how high-end architectures have been evolving, you can divide the design into two parts: the part that actually does the computation of the program and the part that decides what computation to do. The most successful designs squeeze that “deciding what to do” part down as much as possible.
Where is computing efficiency lost in today’s designs?
Austin: The two losses that we experience in computation are precision loss and speculation loss. Precision loss means you’re using too many bits to do your computation. You see this trend in the GPU world. They’ve gone from 32-bit floating-point precision to 16-bit to 8-bit to even smaller. These are all attempts to minimize precision loss in the computation.
Speculation loss comes when instructions are hard to predict. [Speculative execution is when the computer guesses what instruction will come next and starts working even before the instruction arrives.] Routinely, in a high-end CPU, you’ll see two [speculative] instruction results thrown away for every one that’s usable.
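To make the two losses concrete, here is a small Python sketch (my illustration, not Austin’s): the first half shows precision loss by storing the same values at 32 and 16 bits, and the second half applies the rough two-discarded-per-one-kept speculation ratio quoted above. The specific values and counts are invented.

```python
import numpy as np

# Precision loss in miniature: the same values stored at 32 and 16 bits.
# Fewer bits per number means less silicon, storage, and bandwidth per
# operation, at the cost of coarser rounding. (8-bit floats are not built
# into NumPy, but the trade-off is the same.)
x32 = np.array([1 / 3, 2 / 3, 1e-4], dtype=np.float32)
x16 = x32.astype(np.float16)
print("bytes per value:", x32.itemsize, "vs.", x16.itemsize)  # 4 vs. 2
print("rounding error at 16 bits:", np.abs(x32 - x16.astype(np.float32)))

# Speculation loss in miniature, using the rough two-discarded-per-one-kept
# ratio cited above. The instruction count is invented.
kept = 1_000_000       # speculative results that turned out to be needed
discarded = 2 * kept   # results computed down mispredicted paths
print(f"useful share of speculative work: {kept / (kept + discarded):.1%}")  # 33.3%
```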
You’ve applied the metric to an Intel CPU, an Nvidia GPU, and Groq’s AI inference chip. Did you find anything surprising?
Austin: Yeah! The gap between the CPU and the GPU was a lot smaller than I thought it would be. The GPU was more than three times better than the CPU. But that was only 4.64 percent [devoted to efficient computing] versus 1.35 percent. For the Groq chip, it was 15.24 percent. There’s a lot of these chips that isn’t directly doing compute.
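For readers who want to check the ratios behind “more than three times better,” here is a quick Python calculation using only the percentages Austin quotes:

```python
# Ratios implied by the percentages quoted in the interview.
cpu, gpu, groq = 1.35, 4.64, 15.24   # percent of silicon doing useful compute
print(f"GPU vs. CPU:  {gpu / cpu:.2f}x")   # ~3.44x, "more than three times better"
print(f"Groq vs. GPU: {groq / gpu:.2f}x")  # ~3.28x
```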
What’s wrong with computing today that you felt like you needed to come up with this metric?
Austin: I think we’re actually in a good state. But it’s very apparent when you look at AI scaling trends that we need more compute, greater access to memory, more memory bandwidth. And this comes around at the end of Moore’s Law. As a computer architect, if you want to create a better computer, you need to take the same 20 billion transistors and rearrange them in a way that’s more valuable than the previous arrangement. I think that means we’re going to need leaner and leaner designs.
This article appears in the September 2025 print issue as “Todd Austin.”