Brian, Brian, Brian. Really, do you have to lie to cover your ass? Variations on this “exploit” have been known since Intel derived the x86 architecture from Honeywell and didn’t bother with the elaborate MMU fix that Multics used to avoid it.
We are talking decades, sir. Decades. And it was covered by Intel patents as a feature. We all knew about it. Intel was proud of it.
Heck, we even saw this flaw manifest in 386BSD testing, so we built our own virtual-to-physical memory-mapping mechanism in software and wrote about it in Dr. Dobb’s Journal in 1991.
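For the curious, here is a minimal user-space sketch of what translating a virtual address in software looks like on the classic two-level i386 MMU (4 KB pages, no PAE). This is not the 386BSD code, just an illustration; the page directory and page table below are toy arrays standing in for physical memory.

```c
/* Toy sketch of a two-level i386 page-table walk done in software.
 * NOT the 386BSD implementation -- just the shape of the idea. */
#include <stdint.h>
#include <stdio.h>

#define PG_PRESENT 0x001u
#define PG_FRAME   0xFFFFF000u

/* Toy "physical memory": one page directory, one page table. */
static uint32_t page_dir[1024];
static uint32_t page_tbl[1024];

/* Resolve a 32-bit virtual address against the toy structures.
 * Returns the "physical" address, or (uint64_t)-1 if unmapped. */
static uint64_t virt_to_phys(uint32_t va)
{
    uint32_t pde = page_dir[(va >> 22) & 0x3FF];   /* top 10 bits    */
    if (!(pde & PG_PRESENT))
        return (uint64_t)-1;

    /* A real walker would follow the frame address stored in the PDE;
     * this toy ignores it and reads page_tbl directly. */
    uint32_t pte = page_tbl[(va >> 12) & 0x3FF];   /* middle 10 bits */
    if (!(pte & PG_PRESENT))
        return (uint64_t)-1;

    return (pte & PG_FRAME) | (va & 0xFFF);        /* frame + offset */
}

int main(void)
{
    /* Map virtual page 0x00400000 to "physical" frame 0x00123000. */
    page_dir[(0x00400000u >> 22) & 0x3FF] = 0x1000u | PG_PRESENT;
    page_tbl[(0x00400000u >> 12) & 0x3FF] = 0x00123000u | PG_PRESENT;

    printf("0x%08x -> 0x%llx\n", 0x00400123u,
           (unsigned long long)virt_to_phys(0x00400123u));
    return 0;
}
```

The point of doing the walk in software is that the OS, not the silicon, decides what a user-mode lookup is allowed to see.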
You could have dealt with this a long time ago. But it was a hard problem, and you probably thought “Why bother? Nobody’s gonna care about referential integrity.” And it didn’t matter. Until now.
Now a fix is going to be expensive. Why? Because all the OS patches in the world can’t work around the flaw without taking a slow software path: extra page-table work on every trip into the kernel. We’re looking at speed penalties of up to 30%, sir.
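If you want to see where that slow path bites, the usual trick is to time a cheap syscall in a tight loop and compare the per-call cost with the mitigation on and off. A rough sketch, assuming Linux and glibc; SYS_getpid is used because glibc may cache getpid(), and the numbers will vary wildly by CPU, kernel, and mitigation settings.

```c
/* Rough per-syscall cost probe: run it with and without the kernel
 * page-table isolation patches and compare the printed figure. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    enum { ITERS = 1000000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        syscall(SYS_getpid);          /* force a real kernel entry */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per syscall\n", ns / ITERS);
    return 0;
}
```

On Linux you can boot with pti=off (or nopti) to get the unmitigated baseline, if you’re willing to run exposed for the comparison.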
Now, we can probably and properly blame the OS side for its obsession with bloated kernels.
But you promised them that if they trusted your processors, you’d compensate for their software bottlenecks and half-assed architectures. And they believed you.
So now you’ve got to fix it, Brian. Not deny it. Fix it. Google didn’t invent the problem. It’s been there in one form or another since the 8086 was a glimmer in Gordon Moore’s eye.
And now it’s going to cost Intel. How much is up to you.