Intel’s X86 Decades-Old Referential Integrity Processor Flaw Fix will be “like kicking a dead whale down the beach”

Brian, Brian, Brian. Really, do you have to lie to cover your ass? Variations on this “exploit” have been known since Intel derived the X86 architecture from Honeywell and didn’t bother to do the elaborate MMU fix that Multics used to avoid it.

We are talking decades, sir. Decades. And it was covered by Intel patents as a feature. We all knew about it. Intel was proud of it.

Heck, we even saw this flaw manifest in 386BSD testing, so we wrote our own virtual-to-physical memory mapping mechanism in software and wrote about it in Dr. Dobb’s Journal in 1991.

You could have dealt with this a long time ago. But it was a hard problem, and you probably thought “Why bother? Nobody’s gonna care about referential integrity”. And it didn’t matter – until now.

Now a fix is going to be expensive. Why? Because all the OS patches in the world can’t compensate for a slow software path. We’re looking at 30% speed penalties, sir.
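For the curious, here’s a back-of-the-envelope model of how a slow software path becomes a 30% penalty. The per-syscall cost and the syscall rate below are my assumptions for illustration, not Intel’s or anyone’s measurements:

```python
# Back-of-the-envelope model of the patch cost. The per-syscall penalty and
# the syscall rate are ASSUMED illustrative numbers, not measurements.
EXTRA_NS_PER_SYSCALL = 600     # assumed extra cost of page-table switch + TLB refill
SYSCALLS_PER_SECOND = 500_000  # assumed rate for a syscall-heavy server workload

def slowdown(extra_ns=EXTRA_NS_PER_SYSCALL, rate=SYSCALLS_PER_SECOND):
    """Fraction of each wall-clock second consumed by the added transition cost."""
    return extra_ns * 1e-9 * rate

print(f"modeled slowdown: {slowdown():.0%}")
```

Plug in a heavier syscall rate or a costlier TLB refill and it only gets worse – the point is that the penalty scales with how often you cross the user/kernel boundary, which no OS patch can wish away.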

Now, we can probably – and properly – blame the OS side for its obsession with bloated kernels.

But you promised them that if they trusted your processors, you’d compensate for their software bottlenecks and half-assed architectures. And they believed you.

So now you’ve got to fix it, Brian. Not deny it. Fix it. Google didn’t invent the problem. It’s been there in one form or another since the 8086 was a glimmer in Gordon Moore’s eye.

And now it’s going to cost Intel. How much is up to you.

MetaRAM Busts RAMBUS Stranglehold?

Sand Hill Road envies RAMBUS. Oh, they don’t envy them their lawsuits, precarious business model or turbulent management structure. But they do envy them their ruthless monopoly of the high-speed DRAM market. RAMBUS successfully competed against the behemoths with a clever architectural enhancement, kept belief in their approach against huge odds, fought back as hard and dirty as the big boys and made licensing deals stick. They are survivors.

When RAMBUS went IPO back in 1997, I was completing work on the first preliminary patent application for InterProphet’s SiliconTCP technology, while William began his hunt for investment. RAMBUS’s IPO was on the minds of many VCs – but not in a good way, surprisingly. RAMBUS’s previous seven years had been fraught with changes in business model and personnel. Instead of setting up a fab, RAMBUS chose to license their technology. Finally, RAMBUS chose to make their stand on the basis of their patents. Don’t let me fool you — investors may crab about the need for “intellectual property protection”, but when it comes to playing with the big boys, they believe in IPR about as much as they believe in the tooth fairy.

RAMBUS has been remarkably successful in defending and enhancing their patents (and yes, I know about their “steering committee” games — coming from the OS side I’ve seen Microsoft and others play the same games, even to the point of doing software patents on work pre-existing by decades). Essentially, they’ve played dirty like Intel, Hynix and all the other guys the VCs said you could never win against. But it has been a very long wild and crazy ride for the payoff — too much for the “10x in 3 years” crowd.

But despite all of RAMBUS’s remarkable turbulence, it has been amazingly successful. During one incredible record-setting day in 1998, I listened to a top-tier VC say that he’d never want a single share of RAMBUS’s stock no matter *how* much money they made. He just hated them. Another top-tier VC rambled on about how “you could make a lot of money with a RAMBUS business model, but they weren’t interested in that”. What they really hated is that there wasn’t a single massive success where they could bow and take their winnings (like VMware in 2006, for example – but they didn’t invest in that one either, because it was run by a husband-wife team that believed open source was valuable. Hmm, beginning to see a pattern?). As Magdalena Yesil (at the time a partner at USVP) liked to intone to me, “Venture capitalists are more capitalist than venture these days”. Chip risk wasn’t as exciting when you could respin any company as an Internet venture and go public with no revenue. And semiconductor companies *are* risky.

Semiconductor companies are also the historical lifeblood of Silicon Valley — hey, that’s why it’s called “Silicon Valley” and not “Internet Valley”! So now we come to MetaRAM, an attempt to steal RAMBUS’s monopoly on architecture. According to Ryan Block of Engadget, “MetaRam uses a specialized ‘MetaSDRAM’ chipset that effectively bonds and addresses four cheap 1Gb DRAM chips as one, tricking any machine’s memory controller into using it as a 4x capacity DIMM.”

Is the technology innovative? Not likely – it sounds like a combination cache and bank decoder, which is not innovative in the least. In fact, you need 4x the number of components on the DIMM, which means 4x the number of current spikes and decoupling capacitors, even if you put the chips together in the same package. Because you also need a fifth chip (the controller), you complicate things even more. There is no way you can approach the triple-zero (volume, power, cost) sacred to chip designers with such a design, because one single high-speed high-capacity chip will eventually win out, given the proliferation of small expensive gadgets demanding the lowest volume and power. In a world of gadgets like iPods, cellphones, laptops, PDAs and the like, cost is very important but *not* the most important quantity. So RAMBUS doesn’t have a lot to worry about here.
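To see why I’m unimpressed, here’s a toy sketch of what a bank decoder does – split a flat address into a chip select and an offset. This is my illustration of the general idea, not MetaRAM’s actual design:

```python
# Toy bank decoder, NOT MetaRAM's actual design: present four 1 Gbit
# (128 MB) DRAM chips to the memory controller as one flat address space.
CHIP_BYTES = 2**27  # one 1 Gbit chip = 128 MB
NUM_CHIPS = 4

def decode(addr):
    """Split a flat DIMM address into (chip select, offset within chip)."""
    if not 0 <= addr < NUM_CHIPS * CHIP_BYTES:
        raise ValueError("address out of range")
    return addr // CHIP_BYTES, addr % CHIP_BYTES
```

The controller doing this split is the fifth chip I mentioned above – every access pays for the extra decode, and you still have four dies’ worth of current spikes behind it.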

Hynix has been fighting a losing battle against RAMBUS ever since getting hit with a whopping $306M patent infringement judgment in 2006 (since reduced to $133M), and RAMBUS is still going for more. These are the same guys who pleaded guilty in 2005 to a DOJ memory price-fixing scheme from 1999-2002 and paid a $185M fine. There is no love lost in the memory biz.

So where does little MetaRAM come in? When technology fails, maybe a clever business model will do. MetaRAM’s big claim to fame is cost reduction – not for gadgets or laptops but, according to Fred Weber, CEO of MetaRAM, for “personal supercomputers” and “large databases”. And who is the big licensee for this so-called technology? Why, it’s Hynix of course, who announced they will make this lumbering memory module. They claim it will be lower power. I think I’d like an independent evaluation on this point, but it will probably be lower cost. Is it worth it? Given reliability considerations, that also remains to be seen. But the moral of this saga is simple – human memories are longer than memory architectures in this business, and the real puppet-master behind the throne (Kleiner Perkins) is sure to walk away with the money. I wish I could say the same for the customers.

When Video Kills Your Drive – Quicktime Waxes Track 0

Alright! Yes, sometimes I do read Slashdot when it’s amusing, and the discussion of how you can create your own custom panic screen (or BSOD window) for OS X via an API is a fun one (my son Ben points this stuff out for me – he feels it’s one of his sacred tasks). Joke panic screens have been around a long time, but the question of “how much information to give people” has led to many not-so-amusing battles, especially when we were creating 386BSD releases and offered the Apple-style approach as an option long before Apple switched to BSD and did the same thing we did. I like information and transparency, but not at the cost of frustrating and annoying lots of people…

But aside from this old debate, probably the funniest *real* programming error discussed (yes Apple, you did it again) was the Quicktime capture bug a video engineer described: it merrily fills up the drive with video and then, when you’ve run out of space on the disk, overwrites allocated blocks. Yes, allocated blocks! And where did that leave our poor engineer? With no track 0. Even Apple couldn’t put that one back together again – they had to give him a replacement drive, and that’s a complete admission of defeat, because as everyone knows Apple is very cheap. (Disclaimer – I have worked in manufacturing, and we are all very cheap in this regard, because hardware returns make a nasty entry on your income statement – show me a systems vendor that doesn’t care about hardware returns, and I’ll show you a vendor that won’t please shareholders.)
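The defensive fix is trivial, which is what makes the bug so galling. Here’s a sketch of the free-space check a capture path should make before every write – a hypothetical helper of mine, not Apple’s code:

```python
import os
import shutil

def safe_append(path, data, reserve=64 * 2**20):
    """Append `data` only if at least `reserve` bytes remain free afterward.

    Hypothetical helper, not Apple's code: the point is that a capture tool
    should refuse to write rather than clobber allocated blocks when full.
    """
    directory = os.path.dirname(os.path.abspath(path))
    free = shutil.disk_usage(directory).free
    if free - len(data) < reserve:
        raise OSError(f"only {free} bytes free; refusing to write")
    with open(path, "ab") as f:
        f.write(data)
```

Refusing a frame loses a little video; scribbling over track 0 loses the drive. Not a hard trade-off.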

But this begs the question – is there another way to recover from a track 0 loss on a disk drive? Well, we faced the same problem at Symmetric Computer Systems years ago, and we recovered that disk, albeit not the way you might think…

Itanic Readies for Final Sinking – Multiflow and HP Tech Aren’t Enough

Alas, “Itanic”, aka “Itanium”, is resurrected again, this time debuted by Intel as a “chip with two brains”. But the critics aren’t impressed, and to save money for senior management Itanium soirees, Intel middle managers are to be tossed into the cold big blue.

Well, this is no surprise to those chip-watchers who talked down Itanium from the beginning. While Dell and IBM have fled to calmer X86 waters with the likes of AMD, HP has been steadfast, selling 80% of the chips Intel has shipped.

Of course, many wonder why HP is such a firm holdout when everyone else is bailing. To understand how we got where we are, we actually have to ask the question “Where did the Itanium come from?” If you think it all sprang fully formed from Andy Grove’s head, you’re quite wrong. And you’d miss the story of long-ago acquisitions, agreements, and ambitions…

Microsoft’s Ultimate Throughput – Change the Compiler, Not the Processor

I like people who go out on a limb to push for some much-needed change in the computer biz. Not that I always like the idea itself – but moxie is so rare nowadays that I have to love the messenger despite the message. So here comes Herb Sutter, Microsoft Architect, pushing the need for real concurrency in software. Sequential is dead, and it’s time for parallelism. Actually, it’s long overdue in the software world.

In the hardware world, we’ve been rethinking Von Neumann architecture for many years – SiliconTCP from InterProphet, a company I co-founded, uses a non-Von Neumann dataflow architecture (state machines and functional units – not instruction code translated to Verilog, because that never works) to bypass the old-style protocol stack in software, because an instruction-based general processor can never be as efficient for streaming protocols like TCP/IP as our method. Don’t believe me? Check out Figures 2a-b for a graphic on how much you wait for store and forward instead of doing continuous flow processing – the loss for one packet isn’t bad, but do a million and it adds up fast.
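If you don’t have the figures handy, here’s a back-of-the-envelope model of why store and forward loses to continuous flow once you multiply by a million packets. The link speed, packet size, header size and stage count are assumptions for illustration, not InterProphet’s measured numbers:

```python
# Back-of-the-envelope only: these are assumed illustrative numbers,
# not InterProphet's measurements.
LINK_BPS = 1e9          # assumed 1 Gb/s link
PACKET_BITS = 1500 * 8  # one full-sized Ethernet frame
HEADER_BITS = 14 * 8    # enough header to make a forwarding decision
STAGES = 3              # assumed number of buffering stages in the path

def store_and_forward(stages=STAGES):
    """Each stage buffers the entire packet before passing it on."""
    return stages * PACKET_BITS / LINK_BPS

def cut_through(stages=STAGES):
    """Each stage forwards as soon as the header arrives: continuous flow."""
    return PACKET_BITS / LINK_BPS + (stages - 1) * HEADER_BITS / LINK_BPS

extra = store_and_forward() - cut_through()
print(f"extra latency: {extra*1e6:.1f} us/packet, "
      f"{extra*1e6:.0f} seconds per million packets")
```

One packet’s worth of extra buffering is microseconds and nobody cares; a million packets later it’s tens of seconds of accumulated waiting, and everybody cares.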

It’s all about throughput now – and throughput means dataflow in hardware. But what about user-level software applications? How can we get them the performance they need when the processor is reaching speed-of-light limits? If a signal needs a full clock cycle just to cross a typical processor from one end to the other at the speed of light at 7-8 GHz, anyone stuck in sequential processing will be outraced by Moore’s Law, multiple cores and specialized architectures like SiliconTCP.
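The arithmetic behind that 7-8 GHz figure is simple – it corresponds to a signal path of roughly 4 cm at the vacuum speed of light, and on-chip signals propagate slower still:

```python
C = 3.0e8  # speed of light in vacuum, m/s (on-chip signals are slower still)

def max_clock_hz(path_meters):
    """Highest clock at which a signal can cross `path_meters` in one cycle."""
    return C / path_meters

print(f"4 cm path: {max_clock_hz(0.04)/1e9:.1f} GHz")
```

Halve the path and you double the ceiling – which is exactly why chips keep shrinking and splitting into cores instead of just clocking faster.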

Misplaced Software Priorities

For a perspective from William Jolitz, co-developer of 386BSD, on the need to separate “innovation” from “renovation” in design, read Misplaced Software Priorities today. It may gore a few oxen – especially those who work in the architectural flatland of low-level software – but given the rapid outsourcing of this very same area to low-cost programmers in India and China, it might be time to listen to an alternative view from a long-time Silicon Valley developer and entrepreneur who’s done more for the acceptance of open source than all the pundits put together.

Of course, if someone wants to stay low-level, they can always learn Mandarin, right?

Tom Foremski Interviews Doug Engelbart

Doug Engelbart is a computer legend, but he is also still very much alive and has plenty to say. Tom Foremski of SiliconValleyWatcher had a poignant chat with him. The upshot – have the last 20 years been a failure?

Some snippets from Tom’s excellent interview:
” ‘How do you deal with society when its paradigm of what is right is so dominant?’ Doug Engelbart, the 1960s computer visionary asked me the other evening. It’s a question he has pondered many times over the past 20 years or so, ever since his research funding was taken away.

Mr Engelbart and his teams of researchers at the Stanford Research Institute (SRI) shaped the look and feel of the PC, as John Markoff chronicles in his latest book What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry.

Mr Markoff’s book raises the profile of Mr Engelbart, well known as the inventor of the computer mouse, and less well known for his seminal work in creating many of the concepts later found in the personal computer. Mr Markoff returns credit to where it is due.

What the book does not chronicle is how the rise of the PC killed funding for Mr Engelbart’s work.

By 1979, he had lost all funding from SRI because of unfavorable peer review.

‘The other research groups said what I was doing could be done better with microcomputers or through machine-based artificial intelligence. That was the dominant culture at the time. What people don’t realise is that there are many different cultures and not one is right.’ Mr Engelbart told SVW.

As a result of his experiences, he questions whether the past 20 or so years of his life have been a failure.

That’s how long Mr Engelbart has been trying to raise funding to continue his research into human machine interfaces and solving large, complex problems using networked software.

But the culture of our time has been unfavorable to his ideas of developing human-centric computer applications using one big powerful computer with many users. The paradigm of the PC revolution is that everyone gets to have a computer, no time-sharing needed.”

No, Photoshop is Not “real life”, Dave

Dave Pogue today reviews the Bushnell binoculars, and just can’t figure out why they’re so blurry. So I told him “it’s obvious”. Here’s why, for the rest of you:
1. They didn’t bother to adjust the focal plane of the sensor to coincide with the focal point of the binocular eyepiece (i.e., a manufacturer defect).
2. To confirm this, adjust the binoculars slightly out of focus and take shots – I bet you’ll find the image gets better if the sensor is not fixed-focus.
3. If it doesn’t change at all, that means the sensor is fixed-focus and the focus is set wrong. This is not easily adjustable by the user, but if you disassemble the binoculars and adjust the focus manually, you can correct the problem.

But wait – there’s more. I have a long list of errata on digital cameras – most recently the Canon SD200-300 on-camera editing issues, discovered by us at ExecProducer over the course of handling production issues. So I’m quite familiar with these and other annoying problems (light-level problems, for example, and resolution issues) and the best ways of handling them. So another nit with Dave is a very basic one – using Photoshop is not “real life”, as anyone in serious astrophotography will tell you.

Cookies and Popups and Ads, Oh My!

Another marketing lament on how cookies and spyware and popups and intrusive ads are ruining it for the good marketers ends up in my inbox. Why, oh why, they cry, can’t someone come up with a way to make the Internet a wonderful place for selling and keep the bad guys from ruining a good thing? Alas. 🙂

It’s a matter of architecture and trade-offs, courtesy of those techie types no one can understand. Simply put, online marketing people have to give up on magic solutions like cookies and popups and sneaky tracking – they’re all easily disintermediated by the same tech folks who programmed them in the first place. Live by the sword, and die by the sword.

To paraphrase a famous campaign slogan, “It’s the content, stupid”! Provide good content on the Internet, tailored for your audience, and they will watch it and the relevant ads – just like TV. Rely on tricks, and in “Internet time” someone will put out a way to block you. Amazing thing, the Internet.

Take it from an Internet expert who actually knows the insides of all this stuff – it really is this simple. And it places online marketing back under the control of the online marketing specialist where it belongs.

Inside the Black Box or Outside the Flim Flam

Reading the article today There Are No Black Boxes in Online Marketing, I had to laugh when Tom Hespos laments “To me, it seems ridiculous that anyone in this industry would want to put blind faith in a piece of technology without understanding fully why it works”. Absolutely! It would be wonderful if marketing and sales really wanted to know how things worked inside the “black box”. This would make the bona fide technologist who labored over a real product very happy.

But the reason we have black boxes is simple – the customers don’t want to know anything about how it works – they just want it to give them the results they want when they want them.

This desperate willful ignorance on the part of the “don’t tell me about the technology, just tell me how it works” online marketing crowd is fertile ground for the flim-flam product that spews out worthless “results” in pretty charts. Like what, you may ask? Gee, like security that isn’t, spam filters that don’t, and software “accelerators” that slow the processor – I could go on for hours, and I haven’t even hit hardware yet.

Let’s face it – an ordinary sincere technologist doesn’t have a chance next to those magical solutions. If she says they don’t work, she’s told that her competitor has it and it does work. If she argues with her customer, she’s blamed for “losing the sale”.

So until black box results are tied to an online marketer’s performance (and job security), expect more black box solutions and very few honest answers.