You may have heard that servers at Google have been packed so tightly they catch fire in the datacenter. It turns out power dissipation is the key – even laptops run hot, and stacked servers are fire hazards. It’s the power, everybody – power is the limiting factor in communications (according to Broadcom). What a difference a few years make. When I was invited to a meeting about InterProphet and SiliconTCP over at a major infrastructure company back then, the gal assigned to evaluate my work laughed at me when I seriously raised the issue of low power and TCP. Of course, she also wanted to boast about some thesis she’d done on TCP about twenty years after everyone else. One (male) engineer, on hearing the description of the meeting, said it was the first all-woman “pissing competition” on record. I told him that being assigned a woman for due diligence because I’m a woman too doesn’t mean I get a free ride – frequently it’s the opposite. Alas, there’s no secret sisterhood in business, but envy is universal.
So, how do you hot-swap servers without the heat? The key is low-power TCP – you can’t keep burning up the stack by adding more and more processors (not to mention the management overhead), even in the datacenter (sorry, Intel). You need the lowest-power TCP stack possible. And that means a no-processor design.
The nontech answer to “what does a low-power TCP hardware implementation do for me” (what a mouthful) is that you get real full-motion video on cellphones without draining the battery as quickly. Since I’m doing automated full video production these days, that’s my interest too. Paul Baran wrote the basic patents on cellular communications for audio/video. But he couldn’t achieve it fully because of this limit. He was so far ahead of his time it’s amazing. I wonder what he’d think now?
So Silicon Valley isn’t really dead technologically, despite what some people like to say. There are a lot of technology problems still to be solved, discussed in black and white in some legendary patents and papers from people like Baran that created entire industries. It’s all right here. When I read patents, it’s pretty clear to me that “everything *hasn’t* been invented yet”.
Unix was invented when I was in high school. The Internet – gee, it was old hat at Berkeley by the time I got there. Packet-switching? Baran. Cellular? Baran et al. But no one man or woman could anticipate *everything*, because a lot of the pieces just weren’t in place. So the inventors carefully outlined why things couldn’t work, or came up with nonviable solutions because of these missing pieces. It’s all there – if anyone wants to take the time to read it.
Love the “Lucy holding the football for Charlie Brown” quote by Nathan Brookwood in CNET about the Dell / AMD relationship. It always seems so close, and then it slips away. Intel always holds the football.
Nathan may be right in saying that last year came the closest ever, because of Intel’s slips in so many areas. But instead of running with the ball, AMD fumbled by assuming they could rely solely on their 64-bit advantage to make the sale. That isn’t enough, I was told. One exec who’s negotiated agreements with Intel, AMD, and companies like Dell told me that AMD needs to get “the whip hand” on Intel in some way to close the Dell deal. AMD doesn’t have that whip hand. And I know why. It came up while chatting with an editor who wanted to know the background on Intel’s preannounced new product. You see, he knew I’d been there, so he wanted the story again. So here it is…
Ashlee Vance wrote that Intel will be introducing “I/O Acceleration Technology” to “attack greedy TCP/IP stack” consumption – in other words, latency through the stack. “Customers often find that their servers spend an inordinate amount of time dealing with network traffic when they should be hammering away on application data.” This sounds very familiar – we told them years ago that “all processors wait at the same speed”.
Back in 1997, when I filed a provisional patent on just such an approach, I had an interesting meeting with Intel’s processor side. We called the technique ROSE then, for ReOrdering Segment Engine, part of a product we envisioned called the Network Accelerator – and yes, this was before Adaptec and Alacritech and all those other TOE guys. It was the first in a series of parallel processing refinements dealing with the layer 2-7 issues of TCP/IP (the discussion was under NDA).
Everyone says I was amazingly ahead of my time. As Rick Merritt of EE Times writes on the possibility of using storage interconnects: “Competitors such as Broadcom Corp., which have existing 1-Gbit R-NICs, will not be able to scale to the greater bandwidth because they lack the ASIC state machine architecture…”.
Well, now I’m pleased that I wrote a paper, “All You Need is TCP: EtherSAN and Storage Networks”, for the global storage network workshop last May, and even more grateful for the feedback I received from people like Jim Gray of Microsoft, John Wakerly of Cisco, and Greg Pfister of IBM. Gordon Bell was an earlier advocate of the InterProphet technology and urged Chuck Thacker to take a look at it several years ago. So it appears this is finally becoming a topic of serious consideration – although I’ve seen it coming for many years.
The fundamental scalable state machine architecture patent (“TCP/IP network accelerator system and method which identifies classes of packet traffic for predictable protocols“) was filed in 1997 and granted in 2000. A term memory patent (“Term Addressable Memory of an Accelerator System and Method“) was independently filed and granted in July of 2004. It’s a better memory approach that works hand-in-glove with a state machine architecture and deals with certain flaws in the original design.
OK. I know the patent attorney said I got the patent grant (“Term Addressable Memory of an Accelerator System and Method“) a while ago, but it really is different when you actually hold it in your hand! I was so excited that I told Vint Cerf about it and ever gracious, he said “congratulations, Lynne – persistence counts!” Means a great deal to me to hear that from the “Father of the Internet”.
It’s my 2nd parchment, but there’s more in the queue. This patent relates to the limits found in the original design for InterProphet, discussed in the SiliconTCP paper I put together earlier this year. Work on this patent was done after InterProphet went into low-key mode because of a lack of commitment to it as a private venture. But just because it’s easy to bet against someone, knowing that life isn’t fair, doesn’t mean it’s right. Karma rules!
I’m happy to keep my word and execute it well. It just takes a bit more time to make it to shore when the winds are set against you. But winds shift, and so do trends.
This little article just in from a dedicated Cisco engineer. Looks like Cisco is taking a “broadside” from Broadcom in the “TCP offload” universe.
Of course, notice the weasel words: “selected network streams”. In contrast, at InterProphet we showed a 10x advantage on all network streams on NT at Microsoft’s offices in Redmond in 1998, with a patented design. So it’s taken Broadcom and Microsoft working together about six years to kind of make something work, but not really. Not very impressive.
Alex Cannara dropped an interesting paper on my desktop discussing congestion control in grid networks. And its results confirm what I and others have seen over the years – Vint Cerf saw as early as 1998 that hop-by-hop reliability preserving end-to-end semantics in the routers was the real key to handling this issue. Vint is also a renowned wine expert, and treated me and William to a wonderful tour of fine wines at the Rubicon in San Francisco, where we had a memorable discussion on exactly this issue.
And once again, an interesting item in the postel.org end-to-end group – “An interesting version of TCP was created a few years ago at a large data-storage-system company here — essentially, the TCP receive window was reduced to Go/No-Go, and startup was modified, so the sending Unix box would blast to its mirror at full wire rate from the get go. ACKs would have meaningless Window values, excepting 0, because sender and receiver had similar processing/buffering capability. Loss produced replacement, via repeated ACKs. Being LAN-based system overall made all these mods workable. But clearly, the engineers involved found normal TCP wanting in the ability to deliver data on high-speed links.”
Interesting how legends develop. This project was the “flamethrower” demo, done with a wirewrap version of SiliconTCP on a DEC PAM card with a NIC wired on (and that’s exciting with 100MHz logic).
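For readers who want to see what that binary window buys you, here’s a toy Python sketch of the go/no-go idea described in the quote above: the sender blasts at wire rate whenever the window says “go”, the only flow control is buffer-full vs not, and a lost segment is simply resent on a later burst. All the numbers (rates, buffer size, loss probability) are made up for illustration – this is a model of the concept, not the actual flamethrower hardware.

```python
import random

random.seed(1)

WIRE_RATE = 10     # segments the sender can blast per tick (assumed)
BUF_CAP   = 40     # receiver buffer, in segments (assumed)
DRAIN     = 10     # segments the receiving box processes per tick (assumed)
LOSS_P    = 0.01   # per-segment loss probability on the LAN (assumed)

def go_no_go_transfer(total):
    """Binary flow control: the advertised window is either 0 (buffer
    full, stop) or 'go' (blast a full burst). No slow start: the sender
    runs at wire rate from the first tick. Losses are replaced on a
    later burst, as repeated ACKs keep pointing at the gap."""
    delivered = 0   # segments consumed by the receiving application
    buffered = 0    # segments sitting in the receiver buffer
    ticks = 0
    while delivered < total:
        ticks += 1
        if buffered < BUF_CAP:                       # window says "go"
            burst = min(WIRE_RATE, total - delivered - buffered)
            for _ in range(burst):
                if random.random() >= LOSS_P:        # segment survives
                    buffered += 1
                # else: lost, resent in a later burst via repeated ACKs
        consumed = min(DRAIN, buffered)              # receiver drains
        buffered -= consumed
        delivered += consumed
    return ticks

print(f"ticks to deliver 1000 segments: {go_no_go_transfer(1000)}")
```

Because sender and receiver have matched capacity, the transfer runs within a few percent of the wire-rate floor (100 ticks for 1000 segments here) – which is exactly why the engineers in the quote found standard TCP’s probing startup wanting on a fast LAN.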
Interesting line of discussion passed through my email regarding the future of TCP. In particular, Alex Cannara decided to take on a few of the more “conservative” elements on dealing with end-to-end flows via interior management of links.
As Alex puts it: “Apparently, a great bias has existed against this sort of design, which is actually very successful in other contexts”. Even a very “big old name in Internet Land” liked this type of approach, for the “…reason it [TCP] requires the opposite of backoff is because it doesn’t have the visibility to determine which algorithm to choose differently as it navigates the network at any point in time. But if you can do it hop by hop you can make these rules work in all places and vary the algorithm knowing you’re working on a deterministic small segment instead of the big wide Internet.”
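The intuition in that quote can be put on a back-of-the-envelope footing. Over a path of n lossy hops, recovering losses only end to end means every retry re-crosses the whole path, while hop-by-hop recovery retries only the one link that failed. A minimal sketch, using the standard idealization that a failed end-to-end attempt is charged a full traversal and that loss detection is free:

```python
def expected_hop_traversals(n_hops, p_hop):
    """Expected link transmissions to push one packet across n_hops,
    each delivering with probability p_hop, under two idealized
    recovery schemes (perfect loss detection, zero signaling cost)."""
    # End-to-end: an attempt succeeds with probability p^n, so on
    # average 1/p^n attempts, each charged n hop traversals.
    end_to_end = n_hops / (p_hop ** n_hops)
    # Hop-by-hop: each link retries locally, 1/p tries per hop.
    hop_by_hop = n_hops / p_hop
    return end_to_end, hop_by_hop

e2e, hbh = expected_hop_traversals(10, 0.9)
print(f"end-to-end: {e2e:.1f} traversals, hop-by-hop: {hbh:.1f}")
```

With 10 hops at 90% per-hop delivery, end-to-end recovery costs roughly 28.7 traversals per packet against about 11.1 hop by hop – and the gap widens exponentially with path length, which is the quantitative version of “deterministic small segment instead of the big wide Internet”.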
Let’s take this further.