Oh where Oh where did my protocol engine go?

In a walk down memory lane, Craig Partridge mentioned an XCP meeting and Greg Chesson, and Alex Cannara replied: “But, we still have suboptimal network design, insofar as we depend on TCP from the ’80s and a glacial IETF process — all this while now having complete web servers on a chip inside an RJ45 jack! So maybe his ideas for SiProts were something to consider, even if they weren’t right on target?”

For those not in the know, Greg Chesson stepped on a lot of “TOEs” (hee hee) first, back in the early 1990s, filing a lot of patents on protocol engines (PEI, backed by HP at the time).

I have a slide from a presentation I did for Intel back in 1997 explaining why he failed. Simply put, preprocessing likely conditions based on heuristics always failed in the general case: the preprocessor commonly fell behind the very processor it was put there to speed up, so on average the software stack was usually faster.
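To make the arithmetic concrete, here’s a toy cost model of that slide’s argument (a sketch with made-up per-packet costs, not measurements): a heuristic fast path only wins while its hit rate stays above a break-even point, and below that point the preprocessor is the slower unit and falls behind at line rate.

```python
# Toy model of a heuristic fast-path preprocessor vs. a plain software stack.
# All per-packet costs and hit rates are hypothetical, purely for illustration.

def software_cost(n_packets, t_soft=1.0):
    """Plain software stack: every packet pays the same cost."""
    return n_packets * t_soft

def preprocessor_cost(n_packets, hit_rate, t_fast=0.6, t_handoff=3.0, t_soft=1.0):
    """Heuristic fast path: hits are cheap, but a miss pays for the failed
    fast-path attempt, a handoff penalty, and the full software path anyway."""
    hits = n_packets * hit_rate
    misses = n_packets - hits
    return hits * t_fast + misses * (t_fast + t_handoff + t_soft)

if __name__ == "__main__":
    n = 1_000_000
    for hit_rate in (0.99, 0.95, 0.90, 0.85, 0.80):
        ratio = preprocessor_cost(n, hit_rate) / software_cost(n)
        print(f"hit rate {hit_rate:.0%}: preprocessor/software cost = {ratio:.2f}")
```

With these made-up numbers the break-even hit rate is 90%; any ratio above 1.0 means the “accelerator” is now the bottleneck, its queue grows without bound at line rate, and the plain software stack would have been faster.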

This same FEP process has been repeatedly repatented in network processors — I reviewed several — but they never got the methods that let the processing complete without falling behind (especially on checksum, though there are other conditions too). I always thought Greg could sue a number of network processor companies for infringement, but since they all fail in the same way, who the hell cares.
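The checksum point is the cleanest example of why completion is the hard part: the standard Internet checksum (RFC 1071) covers every 16-bit word of the data, so a cut-through preprocessor can’t hand off a verified segment until the last byte has arrived and every carry has been folded back in. Here’s that computation in a minimal Python sketch:

```python
# RFC 1071 Internet checksum (one's-complement sum of 16-bit words).
# The result depends on every byte, so it can't be finalized early.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # one's complement
```

Any preprocessor that tries to guess its way past this still has to touch every word; the only question is whether it can keep up with the main processor while doing so.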

Greg made his money at SGI, by the way, and look how that company eventually turned out — lots of “throw code over the fence” to Linux, which undermined their own sales of systems. A very self-destructive company.

Your paper is silly, smells, and is ugly, ugly…

Jonathan M. Smith has an interesting idea on how to avoid blackballing in tech paper reviews.

For those not clued in (or fortunate enough to have avoided academic paper submission follies), in order to have an academic paper accepted, one must submit to double-blind review by anonymous experts in the field to evaluate whether a paper is interesting and appropriate to the conference venue without being dazzled (or tainted) by knowledge of who actually wrote it.

While in theory this approach seems quite reasonable, in practice one tends to find that papers which push the envelope, contain ideas outside the accepted compact, or offer radically new treatises often meet with less-than, shall we say, open-minded and even-handed analysis?

And since it’s pretty easy to guess whose paper it is anyway (or even find out with a Google search on the keywords, which everyone does to figure out whether “someone else wrote something like this before, so I can use their results in my analysis”), the “double” in double-blind doesn’t really work.

So Mr. Smith has proposed (at SIGCOMM in the OO session) a simple process: 1) that all reviews be public, and 2) that they be signed by the reviewer. According to Mr. Smith, “That way history gets to see who was right – and who was wrong.”

Sounds good to me – I’m willing to take on the judgement of history in my work, since that’s only rational. Any other takers?

Excuse my Bios, but…

Check out the Phoenix Technologies announcement on their BIOS hardware authentication scheme.

People like Bruce Schneier lecture us all on how hard it is to create a trusted network verification model that holds up under a variety of conditions and needs. And who actually needs this? Very few.

Some of you may recall how Microsoft had to pull back on their extra-evil hardware authentication when it turned out XP didn’t work when you added a device like a DVD drive. Everybody complained, vendors got really annoyed with customer support, and Microsoft got blamed (well, it was their fault). They’re trying to moderate this in practice, but it’s not a deterministic strategy, for a host of technical and practical reasons.

But Microsoft has an OS monopoly and can get away with failure. Does Phoenix have a monopoly on the BIOS? Don’t think so. The customer hates this process of verification – it makes him feel like a criminal. So the vendor notices the customer hates the product, buys another BIOS from AMI or any of the hundreds of others, gets rid of this nonsense for the customer, and the problem is self-correcting.

And now Phoenix is going to disallow network access from people who don’t match this same faulty hardware profile? Great – it didn’t work before, so let’s make it even bigger this time. Sounds like lunacy? It is. Phoenix has been trying this trick for years and years – and it won’t work unless you keep people from upgrading / fixing their PCs.

Can you imagine how people would react if you told them they couldn’t change the oil filter on their cars, or take it to the local oil changers and buy a standard filter? You’d have to throw the car away – sorry Charlie.

As the real security guys will tell you, verification of trusted networked sources is actually a very difficult game, even using hardware and secure links. You fool yourself into believing that you can live in a black-and-white universe. In reality, that mindset brings more security problems: the verification window ends up too narrow, so exceptions get made to the rules, and soon you’re back to the same corner cases as before. Real success comes down to knowing the character of the individuals and the use and practice of measures, such as the friend-or-foe rules-of-engagement process used in the military.

The last thing anyone needs is an Internet where people are refused access for arbitrary reasons, subverting the entire point of TCP/IP and networked communications – to exchange information – by preying on people’s fears of the “bad guys” gaining access. Maybe if Microsoft would just secure their OS better, we wouldn’t have this fear in the first place.

Now that I’ve pointed out the problem from the perspective of an Internet expert, we could all use a crafty rebuttal from a security expert as to why this is just another brand of snake oil. Hey Bruce, where are you?

Sometimes a Legend

And once again, an interesting item in the postel.org end-to-end group – “An interesting version of TCP was created a few years ago at a large data-storage-system company here — essentially, the TCP receive window was reduced to Go/No-Go, and startup was modified, so the sending Unix box would blast to its mirror at full wire rate from the get go. ACKs would have meaningless Window values, excepting 0, because sender and receiver had similar processing/buffering capability. Loss produced replacement, via repeated ACKs. Being LAN-based system overall made all these mods workable. But clearly, the engineers involved found normal TCP wanting in the ability to deliver data on high-speed links.”
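For anyone curious what a “Go/No-Go window” looks like, here’s a minimal sender-side sketch under the assumptions in that description: no slow start, no congestion window, the only advertised window value that matters is zero, and loss is replaced on repeated ACKs. The names and structure are mine, purely illustrative, and not the actual implementation being described.

```python
# Sketch of a "Go/No-Go window" sender: blast at wire rate whenever the
# advertised window is nonzero, and retransmit on repeated ACKs.
def go_no_go_sender(segments, ack_stream):
    """segments: list of payloads to send; ack_stream: iterable of
    (ack_seq, window) pairs seen by the sender. Returns the event log."""
    log = []
    unacked = []                     # (seq, payload) sent but not yet ACKed
    next_seq = 0
    last_ack, dup_count = -1, 0

    for ack_seq, window in ack_stream:
        if window == 0:
            continue                             # No-Go: the one window value honored
        if ack_seq == last_ack:
            dup_count += 1                       # repeated ACK => replace the loss
            if dup_count >= 3 and unacked:
                log.append(("retransmit", unacked[0][0]))
                dup_count = 0
        else:                                    # new ACK advances the left edge
            last_ack, dup_count = ack_seq, 0
            unacked = [(s, p) for s, p in unacked if s >= ack_seq]
        while next_seq < len(segments):          # Go: blast at full wire rate
            unacked.append((next_seq, segments[next_seq]))
            log.append(("send", next_seq))
            next_seq += 1
        if not unacked and next_seq == len(segments):
            break                                # all data sent and acknowledged
    return log
```

What’s interesting is what’s missing: no cwnd, no slow start, and a receive window degenerated to a single bit, which is exactly why it only worked between matched endpoints on a LAN.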

Interesting how legends develop. This project was the “flamethrower” demo, done with a wirewrap version of SiliconTCP on a DEC PAM card with a NIC wired on (and that’s exciting with 100 MHz logic).

We demo’d this to Microsoft, venture firms, and lots of other companies back in the summer of 1998. One Microsoft exec (Peter Ford) noted that we were so overloading the standard NICs that an “etherlock” condition was likely to occur. Etherlock, for those who don’t know, occurs when all of the bandwidth is consumed and nothing else can communicate because there is effectively no idle time. And yes, we saw this occur.

One of the more interesting things we found is that many “standard” NICs were not standards-compliant. I still have the wirewrap on my wall alongside a production board.

The Power of TCP is in its Completeness

An interesting line of discussion passed through my email regarding the future of TCP. In particular, Alex Cannara decided to take on a few of the more “conservative” elements over handling end-to-end flows through interior management of links.

As Alex puts it: “Apparently, a great bias has existed against this sort of design, which is actually very successful in other contexts”. Even a very “big old name in Internet Land” liked this type of approach, for the “…reason it [TCP] requires the opposite of backoff is because it doesn’t have the visibility to determine which algorithm to choose differently as it navigates the network at any point in time. But if you can do it hop by hop you can make these rules work in all places and vary the algorithm knowing your working on a deterministic small segment instead of the big wide Internet.”

Let’s take this further.

In math we deal with continuous functions differently than discontinuous ones, and TCP algorithms know this – they have different strategies for each – but when you get a mixture across the network, you’re limited to statistics. If we limit the inhomogeneity, then the TCP endpoints can optimize the remaining result. In that case, the gross aspects limiting performance no longer dominate the equation.

So you can’t overtransmit or overcommit a link if you’re disciplined – you only fill in your piece of the puzzle, the idealized link, from the perspective of what you know.

Has the hobgoblin of statistics ruined any ability to do a deterministic job (with metrics and cost value) of improving loss ratios and understanding what is really happening at any point along the way? If so, this would in turn validate / prove a statistical model. But think of all the projects that wouldn’t fly.

At InterProphet we proposed that every hop get the best possible effect – basically the same level of end-to-end principle in each segment, instead of viewing all hops as one end-to-end segment – by deploying low-latency TCP processing as a bucket brigade throughout the infrastructure. Now, the pushback from the manufacturers was cost, but we met all cost constraints with our dataflow design (which works, by the way, and is proven).
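As a rough sketch of the bucket-brigade idea (a toy model with made-up buffer sizes and rates, not InterProphet’s actual dataflow design): each hop forwards only when the next hop has buffer room, so a slow interior link shows up as local backpressure rather than as end-to-end loss and backoff.

```python
# Toy hop-by-hop ("bucket brigade") flow control: each hop forwards only when
# the next hop has room, so congestion becomes backpressure, not loss.
def bucket_brigade(n_packets, hop_capacity, drain_rates):
    """drain_rates: packets each hop can forward per tick.
    Returns the number of ticks to deliver everything, with zero drops."""
    hops = [0] * len(drain_rates)        # queue occupancy at each hop
    delivered, injected, ticks = 0, 0, 0

    while delivered < n_packets:
        ticks += 1
        # drain from the last hop backwards so room opens downstream first
        for i in reversed(range(len(hops))):
            can_send = min(hops[i], drain_rates[i])
            if i == len(hops) - 1:
                hops[i] -= can_send
                delivered += can_send
            else:
                room = hop_capacity - hops[i + 1]
                moved = min(can_send, room)      # backpressure: wait, don't drop
                hops[i] -= moved
                hops[i + 1] += moved
        # the source injects only when the first hop has room
        new = min(hop_capacity - hops[0], n_packets - injected)
        hops[0] += new
        injected += new
    return ticks

if __name__ == "__main__":
    # A slow interior hop (2 packets/tick) sets the pace for the whole brigade,
    # but nothing is dropped and no end-to-end backoff is ever triggered.
    print(bucket_brigade(n_packets=100, hop_capacity=10, drain_rates=[10, 2, 10]))
```

The endpoints still run TCP end to end; the brigade just keeps each segment of the path operating within what it deterministically knows, which is the point of the argument above.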

The power of this approach is amazing. Instead of simplistically thinking of end-to-end as just two cans and a string, we can apply end-to-end completeness on every segment.

Very few people have understood this — looks like Alex does. And I know Vint Cerf, the Father of the Internet, does. He joined InterProphet’s Board of Directors on the strength of the idea alone. Of course, he’s also a visionary and gentleman in every sense of the word. We should all be so gifted.

What’s in the Future for Digital TV?

Attended the panel discussion hosted by the Swiss Science & Technology Office, Wallonia Initiative, last night in San Francisco at their office downtown. On the panel were: Thomas Gieselmann, General Partner, BV Capital; Christina Ku, Consumer Electronics Group, Intel Corp.; Bob O’Donnell, Research Analyst, IDC; Bernard Rappaz, Editor in Chief, Multimedia, Swiss-French TV; and moderator Bruno Giussani, Knight Journalism Fellow at Stanford University.

One of the most intriguing points was that the European panelists completely believed that broadcast TV as we know it is dead – no growth, no future. It’s all Internet and cellular.

Something to think about.

California Connects GenYs with Digital Media

We’ve got some great news for students in California who want to incorporate digital media into their studies. I just heard from Jeff Newman, who kindly reviewed my ACE2004 paper on massive video production and how it can be used to build multimedia community projects.

Jeff says: “As to the impact of such technology, California has recently enacted the Digital Arts Studio Partnership Demonstration Program Act, to make recommendations on a model curriculum and state standards for digital media arts provided to youths aged 13 to 18 years.”

“The inclusion of streaming video would enhance the effectiveness of this statewide effort. It would require the council to convene a meeting of specified entities to review the recommendations made by the consortia associated with each partnership.”

It’s great to see California schools and government taking the lead on such a critical new technology that totally connects with GenYs. Thank you, Jeff.

Virtual Communities

Fred Turner, a professor at Stanford, spoke the other day at SCU on “Counterculture into Cyberculture: How the Whole Earth Catalog Brought Us ‘Virtual Community’”. It was basically a history talk about the WELL and the organizing power of the hippie movement through the “whole earth” commercial powerhouse of the time. I found it curiously amusing – kind of like watching your mom in a “granny dress” or your dad with a beard strumming a guitar.

While I’m not quite the age of the “summer of love” crowd (I think I preferred collecting Breyer horses then), I have watched the evolution of these communities from a technology standpoint, and have seen both their strengths and weaknesses as they grew (and in some cases died). Since history and anthropology are an avocation of mine, and since I’ve been involved in developing and growing relationships using technology, it is a serious topic for me. So I went and listened.

One of the clear-as-a-bell problems stemmed from a willful misunderstanding of what the technology of the time provided and how it could be used. The WELL provided a novel community experience all right, but it was basically too limited to be of great use in building the kind of movement envisioned by the “counterculture” – it was just too early, and easily supplanted by the Internet.

The evolution, technology, and mechanisms that would become the Internet were actually quite separate in design and execution, rose-colored glasses of the counterculture notwithstanding.

I know a lot of folks (even Al Gore) would like to stake a claim to the Internet’s success, or as the syllabus stated “the network technology of the WELL helped translate the ideals of the American counterculture into key resources for understanding the social possibilities of digital networking in the 1990s.” But I’m afraid it just isn’t so – it evolved independently and with funding from some of those guys – the DOD comes to mind – that the counterculture tended to protest against.

I’ve never found the hippie movement to be very progressive in using technology, except for television. It’s understandable, given the paroxysms of the time. And the nostalgia these guys have for that period isn’t so great for women and people of color, either.

But we should get real here: the right had used the Internet far more effectively to convey its message, until Howard Dean went against his own party’s anti-tech bias and proved the Internet could be beneficial to the left.

It took thirty years, a lot of hard work, a ton of research funds, and real tech visionaries like Cerf and Kahn and Berners-Lee to make the Internet the real world wide web.

Not all the cute stores that sold wood stoves, guitars and granny dresses could make one TCP/IP connection or HTTP web page.

Forget Printers and Film – It’s Digital Cameras and Clips

NYTimes had an interesting article by Claudia Deutsch on how Eastman Kodak can survive in the digital world. Very nice comments – they’re right on the money. Wish Kodak would listen, but their management still isn’t known for listening.

However, Kodak and other digital camera manufacturers have great advantages they haven’t even tried to leverage yet. While everyone else talks of film (the old cash cow), printers (they’ll always be beaten out by better players here), and verticals (medical, archiving, old film conversion), the new market will be in something already on every high-end digital camera – video clip capability ready-made for the Internet.

I especially liked Judy Hopelain’s remarks: “Kodak must do more to insert itself into the ways that people use digital photography. Why aren’t they offering something to let tweens and teens use images in instant messaging? Why aren’t they doing more with cellphone cameras?…But Kodak should rethink the decision to pursue printers and printing. What are they going to do that is unique and brand relevant against Hewlett-Packard and the other big boys? They’ll just dilute their brand and stretch their resources.”

According to Time magazine, there are people using this feature for v-logs. It’s a very small market, however, because the tools needed to turn the clips into an entertaining form that fits the parameters of Internet viewing are very difficult and tedious to use correctly, and require considerable expertise – anything less and you get a laughable out-of-synch amateur effort, full of artifacts and lacking the glitz.

I’m so glad I’m with ExecProducer, since we’ve just completed trials with the University of California that took these raw, unpolished clips and turned them instantly, via email and the web, into Hollywood-style high-quality movies complete with the imprimatur of the university (branding), music, and technical excellence, ready for viewing on the Internet. All the director need do is “watch the rushes”. Filmography, metadata (important for enterprise), invitations, content, security… all done. It will all be described in a paper accepted at the SIGCHI Advances in Computer Entertainment conference (ACE2004) in Singapore.

The service uses any digital camera clip(s), corrects their imperfections, and leaves the customer feeling that they bought the right camera to make such cool movies. Important for a manufacturer who needs to get value out of such an expensive feature.

It’s a great time to be doing this – all the signs are right. And, you know, you can’t print out a video clip.