Sometimes a Legend

And once again, an interesting item in the postel.org end-to-end group – “An interesting version of TCP was created a few years ago at a large data-storage-system company here — essentially, the TCP receive window was reduced to Go/No-Go, and startup was modified, so the sending Unix box would blast to its mirror at full wire rate from the get go. ACKs would have meaningless Window values, excepting 0, because sender and receiver had similar processing/buffering capability. Loss produced replacement, via repeated ACKs. Being LAN-based system overall made all these mods workable. But clearly, the engineers involved found normal TCP wanting in the ability to deliver data on high-speed links.”

Interesting how legends develop. This project was the “flamethrower” demo, done with a wirewrap version of SiliconTCP on a DEC PAM card with a NIC wired on (and that’s exciting with 100 MHz logic).

We demo’d this to Microsoft, venture firms, and lots of other companies back in the summer of 1998. One Microsoft exec (Peter Ford) noted that we were so overloading the standard NICs that an “etherlock” condition was likely to occur. Etherlock, for those who don’t know, occurs when all of the bandwidth is consumed and nothing else can communicate because there is effectively no idle time. And yes, we saw this occur.

One of the more interesting things we found is that many “standard” NICs were not standards-compliant. I still have the wirewrap on my wall alongside a production board.
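
For the curious, here is a minimal sketch of the go/no-go idea from the quote above – my own reconstruction for illustration, not the actual SiliconTCP logic, and all the names are hypothetical:

    # A toy model of a go/no-go receive window: the advertised window
    # collapses to two states -- 0 ("stop") or anything nonzero ("go") --
    # and loss recovery is duplicate-ACK retransmission. Hypothetical
    # names; not SiliconTCP's actual implementation.

    GO = 0xFFFF   # any nonzero value; the sender ignores the magnitude
    STOP = 0      # the only window value that matters

    def sender_may_blast(advertised_window):
        """Go/No-Go: transmit at full wire rate unless the window is 0."""
        return advertised_window != STOP

    def receiver_ack(next_expected_seq, buffer_full):
        """The ACK's window value is meaningless except when it is 0."""
        return next_expected_seq, (STOP if buffer_full else GO)

    def retransmit_point(acks):
        """Loss produces replacement: three ACKs repeating the same
        sequence number mean the following segment was lost."""
        for seq in set(acks):
            if acks.count(seq) >= 3:
                return seq
        return None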

The Power of TCP is in its Completeness

An interesting line of discussion passed through my email regarding the future of TCP. In particular, Alex Cannara decided to take on a few of the more “conservative” elements on dealing with end-to-end flows by interior management of links.

As Alex puts it: “Apparently, a great bias has existed against this sort of design, which is actually very successful in other contexts”. Even a very “big old name in Internet Land” liked this type of approach, for the “…reason it [TCP] requires the opposite of backoff is because it doesn’t have the visibility to determine which algorithm to choose differently as it navigates the network at any point in time. But if you can do it hop by hop you can make these rules work in all places and vary the algorithm knowing you’re working on a deterministic small segment instead of the big wide Internet.”

Let’s take this further.

In math we deal with continuous functions differently than discontinuous ones, and TCP algorithms reflect this – they have different strategies for each case – but when you get a mixture across the network you’re limited to statistics. If we limit the inhomogeneity, then the endpoints of TCP can optimize the remaining result. In this case, the gross aspects limiting performance no longer dominate the equation.

So you can’t overtransmit or overcommit a link if you’re disciplined – you only fill in your piece of the puzzle, the idealized link, from the perspective of what you know.

Has the hobgoblin of statistics ruined any ability to do a deterministic job (with metrics and cost value) of improving loss ratios and understanding what is really happening at any point along the way? If so, this would in turn validate / prove a statistical model. But think of all the projects that wouldn’t fly.

At InterProphet we proposed that for every hop we get the best possible effect – basically the same level of end-to-end principle in each segment, instead of viewing all hops as one end-to-end segment – by deploying low-latency TCP processing as a bucket brigade throughout the infrastructure. Now, the pushback from the manufacturers was cost, but we met all cost constraints with our dataflow design (which works, by the way, and is proven).

The power of this approach is amazing. Instead of simplistically thinking of end-to-end as just two cans and a string, we can apply end-to-end completeness on every segment.
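
To make the contrast concrete, here is a small simulation sketch – my own illustration with hypothetical loss rates, not InterProphet’s actual dataflow design – comparing whole-path recovery against per-segment (bucket brigade) recovery:

    # Compares total link transmissions needed to move one packet across a
    # lossy multi-hop path: whole-path (classic end-to-end) recovery versus
    # per-hop (bucket brigade) recovery. Loss rates are hypothetical.
    import random

    def transfer(loss_rate):
        """One attempt to move the packet across a single link."""
        return random.random() > loss_rate

    def end_to_end(hops):
        """Whole-path recovery: a loss anywhere forces the source to
        resend, so every link on the path gets traversed again."""
        sends = 0
        while True:
            ok = True
            for p in hops:
                sends += 1
                if not transfer(p):
                    ok = False
                    break   # packet died here; the source must retry
            if ok:
                return sends

    def bucket_brigade(hops):
        """Per-hop recovery: each segment retries locally until it succeeds."""
        sends = 0
        for p in hops:
            while True:
                sends += 1
                if transfer(p):
                    break
        return sends

    if __name__ == "__main__":
        random.seed(1)
        path = [0.2] * 10   # ten hops, 20% loss each (hypothetical)
        trials = 10_000
        e2e = sum(end_to_end(path) for _ in range(trials)) / trials
        hop = sum(bucket_brigade(path) for _ in range(trials)) / trials
        print(f"avg link transmissions, end-to-end:     {e2e:.1f}")
        print(f"avg link transmissions, bucket brigade: {hop:.1f}")

The per-hop version repairs a loss on the one segment where it happened; the whole-path version pays for every hop again on each retry, and the gap widens as paths get longer and lossier.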

Very few people have understood this — looks like Alex does. And I know Vint Cerf, the Father of the Internet, does. He joined InterProphet’s Board of Directors on the strength of the idea alone. Of course, he’s also a visionary and gentleman in every sense of the word. We should all be so gifted.

What’s in the Future for Digital TV?

Attended the panel discussion hosted by the Swiss Science & Technology Office, Wallonia Initiative, last night in San Francisco at their office downtown. On the panel were: Thomas Gieselmann, General Partner, BV Capital; Christina Ku, Consumer Electronics Group, Intel Corp.; Bob O’Donnell, Research Analyst, IDC; Bernard Rappaz, Editor in Chief, Multimedia, at Swiss-French TV; and moderator Bruno Giussani, Knight Journalism Fellow at Stanford University.

One of the most intriguing moments came when the European panelists made clear they completely believe that broadcast TV as we know it is dead – no growth, no future. It’s all Internet and cellular.

Something to think about.

California Connects GenYs with Digital Media

We’ve got some great news for students in California who want to incorporate digital media into their studies. I just heard from Jeff Newman, who kindly reviewed my ACE2004 paper on massive video production and how it can be used to build multimedia community projects.

Jeff says: “As to the impact of such technology, California has recently enacted the Digital Arts Studio Partnership Demonstration Program Act, to make recommendations on a model curriculum and state standards for digital media arts provided to youths aged 13 to 18 years.”

“The inclusion of streaming video would enhance the effectiveness of this statewide effort. It would require the council to convene a meeting of specified entities to review [what is] recommended by consortia associated with each partnership.”

It’s great to see California schools and government taking the lead on such a critical new technology that totally connects with GenYs. Thank you, Jeff.

Virtual Communities

Fred Turner, Professor at Stanford, spoke the other day at SCU on “Counterculture into Cyberculture: How the Whole Earth Catalog Brought Us ‘Virtual Community’”. Basically a history talk on the WELL and the organizing power of the hippie movement through the “whole earth” commercial powerhouse of the time. I found it curiously amusing – kind of like watching your mom in a “Granny dress” or your dad with a beard strumming a guitar.

While I’m not quite the age of the “summer of love” crowd (I think I preferred collecting Breyer horses then), I have watched the evolution of these communities from a technology standpoint, and have seen both their strengths and weaknesses as they grew (and in some cases died). Since history and anthropology are avocations of mine, and since I’ve been involved in developing and growing relationships using technology, it is a serious topic. So I went and listened.

One of the clear-as-a-bell problems stemmed from the willful misunderstanding of what the technology of the time provided and how it could be used. The WELL provided a novel community experience all right, but it was basically too limited to be of great use in building the kind of movement envisioned by the “counterculture” – it was just too early, and easily supplanted by the Internet.

The evolution, technology, and mechanisms which would become the Internet were actually quite separate in design and execution, the rose-colored glasses of the counterculture notwithstanding.

I know a lot of folks (even Al Gore) would like to stake a claim to the Internet’s success, or as the syllabus stated “the network technology of the WELL helped translate the ideals of the American counterculture into key resources for understanding the social possibilities of digital networking in the 1990s.” But I’m afraid it just isn’t so – it evolved independently and with funding from some of those guys – the DOD comes to mind – that the counterculture tended to protest against.

I’ve never found the hippie movement to be very progressive in using technology, except for television. It’s understandable, given the paroxysms of the time. And the nostalgia these guys have for the period glosses over how it wasn’t so great for women and people of color.

But we should get real here: the right had used the Internet far more effectively to convey its message, until Howard Dean went against his own party’s anti-tech bias and proved the Internet could benefit the left.

It took thirty years, a lot of hard work, a ton of research funds, and real tech visionaries like Cerf, Kahn, and Berners-Lee to make the Internet the real world wide web.

Not all the cute stores that sold wood stoves, guitars and granny dresses could make one TCP/IP connection or HTTP web page.

Forget Printers and Film – It’s Digital Cameras and Clips

The NY Times had an interesting article by Claudia Deutsch on how Eastman Kodak can survive in the digital world. Very nice comments – they’re right on the money. Wish Kodak would listen, but their management still isn’t known for listening.

However, Kodak and the other digital camera manufacturers have great advantages they haven’t even tried to leverage yet. While everyone else talks of film (the old cash cow), printers (they’ll always be beaten out by better players here), and verticals (medical, archiving, old film conversion), the new market will be in something already on every high-end digital camera – video clip capability ready-made for the Internet.

I especially liked Judy Hopelain’s remarks: “Kodak must do more to insert itself into the ways that people use digital photography. Why aren’t they offering something to let tweens and teens use images in instant messaging? Why aren’t they doing more with cellphone cameras?…But Kodak should rethink the decision to pursue printers and printing. What are they going to do that is unique and brand relevant against Hewlett-Packard and the other big boys? They’ll just dilute their brand and stretch their resources.”

According to Time magazine, there are people already using this feature in v-logs. It’s a very small market, however, because the tools to turn raw clips into an entertaining form that fits the parameters of Internet viewing are very difficult and tedious to use correctly, and require considerable expertise – anything less and you get a laughable out-of-synch amateur effort, full of artifacts and lacking the glitz.

I’m so glad I’m with ExecProducer, since we’ve just completed trials with the University of California that took these raw, unpolished clips and turned them instantly, via email/web, into Hollywood-style high-quality movies complete with the imprimatur of the university (branding), music, and technical excellence, ready for viewing on the Internet. All the director need do is “watch the rushes”. Filmography, metadata (important for enterprise), invitations, content, security,… all done. It will all be described in a paper accepted by the SIGCHI Advances in Computer Entertainment conference (ACE2004) in Singapore.

The service uses any digital camera clip(s), corrects their imperfections, and leaves the customer feeling that they bought the right camera to make such cool movies. Important for a manufacturer who needs to get value out of such an expensive feature.

It’s a great time to be doing this – all the signs are right. And, you know, you can’t print out a video clip.

Speed Stunts

Of course, never assume that what the PR office of a university releases makes any real sense, as this SLAC press release demonstrates.

Looks like a commonplace database search trick applied to flow control: instead of exponential backoff, throttle by probing for the likely end-to-end flow rate at any given time, converging faster than backoff would. The question is: is this a good enough “good enough” strategy?
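
Here is my reading of the trick as a sketch – a binary search over rates, which converges much faster than halving and slowly regrowing. The function names and bounds are hypothetical illustrations, not SLAC’s published code:

    # Binary-search the achievable rate between a known-good floor and a
    # known-bad ceiling -- the classic database-search trick, applied to
    # flow control instead of exponential backoff. (My reconstruction.)

    def probe_rate(try_rate, lo=0.0, hi=10e9, tolerance=10e6):
        """try_rate(r) -> True if the path sustained rate r without loss.
        Returns an estimated sustainable rate in bits/sec."""
        while hi - lo > tolerance:
            mid = (lo + hi) / 2
            if try_rate(mid):
                lo = mid      # mid worked; the floor rises
            else:
                hi = mid      # mid lost packets; the ceiling falls
        return lo

    # Hypothetical path whose true capacity is 5.4 Gbps: about ten probes
    # pin the rate to within 10 Mbps.
    estimate = probe_rate(lambda r: r <= 5.4e9)
    print(f"estimated sustainable rate: {estimate / 1e9:.2f} Gbps")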

Jim Gray, once again, was willing to provide me a bit of perspective on this.

Jim told me, “That stunt does not allow packets to get lost. There is some real engineering to make transfers at that speed actually work. But that is proceeding in parallel with the stunts.” That makes me feel more confident about what I was reading. Jim’s read is sensible and balanced, unlike the PR guys in Stanford’s licensing office.

I took a completely different approach with ballistic protocol processing, optimizing the “best” transfer rate at key portions of the network at that real-time instant – it’s a structural approach, really. I was uncomfortable setting an arbitrary good-enough limit given the ever-changing nature of the Internet at any point in time. I found that what appeared to be “good enough” was hard to prove good enough.

But of course, I trained in plasma physics, and every attempt in that area to bell the beast by setting arbitrary limits on containment has proven unsuccessful. So 40 years of research there has still left us with “good enough isn’t”.

So who do you think has the good enough solution? CalTech? SLAC as written up in this breathy news item? Or are they running after rainbows?

How Fast Can You Go?

I’ve been following the CalTech and CERN groups responsible for achieving what they claim is the “latest Land Speed Record” of 5.4 Gbps, with a claimed throughput of 6.25 Gbps averaged over a 10-minute period, according to the announcement to the Internet2 newslist on February 24th.

Of course, what does this mean? They claim that the “best achieved throughput with Linux is ~5.8Gbps in point to point and 7.2Gbps in single to many configuration”. They claim they’re melting down the “hardware” at 6.6 Gbps. Is this true?
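
Some back-of-the-envelope arithmetic puts the claims in perspective (the bus comparison is my own assumption about era hardware, not anything from the announcement):

    # What do the claimed numbers actually amount to?

    rate_bps = 6.25e9          # claimed throughput, bits per second
    duration_s = 10 * 60       # the 10-minute averaging period

    total_bytes = rate_bps * duration_s / 8
    print(f"{total_bytes / 1e9:.0f} GB moved in 10 minutes")   # ~469 GB

    # At the 6.6 Gbps "meltdown" point, each second moves 825 MB of
    # payload -- assuming a PCI-X host bus (~1 GB/s), there is very
    # little headroom left, which would explain the melting hardware.
    print(f"{6.6e9 / 8 / 1e6:.0f} MB/s at 6.6 Gbps")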

FastTCP and SSC – A Short Meditation

While we’re all oohing and ahhing over CalTech’s FastTCP bulk transfers and record busting using their new TCP congestion control – an interesting paper (finally) by Jin/Wei/Low – contrast this with friendly rival Stanford’s high-speed TCP protocol, which changes the fairness (I find it interesting, and it provides some new ideas). Is either likely to impact anyone’s use of the Internet in the next decade, any more than studying cold fusion?
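
For flavor, here is the character of the FastTCP window update as I read the Jin/Wei/Low paper – delay-based, backing off smoothly as queueing delay grows rather than waiting for loss. The parameter values below are illustrative, not the paper’s tuned settings:

    # FastTCP's periodic window update, paraphrased from Jin/Wei/Low:
    #   w <- min(2w, (1-gamma)*w + gamma*(baseRTT/RTT * w + alpha))
    # The window grows while queueing delay is low and eases off as RTT
    # rises above the propagation floor -- no packet loss required.

    def fast_window_update(w, base_rtt, rtt, alpha=100.0, gamma=0.5):
        """One congestion-window update (units: packets)."""
        target = (base_rtt / rtt) * w + alpha   # equilibrium queues ~alpha packets
        return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

    # Hypothetical path: 10 ms propagation delay, queueing holds RTT at
    # 12 ms (in a real network the RTT would itself respond to w).
    # Fixed-point algebra: w = alpha * rtt / (rtt - base_rtt) = 600 packets.
    w = 100.0
    for _ in range(50):
        w = fast_window_update(w, base_rtt=0.010, rtt=0.012)
    print(f"window after 50 updates: {w:.0f} packets (equilibrium: 600)")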

I’m struck by how all this “record busting” may be a mere sideshow in the scope of real Internet usage, especially given Microsoft Research’s own Jim Gray’s economic arguments against bulk transfers, made at Stanford a few months back.

Jim said that it is cheaper to send a disk drive via FedEx overnight than anything these contests could provide of benefit to ordinary users. Could the CalTech and Stanford work be too early, given that hard reality? I leave it to CalTech and Stanford to battle out which is better a decade down the line. But what about what we can study now?
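
The arithmetic is worth running (the drive capacity and timing below are my round-number assumptions of the era, not Jim’s exact figures):

    # Jim Gray's "sneakernet" argument, rerun with round 2004-era numbers.

    disk_gb = 200            # one consumer ATA drive (assumed capacity)
    fedex_hours = 24         # overnight delivery

    effective_gbps = disk_gb * 8 / (fedex_hours * 3600)
    print(f"one FedEx'd drive ~ {effective_gbps:.3f} Gbps sustained")  # ~0.019

    # A single drive loses to a 5.4 Gbps link on raw rate -- but the
    # argument is cost per byte, not peak rate: the shipping bill stays
    # flat no matter how many drives are in the box, while wide-area
    # bandwidth at multi-Gbps rates was anything but a commodity.
    box_gbps = 20 * effective_gbps   # a box of 20 drives, 4 TB total
    print(f"box of 20 drives ~ {box_gbps:.2f} Gbps sustained")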

Maybe dealing with the long-latency network issue that Beck et al. find makes storage jitter intractable in the first place is the real challenge of the decade.

Recently a few database experts were suggesting that the end-to-end principle might be applied to databases. Beck, Clark, Jacobson, … don’t address this. The question remains open: are database commits end-to-end – do they satisfy the end-to-end principle – even in the simplest case (akin to a chaotic strange attractor in physics)?

Another question that came up was: when do latency and jitter combine in a chaotic way such that reliability is injured in database transactions?

Doyle at CalTech speaks of fragility vs complexity, and uses a combination of control theory, dynamical systems, algebraic geometry and operator theory to connect problem fragility to computational complexity, such that “dual complexity implies primal fragility”, in an NP vs coNP way.

It could be that robust-yet-fragile (RYF) is effective in defining what’s necessary to prove a viable global storage system. EtherSAN approaches the problem by idealizing the simplest end-to-end mechanism, TCP, with fundamental remedies – not increased complexity. RYF would indicate that this radically improves the system by removing primal fragility.

All this seems very similar to the old sustained-fusion-power bursts we had in physics a decade ago. They kept everyone busy until the SSC debacle killed everything in the field. Plasma research is only now beginning to recover.

Let’s go back to fundamentals with Clark et al. on end-to-end, simply considering Beck’s well-done arguments for small storage transactions, cleaving to those goals only and not creating new ones. Reexamining definitions, and understanding them better, à la Bohr and mass, but not changing them.