Speed Stunts

Of course, never assume that what a university's PR office releases makes any real sense, as this SLAC press release demonstrates.

Looks like a commonplace database search trick applied to flow control: rather than backing off exponentially, you converge faster by probing for the likely end-to-end flow rate at any given time. The question is, is this a good enough “good enough” strategy?
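
To make that concrete, here is a toy sketch of what I take the trick to be – purely my reading of the press release, not SLAC's actual code. The idea, as I read it, is to search (binary-search style) for the highest rate the path will currently carry, instead of halving on every loss the way exponential backoff does.

    # Toy illustration only: searching for a sustainable send rate instead of
    # backing off exponentially. The probe function is a stand-in, not SLAC's code.
    def probe_for_rate(path_ok, low_bps, high_bps, tolerance_bps=1e6):
        """Narrow in on the highest rate that path_ok reports as sustainable.

        path_ok(rate) stands in for a measurement probe that returns True if
        the path carried 'rate' bits/sec without significant loss.
        """
        while high_bps - low_bps > tolerance_bps:
            mid = (low_bps + high_bps) / 2
            if path_ok(mid):
                low_bps = mid       # the path kept up; try higher
            else:
                high_bps = mid      # loss appeared; come down, but not all the way
        return low_bps

    # Example: pretend the path tops out around 2.3 Gbps
    estimate = probe_for_rate(lambda r: r < 2.3e9, low_bps=0.0, high_bps=10e9)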

Jim Gray, once again, was willing to provide me a bit of perspective on this.

Jim told me, “That stunt does not allow packets to get lost. There is some real engineering to make transfers at that speed actually work. But that is proceeding in parallel with the stunts.” That makes me feel more confident about what I was reading. Jim’s read is sensible and balanced, unlike that of the PR guys in Stanford’s licensing office.

I took a completely different approach with ballistic protocol processing: optimizing, at key portions of the network, for the “best” transfer rate at that real-time (RT) instant – it’s a structural approach, really. I was uncomfortable setting an arbitrary “good enough” limit given the ever-changing nature of the Internet at any point in time. I found that what appeared to be “good enough” was hard to prove good enough.

But of course, I trained in plasma physics, and every attempt in that area to bell the beast by setting arbitrary limits on containment has proven unsuccessful. So 40 years of research there has still left us with “good enough isn’t”.

So who do you think has the good enough solution? CalTech? SLAC as written up in this breathy news item? Or are they running after rainbows?

How Fast Can You Go?

I’ve been following the CalTech and CERN groups responsible for achieving what they claim is the “latest Land Speed Record” of 5.4 Gbps, with a claimed throughput of 6.25 Gbps averaged over a 10-minute period, according to the announcement to the Internet2 news list on February 24th.

Of course, what does this mean? They claim that “best achieved throughput with Linux is ~5.8Gbps in point to point and 7.2Gbps in single to many configuration”. They claim they’re melting down the “hardware” at 6.6 Gbps. Is this true?
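
For a sense of scale, here is the back-of-the-envelope arithmetic. The ~180 ms round-trip time is my own assumption for a CalTech–CERN path; it is not a figure from the announcement.

    # Rough numbers behind the claimed record. The RTT is assumed, not reported.
    rate_bps = 5.4e9         # the claimed 5.4 Gbps
    duration_s = 10 * 60     # averaged over 10 minutes
    rtt_s = 0.180            # assumed round-trip time, CalTech to CERN

    data_moved_gb = rate_bps * duration_s / 8 / 1e9
    window_mb = rate_bps * rtt_s / 8 / 1e6      # bandwidth-delay product

    print(f"data moved in 10 minutes: ~{data_moved_gb:.0f} GB")
    print(f"window needed to keep the pipe full: ~{window_mb:.0f} MB")

That works out to roughly 400 GB moved and a window well over 100 MB just to keep the pipe full, which is why a single loss event is so punishing at these speeds.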

FastTCP and SSC – A Short Meditation

While we’re all oohing and ahhing over CalTech’s FastTCP bulk transfers and record busting using their new TCP congestion control – an interesting paper (finally) by Jin/Wei/Low – contrast this with friendly rival Stanford’s high-speed TCP protocol, which changes the fairness model (I find it interesting, and it provides some new ideas). Is either likely to impact anyone’s use of the Internet in the next decade, any more than studying cold fusion?
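
For those who haven’t waded through the paper, the heart of FastTCP is a delay-based window update rather than a loss-based one. Here is my paraphrase of the update rule from the Jin/Wei/Low paper; the parameter names follow the paper, but the packaging is mine and the defaults are only illustrative.

    # Paraphrase of the FastTCP window update as I read Jin/Wei/Low.
    # base_rtt is the minimum RTT observed; alpha is the number of packets the
    # flow tries to keep queued in the path; gamma in (0, 1] smooths the update.
    def fast_window_update(w, rtt, base_rtt, alpha=200.0, gamma=0.5):
        """One periodic update of the congestion window, in packets."""
        target = (base_rtt / rtt) * w + alpha
        return min(2 * w, (1 - gamma) * w + gamma * target)

    # With no queueing (rtt == base_rtt) the window grows by gamma * alpha per
    # update; as queueing delay builds, base_rtt/rtt shrinks and growth levels off.
    w = 100.0
    for _ in range(5):
        w = fast_window_update(w, rtt=0.181, base_rtt=0.180)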

I’m struck by how all this “record busting” may be a mere sideshow in the scope of real Internet usage, especially given the economic arguments against bulk transfers that Microsoft Research’s own Jim Gray made at Stanford a few months back.

Jim said that sending a disk drive via FedEx overnight is cheaper than any benefit these contests could provide to ordinary users. Could the CalTech and Stanford work be too early, given that hard reality? I leave it to CalTech and Stanford to battle out which is better a decade down the line. But what about what we can study now?
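
Jim’s arithmetic is easy to redo yourself. The numbers below are my own assumptions – a roughly 300 GB commodity drive and a 24-hour overnight delivery – not his exact figures.

    # Back-of-the-envelope sneakernet arithmetic; drive size and shipping time
    # are assumptions, not Jim Gray's exact figures.
    drive_gb = 300          # assumed capacity of one commodity drive
    ship_hours = 24         # FedEx overnight, door to door
    drives_per_box = 10     # a box of drives ships for not much more

    one_drive_mbps = drive_gb * 8e3 / (ship_hours * 3600)
    box_mbps = one_drive_mbps * drives_per_box

    print(f"one drive by FedEx: ~{one_drive_mbps:.0f} Mbps sustained")
    print(f"a box of {drives_per_box} drives: ~{box_mbps:.0f} Mbps sustained")

As I understood Jim’s point, it is the cost per byte moved, not the peak rate, that decides this for ordinary users – and the truck’s “bandwidth” scales with how many drives you pack.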

Maybe the real challenge of the decade is dealing with the long-latency network issue that Beck et al. find makes storage jitter intractable in the first place.

Recently a few database experts were suggesting that the end-to-end principle might be applied to databases. Beck, Clark, Jacobson, … don’t address this. The question “Are database commits end-to-end – do they satisfy the end-to-end principle?” remains open, even in the simplest case (which is akin to a chaotic strange attractor in physics).

Another question that came up was “When do latency and jitter combine in a chaotic way such that reliability is injured in database transactions?”

Doyle at CalTech speaks of fragility vs complexity, and uses a combination of control theory, dynamical systems, algebraic geometry and operator theory to connect problem fragility to computational complexity, such that “dual complexity implies primal fragility”, in an NP vs coNP way.

It could be that “robust yet fragile” (RYF) is effective in defining what’s necessary to prove a viable global storage system. EtherSAN approaches the problem by idealizing the simplest end-to-end mechanism, TCP, with fundamental remedies – not increased complexity. RYF would suggest that this radically improves things by removing primal fragility.

All this seems very similar to the old sustained-fusion power bursts we had in physics a decade ago. They kept everyone busy until the SSC debacle killed everything in the field. Plasma research is only now beginning to recover.

Let’s go back to fundamentals with Clark et al. on end-to-end, and simply consider Beck’s well-done arguments for small transactions per storage, cleaving to those goals only and not creating new ones. Reexamine the definitions and understand them better, à la Bohr and mass, but don’t change them.

Interplanetary TCP

Google put out a lunar job listing, for the person who really needs to get away from people.

So I went and asked Vint Cerf, “Perhaps this is the first real use of interplanetary TCP?” He laughed. I think you will too.

OK, I like Google. Always have, I suppose, because it’s minimalist. And I like the logo – it’s kind of simple and childish, but it’s very Stanford. Yes, I know, I went to Berkeley, but my dad and brothers went to Stanford, so even if there’s a rivalry, it’s a friendly one. Besides, it’s usually Berkeley that has the axe to grind – I’ll take a bear over a tree any day.