Of course, never assume that what a university's PR office releases makes any real sense, as this SLAC press release demonstrates.
It looks like a commonplace search trick applied to flow control: rather than backing off exponentially, the sender probes for the likely end-to-end flow rate at any given moment and throttles to match. The question is, is this a good enough "good enough" strategy?
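A toy sketch of that probing idea, assuming a simple model: ramp the rate multiplicatively, and on overshoot bisect back toward the last rate that worked instead of collapsing to a fraction of it. The `BOTTLENECK` constant and all parameters here are hypothetical; this is an illustration of the general technique, not SLAC's actual code.

```python
# Illustrative only: probe toward an unknown end-to-end capacity
# instead of halving on every loss (classic exponential backoff).
BOTTLENECK = 100.0  # Mbit/s -- the unknown capacity in this toy model

def probe_rate(start=1.0, growth=2.0, rounds=20):
    """Estimate the bottleneck rate by probing.

    On a successful probe (rate at or below capacity), grow the rate
    multiplicatively; on an overshoot, bisect back toward the last
    good rate and probe more cautiously, rather than backing off
    exponentially.
    """
    rate, best = start, start
    for _ in range(rounds):
        if rate <= BOTTLENECK:              # probe succeeded
            best = rate
            rate *= growth                  # keep probing upward
        else:                               # overshot: back off gently
            rate = (rate + best) / 2.0      # bisect toward the estimate
            growth = 1.0 + (growth - 1.0) / 2.0  # shrink the probe step
    return best

print(probe_rate())
```

After a handful of rounds the estimate converges to within a fraction of the (toy) bottleneck rate, which is the "good enough" the press release seems to be celebrating.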
Jim Gray, once again, was willing to give me a bit of perspective on this.
Jim told me, "That stunt does not allow packets to get lost. There is some real engineering to make transfers at that speed actually work. But that is proceeding in parallel with the stunts." That makes me feel more confident about what I was reading. Jim's read is sensible and balanced, unlike that of the PR guys in Stanford's licensing office.
I took a completely different approach with ballistic protocol processing: optimizing, at key points in the network, the "best" transfer rate at that real-time instant. It's a structural approach, really. I was uncomfortable setting an arbitrary "good enough" limit given the ever-changing nature of the Internet at any point in time; what appeared to be "good enough" was hard to prove good enough.
But of course, I trained in plasma physics, and every attempt in that area to bell the beast by setting arbitrary limits on containment has proven unsuccessful. So 40 years of research there has still left us with “good enough isn’t”.
So who do you think has the "good enough" solution? CalTech? SLAC, as written up in this breathy news item? Or are they running after rainbows?