The Power of TCP is in its Completeness

An interesting line of discussion passed through my email regarding the future of TCP. In particular, Alex Cannara decided to take on a few of the more “conservative” elements over handling end-to-end flows through interior management of links.

As Alex puts it: “Apparently, a great bias has existed against this sort of design, which is actually very successful in other contexts”. Even a very “big old name in Internet Land” liked this type of approach, for the “…reason it [TCP] requires the opposite of backoff is because it doesn’t have the visibility to determine which algorithm to choose differently as it navigates the network at any point in time. But if you can do it hop by hop you can make these rules work in all places and vary the algorithm knowing you’re working on a deterministic small segment instead of the big wide Internet.”

Let’s take this further.

In math we deal with continuous functions differently than discontinuous ones, and TCP algorithms know this – they have different strategies for each – but when you get a mixture across the network you’re limited to statistics. If we limit the inhomogeneity, then the endpoints of TCP can optimize the remaining result. In this case, the gross aspects limiting performance no longer dominate the equation.
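To make the contrast concrete, here is a toy sketch in Python – not any real TCP stack, and every constant is made up – of the two kinds of signals an endpoint has to live with: a continuous delay drift it can track smoothly, and a discontinuous loss event that forces a blunt multiplicative backoff.

```python
# Toy sketch (not any real TCP stack): one simplified window update that
# reacts to a continuous signal (delay drift, handled smoothly, in the
# spirit of delay-based schemes like Vegas) and a discontinuous one
# (packet loss, handled with a blunt multiplicative backoff, as in Reno).
# All constants are illustrative.

def update_cwnd(cwnd, rtt, base_rtt, lost, alpha=1.0, beta=0.5):
    """Return the next congestion window, in segments."""
    if lost:
        # Discontinuous event: halve the window (classic AIMD backoff).
        return max(1.0, cwnd * beta)
    queueing = rtt - base_rtt          # continuous signal: queueing delay
    if queueing < 0.010:               # little queueing seen: grow additively
        return cwnd + alpha
    return max(1.0, cwnd - alpha)      # delay building up: ease off gently

# The sender only ever sees the aggregate of every hop's behavior; it cannot
# tell which link produced the delay or the loss, which is why its rules are
# statistical rather than deterministic.
cwnd = 10.0
for rtt, lost in [(0.100, False), (0.104, False), (0.140, False), (0.150, True)]:
    cwnd = update_cwnd(cwnd, rtt, base_rtt=0.100, lost=lost)
    print(f"rtt={rtt:.3f}s lost={lost} -> cwnd={cwnd:.1f}")
```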

So you can’t overtransmit or overcommit a link if you’re disciplined – you only fill in your piece of the puzzle, the idealized link, from the perspective of what you know.

Has the hobgoblin of statistics ruined any ability to do a deterministic job (with metrics and cost values) of improving loss ratios and understanding what is really happening at any point along the way? If so, that would in turn validate, even prove, a statistical model. But think of all the projects that wouldn’t fly.

At InterProphet we proposed that every hop get the best possible effect – basically the same level of end-to-end principle in each segment, instead of viewing all hops as one end-to-end segment – by deploying low-latency TCP processing as a bucket brigade throughout the infrastructure. The pushback from the manufacturers was cost, but we met all cost constraints with our dataflow design (which works, by the way, and is proven).

The power of this approach is amazing. Instead of simplistically thinking of end-to-end as just two cans and a string, we can apply end-to-end completeness on every segment.
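To see what that buys, here is a back-of-the-envelope sketch under idealized assumptions – independent per-hop loss, perfect local retransmission, no timer or buffering costs – comparing the expected number of hop transmissions when losses are recovered end to end versus segment by segment.

```python
# Back-of-the-envelope sketch under idealized assumptions: independent
# per-hop loss, perfect local retransmission, no timer or buffering costs.

def end_to_end_cost(n_hops: int, p_hop: float) -> float:
    """Expected hop transmissions when a loss anywhere forces the source
    to resend over the whole path: (1 - p^n) / ((1 - p) * p^n)."""
    return (1.0 - p_hop ** n_hops) / ((1.0 - p_hop) * p_hop ** n_hops)

def hop_by_hop_cost(n_hops: int, p_hop: float) -> float:
    """Expected hop transmissions when each hop retransmits locally until
    it succeeds: n / p (geometric retries per hop)."""
    return n_hops / p_hop

n = 16  # illustrative path length
for p in (0.99, 0.95, 0.90):
    print(f"p_hop={p:.2f}: end-to-end {end_to_end_cost(n, p):6.1f} "
          f"vs hop-by-hop {hop_by_hop_cost(n, p):5.1f} hop-transmissions")
```

The gap widens as the path gets longer or lossier, which is the arithmetic behind treating each segment as its own small end-to-end problem.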

Very few people have understood this — looks like Alex does. And I know Vint Cerf, the Father of the Internet, does. He joined InterProphet’s Board of Directors on the strength of the idea alone. Of course, he’s also a visionary and gentleman in every sense of the word. We should all be so gifted.
