Alex Cannara dropped an interesting paper on my desktop discussing congestion control in grid networks. Its results confirm what I and others have seen over the years: Vint Cerf saw as far back as 1998 that hop-by-hop reliability preserving end-to-end semantics in the routers was the real key to handling this issue. Vint is also a renowned wine expert, and treated William and me to a wonderful tour of fine wines at the Rubicon in San Francisco, where we had a memorable discussion on exactly this issue.
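To make the hop-by-hop point concrete, here's a toy Python sketch, entirely my own illustration rather than anything from the paper or from Vint: each link repairs its own losses locally, so a drop costs one link's retransmission instead of a resend across the whole path, while the endpoints still see ordinary end-to-end delivery and acknowledgment. The loss rate and hop count are invented for the demo.

```python
import random

# Toy illustration (mine, not from the paper): each hop keeps a copy of
# the packet and retransmits locally until the next hop confirms receipt,
# so a drop on link 4 is repaired on link 4 alone, never all the way back
# from the original sender. End-to-end semantics survive: the destination
# still assembles an ordered stream, and the source still gets its
# end-to-end acknowledgment.

LOSS_RATE = 0.2  # per-link loss probability, invented for the demo

def send_over_link():
    """One hop: retry locally until the next hop ACKs receipt."""
    attempts = 0
    while True:
        attempts += 1
        if random.random() > LOSS_RATE:  # made it across this link
            return attempts
        # lost: only THIS hop retransmits; upstream hops stay idle

def hop_by_hop_transfer(num_links):
    """Walk a packet across the path, repairing losses link by link."""
    return sum(send_over_link() for _ in range(num_links))

random.seed(1)
print("total transmissions:", hop_by_hop_transfer(num_links=5))
```

The win is that a loss on the fifth link costs one local retransmission, not a retransmission across all five links from the source.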
From an article entitled "Dotcoms a hot domain" in the SF Chronicle today:
“One of the standouts was Sunnyvale’s Sandisk, the world’s largest maker of flash-memory cards, which are used in digital cameras. During the past year, its revenue doubled to nearly $1.1 billion, and its market value nearly quadrupled.”
That’s a lot of video clips and stills. This is the revenue driver for new PCs and disk drives that’s been picking up in the consumer sector. Gotta put those pics and vids somewhere, right? Or has anyone thought about this yet?
In a walk down memory lane, Craig Partridge mentioned an XCP meeting and Greg Chesson, prompting Alex Cannara to reply: "But we still have suboptimal network design, insofar as we depend on TCP from the '80s and a glacial IETF process, all this while now having complete web servers on a chip inside an RJ45 jack! So maybe his ideas for SiProts were something to consider, even if they weren't right on target?"
Jonathan M. Smith has an interesting idea on how to avoid blackballing in tech paper reviews.
For those not clued in (or fortunate enough to have avoided academic paper-submission follies): to have an academic paper accepted, one must submit it to double-blind review, in which anonymous experts in the field evaluate whether the paper is interesting and appropriate to the conference venue without being dazzled (or tainted) by knowledge of who actually wrote it.
Check out the Phoenix Technologies announcement on their BIOS hardware authentication scheme.
People like Bruce Schneier lecture us all on how hard it is to create a trusted network verification model that holds up under a variety of conditions and needs. And who actually needs this? Very few.
And once again, an interesting item in the postel.org end-to-end group: "An interesting version of TCP was created a few years ago at a large data-storage-system company here. Essentially, the TCP receive window was reduced to Go/No-Go, and startup was modified, so the sending Unix box would blast to its mirror at full wire rate from the get-go. ACKs would have meaningless window values, excepting 0, because sender and receiver had similar processing/buffering capability. Loss produced replacement via repeated ACKs. Being a LAN-based system overall made all these mods workable. But clearly, the engineers involved found normal TCP wanting in the ability to deliver data on high-speed links."
Interesting how legends develop. This project was the "flamethrower" demo, done with a wire-wrap version of SiliconTCP on a DEC PAM card with a NIC wired on (and that's exciting with 100 MHz logic).
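For the curious, here's roughly how I read the Go/No-Go scheme, as a toy Python model. This is my reconstruction from the description above, not the company's actual code; the window size and segment counts are invented.

```python
# Toy model (my reconstruction, not the company's code) of the Go/No-Go
# TCP variant described above: the advertised window is all-or-nothing,
# the sender skips slow start and blasts at wire rate, and repeated ACKs
# for the same sequence number drive replacement of lost segments.

FULL_WINDOW = 64 * 1024  # "go": sender and mirror have matched capacity

class GoNoGoReceiver:
    def __init__(self):
        self.expected = 0       # next in-order segment we need
        self.buffered = set()   # out-of-order segments held for later

    def receive(self, seq):
        self.buffered.add(seq)
        while self.expected in self.buffered:  # slide past filled holes
            self.expected += 1
        # The ACK's window field is meaningless except for 0 (no-go);
        # a repeated ACK value points straight at the missing segment.
        return self.expected, FULL_WINDOW

def blast(num_segments, lost=frozenset()):
    """No slow start: fire every segment once, then repair the holes."""
    rx = GoNoGoReceiver()
    for seq in range(num_segments):
        if seq not in lost:            # simulate loss on the wire
            rx.receive(seq)
    while rx.expected < num_segments:  # duplicate ACKs name the hole
        rx.receive(rx.expected)        # send the replacement segment
    return rx.expected

print(blast(10, lost={3, 7}))  # -> 10: everything delivered after repair
```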
Another interesting item from the postel.org end-to-end crowd: this paper on networked global storage by Beck et al. contains some very interesting ideas on the application of the end-to-end principle. While I don't particularly agree with their implementation proposal, their framing of the problem is quite concise.
An interesting line of discussion passed through my email regarding the future of TCP. In particular, Alex Cannara decided to take on a few of the more "conservative" elements over handling end-to-end flows via interior management of links.
As Alex puts it: "Apparently, a great bias has existed against this sort of design, which is actually very successful in other contexts." Even a very "big old name in Internet Land" liked this type of approach, for the "…reason it [TCP] requires the opposite of backoff is because it doesn't have the visibility to determine which algorithm to choose differently as it navigates the network at any point in time. But if you can do it hop by hop, you can make these rules work in all places and vary the algorithm, knowing you're working on a deterministic small segment instead of the big wide Internet."
Let’s take this further.
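Here's one way to do that: a back-of-the-envelope Python sketch, my own and under invented queue limits and rates, of what per-link (hop-by-hop) flow management looks like. Each hop admits traffic based only on local, directly visible queue state and applies backpressure to its immediate upstream neighbor, which is the "deterministic small segment" in the quote.

```python
# Sketch (mine, invented parameters): hop-by-hop backpressure. Each hop
# decides from purely local state; no end-to-end inference is needed.

QUEUE_LIMIT = 8  # packets a hop buffers before telling upstream to pause

class Hop:
    def __init__(self, drain_rate):
        self.queue = 0
        self.drain_rate = drain_rate  # packets forwarded per tick

    def has_room(self):
        return self.queue < QUEUE_LIMIT  # purely local, visible state

    def tick(self, downstream=None):
        sendable = min(self.queue, self.drain_rate)
        if downstream is not None:
            # Backpressure: push only what the next hop will accept.
            sendable = min(sendable, QUEUE_LIMIT - downstream.queue)
            downstream.queue += sendable
        self.queue -= sendable
        return sendable if downstream is None else 0  # delivered count

# A path with a slow middle link: the bottleneck never overflows, and
# the source never guesses; it just honors hop 0's local signal.
path = [Hop(4), Hop(1), Hop(4)]
delivered = 0
for _ in range(40):
    if path[0].has_room():
        path[0].queue += 1  # source admits traffic only on local "go"
    for i in reversed(range(len(path))):
        nxt = path[i + 1] if i + 1 < len(path) else None
        delivered += path[i].tick(nxt)
print("delivered:", delivered, "queues:", [h.queue for h in path])
```

The point isn't the numbers; it's that no hop has to infer congestion from end-to-end loss, because the signal is simply the neighbor's queue.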
Attended the panel discussion hosted by the Swiss Science & Technology Office, Wallonia Initiative, last night in San Francisco at their office downtown. On the panel were: Thomas Gieselmann, General Partner, BV Capital; Christina Ku, Consumer Electronics Group, Intel Corp.; Bob O'Donnell, Research Analyst, IDC; Bernard Rappaz, Editor in Chief, Multimedia, at Swiss-French TV; and moderator Bruno Giussani, Knight Journalism Fellow at Stanford University.
One of the most intriguing takeaways was that the Europeans on the panel completely believe that broadcast TV as we know it is dead: no growth, no future. It's all Internet and cellular.
Something to think about.
We've got some great news for students in California who want to incorporate digital media into their studies.
I just heard from Jeff Newman, who kindly reviewed my ACE2004 paper on massive video production and how it can be used to build multimedia community projects.
Jeff says: “As to the impact of such technology, California has recently enacted the Digital Arts Studio Partnership Demonstration Program Act, to make recommendations on a model curriculum and state standards for digital media arts provided to youths aged 13 to 18 years.”
"The inclusion of streaming video would enhance the effectiveness of this statewide effort. It would require the council to convene a meeting of specified entities to review what is recommended by the consortia associated with each partnership."
It's great to see California schools and government taking the lead on such a critical new technology, one that totally connects with Gen Y. Thank you, Jeff.