Hints of an Apple-Disney Merger – “Who’s the Big Dog?”

Did anyone notice the announcement today from Disney that they would soon be giving away their broadcast content on the Internet? Some might think this means that the Disney deal to sell “Lost” and other TV show episodes on iTunes is a dead duck – after all, who would buy these shows from Apple when you can download them for free?

But are things really as they appear? One Silicon Valley dealmaker sees the signs of a bigger deal pending. “It’s all about ‘Who’s the Big Dog’. The iTunes model breaks a lot of things for distribution. When the content biz takes it seriously, they won’t do the $1.99 per video pricing model anymore. They can’t vary the pricing – you’ve got to play pricing games to profit off of premium offerings at least some of the time, and begin the process of moving to value pricing.” So what’s going on here? It’s pretty simple…

Fun Friday: Happy Birthday 386BSD!

Our deeds determine us, as much as we determine our deeds. – George Eliot.

Today is a special day. Exactly 14 years ago today, after 15 monthly feature articles on “Porting Unix to the 386” had appeared in Dr. Dobb’s Journal, 386BSD Release 0.0 went public.

A number of 386BSD enthusiasts have noticed that we always favored holidays for releases. St. Patrick, as any good Irish patriot knows, was an escaped Roman slave who led his followers through many perils in the wilds of Europe, evading capture until he reached his home in the British Isles. He was then called to mission in Ireland, converting the Celtic peoples to the Christian faith.

386BSD Release 0.0 launched like a rocket – from zero to 250,000 downloads in a week, at a time before HTTP, web browsers, Internet buzz, keywording, and Internet-based source code management and organization tools, and most importantly, before the business of open source existed (see “The Fun with 386BSD”).

386BSD, conceived in 1989 as a quick, fun yearlong project (see “386BSD: A Modest Proposal”), ended up consuming about six years of our lives (with periodic stints of “real work” at Silicon Valley companies) – years that spanned the birth of my two younger children, the death of my father-in-law, and a move from Berkeley to Los Gatos. We wrote 17 articles (with translations in German and Japanese) plus additional focus feature pieces on new technology issues, three books, thousands of email responses to specific technical questions, and lots and lots of online documentation. Like a writer who discovers there is a thirst for his works, we became committed to a demanding writing and coding schedule.

We did three major public releases (386BSD Release 0.0, 386BSD Release 0.1, and 386BSD Release 1.0) over three years (plus minor releases and custom releases for research and educational groups), in the process rearchitecting the kernel along the lines the Berkeley architectural committee had recommended a decade earlier, to bring BSD into the mainstream. We also invented a few cool mechanisms of our own, like “role-based security” and “automated installation”, and did a heck of a lot of bug fixing, release engineering, and cleanup of other code throughout the release.

We reviewed thousands of submissions of new code and bug fixes, keeping track of items manually because no Internet-based tracking system for open source projects existed yet. We added, tested, and configured thousands of tools, applications, compilers, and other resources that make up a full release, often finding and fixing problems and moving on to the next one in an “assembly line” manner. Most of the resources supplied, from the plethora of applications to utilities to tools, were ones we never personally used (did anyone really need ten different editors?), but others did use them, so we tried our best to give everyone what they needed (or wanted).

Unlike prior Berkeley releases and commercial Unix releases from AT&T/SCO and Sun Microsystems, all this work was funded through our own resources – we got paid for writing articles, and that was a powerful incentive to keep writing more of them, but there wasn’t anything else in it until Dr. Dobb’s released the 386BSD Release 1.0 Reference CDROM. No one back then believed that open source would ever be a valuable business opportunity. Linux was similarly treated for many years.

We had our high points and our low points. Of course, my biggest highs were the birth of my son in 1990, when 386BSD was still domiciled at Berkeley, and the birth of my daughter in 1994, right after 386BSD Release 1.0 went to the Sony factory for final stamping and distribution. (My husband was invited to the 25th anniversary of the Internet party at USC ISI, but had to miss it – the timing was a bit too close.) It was a real kick to see how many people loved 386BSD, and how the network melted down when so many folks wanted a copy! I heard from people in Africa about how they were using it in schools, and from professionals launching open source projects on the side. I heard from people on every continent on the planet except Antarctica – and maybe they were using it there too and just didn’t tell me.

The lows were more personal. The untimely death of my father-in-law, William Leonard Jolitz – a man I loved as a father, who had supported us through Symmetric Computer Systems and this project – came just as we were asked to do 386BSD Release 1.0 through Dr. Dobb’s Journal, and it left us consumed with grief for a long time. We put a lot of ourselves into this work as a kind of personal catharsis, and dedicated the first volume of Source Code Secrets to his memory. The cancer diagnosis of my oldest daughter six months after his death was another dark time. The tumor was later found to be benign during exploratory surgery under a local anesthetic (she was nine years old), as my husband held her hand, talked to her, and watched them work.

[Eerily enough, seven years later she received another cancer diagnosis, right when I was trying to finish up the first 386BSD paper I’d written after a five-year hiatus. After we spent a year constantly battling her HMO to take her tumor seriously, they finally did the biopsy, scheduled a surgeon (booked, cancelled, booked), and removed a tumor the size of a Big Mac. I stayed up until 2am the night before, finishing that paper, while my husband held me together (“seek no mercy from paper submission committees, for ye shall receive none”). While the committee was reading it the next morning, I was sitting in the surgery waiting room with my husband holding my hand. Fortunately, my oldest daughter is now a healthy college student (her dear high school friend Evan, diagnosed earlier with cancer, wasn’t so lucky – after many surgeries at Stanford to remove tumors, he died last year). But what I am, I am by the grace of God, and His grace bestowed upon me did not prove ineffectual. But I labored more strenuously than all the rest – yet it was not I, but God’s grace working with me (1 Corinthians 15:10).]

Perhaps it’s no surprise that working on just one project at a funded company – even an impossibly hard project building transport-protocol dataflow semiconductors – seemed a cakewalk compared to the morning-to-night coding, testing, and writing ritual we lived by for so many years doing 386BSD. At InterProphet, a privately-backed Internet semiconductor company I helped co-found a decade ago this year, I finally got some time to work on products again and to hire and lead a talented professional engineering team that delivered on time and on budget! We all knew the speed-of-light problem on processors was going to hit us within a decade, and that non-Von Neumann mechanisms were the only way to keep up with Moore’s Law. But I still fondly remember those crazy years of 386BSD Mania, and we still use 386BSD in our own startups and projects to this day.

So Happy Birthday 386BSD Release 0.0! It was the start of something really big – an open source revolution. But like most revolutions, it only takes one pebble to start a landslide.

Microsoft’s Ultimate Throughput – Change the Compiler, Not the Processor

I like people who go out on a limb to push for some much needed change in the computer biz. Not that I always like the idea itself – but moxie is so rare nowadays that I have to love the messenger despite the message. So here comes Herb Sutter, Microsoft Architect, pushing the need for real concurrency in software. Sequential is dead, and it’s time for parallelism. Actually, it’s long overdue in the software world.

In the hardware world, we’ve been rethinking Von Neumann architecture for many years – SiliconTCP from InterProphet, a company I co-founded, uses a non-Von Neumann dataflow architecture (state machines and functional units – not instruction code translated to Verilog, because that never works) to bypass the old-style protocol stack in software, because an instruction-based general-purpose processor can never be as efficient for streaming protocols like TCP/IP as our method. Don’t believe me? Check out Figures 2a-b for a graphic of how much time you spend waiting on store-and-forward instead of doing continuous flow processing – the loss for one packet isn’t bad, but do a million and it adds up fast.
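
To make that concrete, here’s a back-of-the-envelope toy model – my own round numbers for frame size, link rate, and stage count, not InterProphet measurements and certainly not Figures 2a-b – of how the store-and-forward wait accumulates:

```python
# Toy model: extra per-packet waiting from store-and-forward copies versus
# continuous-flow (cut-through) processing. All figures below are my own
# round-number assumptions for illustration, not InterProphet measurements.

PACKET_BITS = 1500 * 8      # assume a full 1500-byte Ethernet frame
LINK_RATE = 1e9             # assume a 1 Gb/s link
STAGES = 3                  # assume three store-and-forward stages in the path
PACKETS = 1_000_000

serialization = PACKET_BITS / LINK_RATE   # ~12 microseconds to clock one frame

# A cut-through/dataflow design starts working as soon as the header arrives;
# a store-and-forward design waits for the entire frame at every stage, so each
# extra stage adds roughly one full serialization delay of latency per packet.
extra_per_packet = (STAGES - 1) * serialization

print(f"extra wait per packet: {extra_per_packet * 1e6:.0f} microseconds")
print(f"cumulative extra wait over {PACKETS:,} packets: "
      f"{extra_per_packet * PACKETS:.0f} seconds")
```

Twenty-four microseconds per packet is noise; twenty-four seconds of accumulated waiting over a million packets is not.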

It’s all about throughput now – and throughput means dataflow in hardware. But what about user-level software applications? How can we get them the performance they need when the processor is reaching speed-of-light limits? When a signal needs a full clock cycle just to cross a typical processor at the speed of light – which is roughly where 7-8 GHz puts you – anyone stuck in sequential processing will be outraced by Moore’s Law, multiple cores, and specialized architectures like SiliconTCP.
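
The arithmetic behind that speed-of-light limit is easy to check with round numbers (and real on-chip signals travel well below c, which only brings the wall closer):

```python
# How far can a signal travel in one clock cycle? Round numbers only; real
# on-chip signals propagate well below c, so the limit bites even sooner.
C = 3.0e8                                  # speed of light in vacuum, m/s
for ghz in (7.0, 8.0):
    period = 1.0 / (ghz * 1e9)             # one clock period, in seconds
    print(f"{ghz:.0f} GHz: {period * 1e12:.0f} ps per cycle, "
          f"light covers {C * period * 100:.1f} cm")
```

A 125-143 picosecond clock period buys you roughly four centimeters at the speed of light – about the scale of a processor package.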

Knight-Ridder Sold, But San Jose Mercury News Goes on the Block

So Knight-Ridder got sold, for a premium say some and for a steal say others. Since the San Jose Mercury News is a KR paper, the buyout on the surface appeared to be cause for celebration. Matt Marshall said “But we can confirm that many Knight Ridder employees are breathing a sigh of relief. McClatchy has an excellent reputation for quality journalism, and its headquarters in Sacramento and relative strength in central California means that KR’s Mercury News, Contra Costa Times and other papers in Contra Costa, Monterey and San Luis Obispo will help make the combined company a California powerhouse.”

Maybe so, Matt, but since McClatchy has announced they’re selling twelve papers, including the Merc, because they only go into “high growth markets” (Pruitt, McClatchy CEO, NYTimes), perhaps the staff should put the cork back in the champagne bottle. The only one dancing with glee right now is the SF Chronicle.

More to the point, the analysis of the buyout foresees a lot of debt for McClatchy in a shrinking market. Newspapers aren’t the cash cows they once were, and Internet companies from Google to Craigslist continue to gut both their content and classified revenue. Is this a good buy, or is it a “good bye” for the Merc? One analyst I know here in Silicon Valley said “for what it’s worth, the journalists at that paper might want to polish up their resumes and start blogs ASAP”.

Why Keep Alive “KeepAlive”?

Keepalive in TCP has always been controversial, since it blurs the difference between a dead connection and a moribund one – or, as Vadim Antonov puts it, “the knowledge that connectivity is lost”. Advocates, in contrast, believe that the effort of reclaiming resources needn’t be spent; as David Reed puts it, “there is no reason why a TCP connection should EVER time out merely because no one sends a packet over it.” Antonov expresses a very narrow affirmation of the value of retained state, which is not necessarily useful in the time required, while Reed expresses the reductionist philosophy that no effort should be expended without justification, even if the basis for that repudiation is inherently faulty. But is either truly getting to the heart of the issue? Is it truly important to cleave to the historical constraints of the Internet’s past philosophical design? Or should we consider the question in the context of what is relevant to the Internet today?

I don’t ask these questions frivolously, but with serious intent. While I am a student of history, and find the study of heritage very valuable in technical work (even patents require a love of reasoning over time), we should occasionally look at the world not as we would like it to be, or as it once was, but as it is. Thus, I suspect the question should actually be “What is the point of having a long-lived TCP session with keepalive in the 21st century?” Is this not a security hole ripe for exploitation in an age of ubiquitous bandwidth and zombie machines? Is not the lack of security and credentials in the modern Internet the bane of both service providers and users? This is the heart of the issue.
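
For those who have never actually touched the knob, here’s roughly what turning keepalive on looks like in Python on a Linux box – the 60/10/5 timing values are arbitrary examples (the classic RFC 1122 default is to wait two hours before the first probe), and the TCP_KEEP* options are Linux-specific:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Turn on TCP keepalive probes for this connection.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning (arbitrary example values): start probing after 60
# idle seconds, probe every 10 seconds, give up after 5 unanswered probes.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

sock.connect(("example.com", 80))
```

The argument above is really about whether the stack should be doing this housekeeping on an idle connection at all – and what an attacker can do with a session both endpoints are determined to keep alive forever.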

Running the Microsoft Personnel Gauntlet

Ed Frauenheim of CNET discussed the difficulty of running the Microsoft personnel gauntlet, er, “puzzle”. Why are they so arrogant? Obvious answer – they’re a big fish. And some managers think that if their company is big, so are they, and they act accordingly. However, once they leave the “hive” they usually sink back into the ooze they emerged from in the first place.

When one of the Microsoft recruiters came for me back in the mid-1990s, I ended up hiring him to staff one of my funded startups. I recommend that startups in competitive times recruit a Microsoft recruiter – they’re very good.

On the serious side, the simple reason that Microsoft has difficulty in hiring is their antipathy to anyone who has worked with open source. This “us versus them” mindset has caused them to lose out on very talented people and on new directions in research and development in operating systems.

Fun Friday: VCs Get Googled, Tempel 1 to Get Deep Impact

Well, we’ve finally got the lowdown on the post-IPO Google payoff, courtesy of Bill Burnham, and it’s quite a tidy haul. How much? Theoretically “…all the way back in 1999 Kleiner and Sequoia each invested $12.5M in Google for a 10% stake. Fast forward to the Summer of 2004 and these stakes were worth $2.03BN at Google’s IPO price of $85/share”.

They had to back off on selling all that at the IPO, however, which meant they did even better. According to Kleiner’s distribution statements (SEC Form 4) “… to date they have distributed shares worth $3.549BN. They still have another 2.6M shares worth $752M as of yesterday’s close, so the total value of their stake is $4.3BN which represents a 344X return on their investment of $12.5M … not too shabby”.

What about Sequoia? “making an educated guess they have returned about $3.8BN to date and have stock worth another $940M left to distribute for a total return of close to $4.7BN which is about $200M higher than Kleiner’s $4.5BN (with the mystery shares). Based on their $200M more in proceeds for the same stake and their careful doling out of shares to protect the market, Sequoia wins the award for best distribution process”.
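
The multiples are easy to sanity-check from the figures quoted above (Burnham’s rounded numbers, ignoring the mystery shares):

```python
# Sanity check on the quoted returns, using Burnham's rounded figures and
# ignoring the "mystery shares".
invested = 12.5e6                       # each firm's 1999 investment

kleiner_total = 3.549e9 + 752e6         # shares distributed + shares still held
sequoia_total = 3.8e9 + 940e6           # estimated distributions + remaining stock

for name, total in (("Kleiner", kleiner_total), ("Sequoia", sequoia_total)):
    print(f"{name}: ${total / 1e9:.2f}BN total, {total / invested:.0f}x on $12.5M")
```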

For those of you not sponging off one of the Class A VCs, look toward the heavens (or NASA TV). Tempel 1 is scheduled to be hit by Deep Impact to determine if it really is a dirty snowball or a dirty dustball. Unless you have a rather large (11-inch or better) aperture telescope, watch it on the Internet – at magnitude 11 it will be pretty hard to spot without a lot of observing experience.

So for all those unhappy people who didn’t make out like bandits on the Google IPO, repeat after me: “The best things in life are free”. At least, until Google figures out a way to put banner ads on Tempel 1.

Squandered Victory a Fascinating Talk

Larry Diamond of the Hoover Institution, Stanford University, spoke yesterday at a special PARC forum on “Our Squandered Victory and the Prospects for Democracy in Iraq”. I must admit, I was skeptical that I would find him an agreeable (or even informed) speaker – I’m not a great fan of the Hoover Institution. But he knew his stuff, was right on the money about the money (the billions spent on this war), had lots of those “where did they get those guys” stories of screwups in Iraq (our guys – not their guys), and presented a thorough, convincing argument, from an insider’s perspective, for how badly the administration has bungled the job.

Why is he an “insider”? Apparently Larry Diamond was asked by Condoleezza Rice to go to Baghdad as an adviser to the American occupation authorities. Diamond wasn’t an Iraq war supporter, but he said he thought creating a “viable democracy” was important. He was there last year.

One of the best speakers I’ve seen this year. He answered every question, and met critics head-on. I wish more Americans could talk to him as someone who’s really “been there”. It’s one way to cut through the spin and make your own “fair and balanced” decision.

TCP Protocols and Unfair Advantage – Being the Ultimate Pig on the Bandwidth Block

A little item on stack fairness from the testing of proposed TCP protocols at the Hamilton Institute at the National University of Ireland, Maynooth.

According to Douglas Leith:
“In summary, we find that both Scalable-TCP and FAST-TCP consistently exhibit substantial unfairness, even when competing flows share identical network path characteristics. Scalable-TCP, HS-TCP, FAST-TCP and BIC-TCP all exhibit much greater RTT unfairness than does standard TCP, to the extent that long RTT flows may be completely starved of bandwidth. Scalable-TCP, HS-TCP and BIC-TCP all exhibit slow convergence and sustained unfairness following changes in network conditions such as the start-up of a new flow. FAST-TCP exhibits complex convergence behaviour.”

What’s this mean? Simple. In order to get more for themselves, these approaches starve everyone else – the “pig at the trough” mentality. But what might work for a single flow in a carefully contrived test rig can immediately start to backfire once more complex, “real world” flows are introduced.
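
For readers wondering why RTT matters at all, here’s a deliberately crude toy model – invented parameters, nothing like the Hamilton Institute’s actual test rig – of two TCP-like AIMD flows sharing one bottleneck. The short-RTT flow simply gets to grow its window ten times as often:

```python
# Crude toy model of RTT unfairness: two AIMD (TCP-like) flows share one
# bottleneck. Every parameter here is invented for illustration; this is not
# the Hamilton Institute's test setup, and the aggressive window-growth rules
# of Scalable, HS, BIC, and FAST would skew the result even further.

CAPACITY = 100                              # bottleneck capacity, packets in flight
RTT = {"short": 1, "long": 10}              # round-trip times, in simulation ticks
window = {"short": 1.0, "long": 1.0}        # congestion windows
delivered = {"short": 0.0, "long": 0.0}     # cumulative throughput

for tick in range(1, 100_001):
    for name in window:
        delivered[name] += window[name] / RTT[name]    # sending rate = window / RTT
        if tick % RTT[name] == 0:
            window[name] += 1.0                        # additive increase, once per RTT
    if window["short"] + window["long"] > CAPACITY:    # shared loss event
        for name in window:
            window[name] /= 2.0                        # multiplicative decrease

total = sum(delivered.values())
for name in delivered:
    print(f"{name}-RTT flow: {100.0 * delivered[name] / total:.1f}% of delivered traffic")
```

Even plain AIMD hands the short-RTT flow nearly everything in this toy; Leith’s point is that the proposed high-speed variants make that starvation dramatically worse and slower to correct.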

There have been concerns for years that these approaches could wreak havoc on the Internet if not carefully vetted. I’m pleased to see that someone is actually testing these proposed protocols for unfairness and for their impact on network traffic. After 30 years of tuning the Internet, taking a hammer to it protocol-wise isn’t just bad science – it’s bad global policy.

When Your Bandwidth Runs Out

Tom Foremski of SiliconValleyWatcher had an amusing item about how awful it is to be successful enough to “run out of bandwidth”. “SiliconValleyWatcher was off line for about 6 hours as traffic surged above our monthly quota. And I couldn’t open up the pipes because there was no way to buy more bandwidth online. I found that I would have to wait until the next morning and email the sales department!!!”

This little problem is why you negotiate with a managed service provider for overage bandwidth. A good ISP should be calling Tom about his burst, not waiting for Tom to call them after his blog has been knocked offline as punishment for the sin of being successful. But negotiating bandwidth overages when you are a small business isn’t usually done – everything is so “on the cheap” that even simple contract items (which could be automated) don’t exist. Is it any wonder I run my own datacenter?
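
For what it’s worth, the automation that would have saved Tom’s evening is trivial. Here’s a minimal sketch – the quota, thresholds, and notify() hook are placeholders I made up, not anything his provider actually runs:

```python
# Minimal sketch of a bandwidth-quota watchdog. The quota, thresholds, and
# notify() hook are placeholders; a real provider would tie this into billing
# and provision overage automatically instead of cutting the customer off and
# waiting for an email to the sales department.

MONTHLY_QUOTA_GB = 200.0        # assumed contract quota
WARN_AT = 0.80                  # warn the customer at 80% of quota
OVERAGE_AT = 0.95               # offer (or auto-provision) overage at 95%

def notify(message: str) -> None:
    # Placeholder: in practice, page the customer and the provider's sales desk.
    print("ALERT:", message)

def check_usage(used_gb: float) -> None:
    fraction = used_gb / MONTHLY_QUOTA_GB
    if fraction >= OVERAGE_AT:
        notify(f"{used_gb:.0f} GB used ({fraction:.0%}): provision overage now.")
    elif fraction >= WARN_AT:
        notify(f"{used_gb:.0f} GB used ({fraction:.0%}): heading for the cap.")

check_usage(192.0)   # example: 96% of quota triggers the overage alert
```

Run something like this hourly and you are selling overage instead of taking your most successful customers offline.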

I wrote about this in one of my essays on datacenter management and monitoring. I’ve been told that no one needs to know this stuff anymore, because everything works perfectly. Think that’s the case?