Fun Friday: Happy Birthday 386BSD!

Our deeds determine us, as much as we determine our deeds. – George Eliot.

Today is a special day. Exactly 14 years ago today, after 15 monthly feature articles on Porting Unix to the 386 appeared in Dr. Dobb’s Journal, 386BSD Release 0.0 went public.

A number of 386BSD enthusiasts have noticed that we always favored holidays for releases – Release 0.0 went out on St. Patrick’s Day. St. Patrick, as any good Irish patriot knows, was an escaped Roman slave who led his followers through many perils in the wilds of Europe, evading capture until he reached his home in the British Isles. He was then called to a mission in Ireland, converting the Celtic peoples to the Christian faith.

386BSD Release 0.0 launched like a rocket – from zero to 250,000 downloads in a week, at a time before HTTP, web browsers, Internet buzz, search keywords, or Internet tools for managing and organizing source code – and, most importantly, before the business of open source existed (see The Fun with 386BSD).

386BSD, conceived in 1989 as a quick, fun, yearlong project (see “386BSD: A Modest Proposal”), ended up consuming about six years of our lives (with occasional interludes of “real work” at Silicon Valley companies), spanning the birth of my two younger children, the death of my father-in-law, and a move from Berkeley to Los Gatos. We wrote 17 articles (with translations into German and Japanese) plus additional feature pieces on new technology issues, three books, thousands of email responses to specific technical questions, and lots and lots of online documentation. Like a writer who discovers there is a thirst for his works, we became committed to a demanding writing and coding schedule.

We did three major public releases (386BSD Release 0.0, 386BSD Release 0.1, and 386BSD Release 1.0) over three years (plus minor releases and custom releases for research and educational groups), in the process rearchitecting the kernel according to the recommendations the Berkeley architectural committee had made a decade earlier, to bring BSD into the mainstream. We also invented a few cool mechanisms of our own, like “role-based security” and “automated installation”, and did a heck of a lot of bug fixing, release engineering, and repair of other code throughout the release.

We reviewed thousands of submissions of new code and bug fixes, tracking items by hand because no Internet-based tracking system for open source work existed yet. We added, tested, and configured thousands of tools, applications, compilers, and other resources that make up a full release, often finding and fixing problems and moving on to the next one in an “assembly line” manner. Most of the resources supplied – from the plethora of applications to utilities to tools – were ones we never personally used (did anyone really need ten different editors?), but others did, so we tried our best to give everyone what they needed (or wanted).

Unlike prior Berkeley releases and commercial Unix releases from AT&T/SCO and Sun Microsystems, all this work was funded through our own resources. We got paid for writing articles, and that was a powerful incentive to keep writing more of them, but there wasn’t anything else in it until Dr. Dobb’s released the 386BSD Release 1.0 Reference CD-ROM. No one back then believed that open source would ever be a valuable business opportunity. Linux was treated similarly for many years.

We had our high points and our low points. Of course, my biggest highs were the birth of my son in 1990, when 386BSD was still domiciled at Berkeley, and the birth of my daughter in 1994, right after 386BSD Release 1.0 went to the Sony factory for final stamping and distribution. (My husband was invited to the 25th anniversary of the Internet party at USC ISI, but had to miss it – the timing was a bit too close.) It was a real kick to see how many people loved 386BSD, and how it melted down the network when so many folks wanted a copy! I heard from people in Africa about how they were using it in schools, and from professionals launching open source projects on the side. I heard from people on every continent on the planet except Antarctica – and maybe they were using it there too and didn’t tell me.

The lows were more personal. The untimely death of my father-in-law, William Leonard Jolitz – a man I loved as a father, who had supported us through Symmetric Computer Systems and this project – came just as we were asked to do 386BSD Release 1.0 through Dr. Dobb’s Journal, and it left us consumed with grief for a long time. We put a lot of ourselves into this work as a kind of personal catharsis, and dedicated the first volume of Source Code Secrets to his memory. The cancer diagnosis of my oldest daughter six months after his death was also a dark time. The tumor was later found to be benign during exploratory surgery under a local anesthetic (she was nine years old), as my husband held her hand, talked to her, and watched them work.

[Eerily enough, seven years later she received another cancer diagnosis, right when I was trying to finish the first 386BSD paper I’d written after a five-year hiatus. After we spent a year constantly battling her HMO to take her tumor seriously, they finally did the biopsy, scheduled a surgeon (booked, cancelled, booked), and removed a tumor the size of a Big Mac. I stayed up until 2am the night before, finishing that paper while my husband held me together (“seek no mercy from paper submission committees, for ye shall receive none”). While the committee was reading it the next morning, I was sitting in the surgery waiting room with my husband holding my hand. Fortunately, my oldest daughter is now a healthy college student (her dear high school friend Evan, diagnosed earlier with cancer, wasn’t so lucky – after many surgeries at Stanford to remove tumors, he died last year). But what I am, I am by the grace of God, and His grace bestowed upon me did not prove ineffectual; I labored more strenuously than all the rest – yet it was not I, but God’s grace working with me (1 Corinthians 15:10).]

Perhaps it’s no surprise that working on just one project at a funded company – even an impossibly hard project building transport-protocol dataflow semiconductors – seemed a cakewalk compared to the morning-to-night coding, testing, and writing ritual we lived by for so many years doing 386BSD. At InterProphet, a privately backed Internet semiconductor company I helped co-found a decade ago this year, I finally got some time to work on products again and to hire and lead a talented professional engineering team that delivered on time and on budget! We all knew the speed-of-light limit on processors was going to hit us within a decade, and non-von Neumann mechanisms were the only way to keep up with Moore’s Law. But I still fondly remember those crazy years of 386BSD Mania, and we still use 386BSD in our own startups and projects to this day.

So Happy Birthday, 386BSD Release 0.0! It was the start of something really big – an open source revolution. And like most revolutions, it took only one pebble to start the landslide.

Microsoft’s Ultimate Throughput – Change the Compiler, Not the Processor

I like people who go out on a limb to push for some much-needed change in the computer biz. Not that I always like the idea itself – but moxie is so rare nowadays that I have to love the messenger despite the message. So here comes Herb Sutter, Microsoft architect, pushing the need for real concurrency in software. Sequential is dead, and it’s time for parallelism. Actually, it’s long overdue in the software world.
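His argument, in practical terms, is that applications have to be decomposed so the work spreads across cores instead of waiting on a faster clock. As a toy illustration of the sort of decomposition he means – this is not Sutter’s code, and the thread count and data size are arbitrary assumptions of mine – here’s a parallel sum in C with POSIX threads:

```c
/* Toy sketch: split a sequential loop across threads so throughput
 * scales with cores rather than clock speed. Thread count and array
 * size are arbitrary illustrative choices. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static double data[N];

struct slice { int begin, end; double sum; };

static void *partial_sum(void *arg)
{
    struct slice *s = arg;
    s->sum = 0.0;
    for (int i = s->begin; i < s->end; i++)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice sl[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* Each thread sums a disjoint slice; there is no shared mutable
     * state, so no locks are needed until the final combine. */
    for (int t = 0; t < NTHREADS; t++) {
        sl[t].begin = t * (N / NTHREADS);
        sl[t].end   = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, partial_sum, &sl[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += sl[t].sum;
    }
    printf("total = %f\n", total);
    return 0;
}
```

The disjoint-slices design is the interesting part: the hard work of concurrency is in partitioning the data so threads don’t contend, not in spawning the threads.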

In the hardware world, we’ve been rethinking von Neumann architecture for many years – SiliconTCP from InterProphet, a company I co-founded, uses a non-von Neumann dataflow architecture (state machines and functional units – not instruction code translated to Verilog, because that never works) to bypass the old-style software protocol stack, since an instruction-based general-purpose processor can never be as efficient for streaming protocols like TCP/IP as our method. Don’t believe me? Check out Figures 2a-b for a graphic on how much you wait for store-and-forward instead of continuous flow processing – the loss for one packet isn’t bad, but do a million and it adds up fast.
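To get a rough feel for why it adds up, here’s a back-of-the-envelope model – the link speed, packet size, and packet count below are made-up illustrative numbers, not SiliconTCP measurements:

```c
/* Back-of-the-envelope model of the store-and-forward penalty.
 * All numbers are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    const double link_bps    = 1e9;        /* assume a 1 Gb/s link      */
    const double packet_bits = 1500 * 8.0; /* assume 1500-byte packets  */
    const double npackets    = 1e6;        /* "do a million"            */

    /* Store-and-forward buffers the entire packet before any processing
     * starts, so each packet pays a full serialization delay up front.
     * Continuous-flow (dataflow) processing overlaps work with
     * reception, hiding that delay instead of accumulating it. */
    double per_packet_wait = packet_bits / link_bps;   /* ~12 us */

    printf("per-packet wait: %.1f us\n", per_packet_wait * 1e6);
    printf("over %.0f gated packets: %.1f s of accumulated waiting\n",
           npackets, per_packet_wait * npackets);
    return 0;
}
```

Under these assumptions a single packet waits about 12 microseconds – harmless – but a million packets whose results gate further work accumulate roughly 12 seconds of pure waiting.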

It’s all about throughput now – and throughput means dataflow in hardware. But what about user-level software applications? How can we get them the performance they need when the processor is reaching speed-of-light limits? At 7-8 GHz, a signal moving at the speed of light can cross a typical processor from one end to the other only about once per clock cycle; anyone stuck in sequential processing will be outraced by Moore’s Law, multiple cores, and specialized architectures like SiliconTCP.
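A quick sanity check on that figure, assuming vacuum light speed (real on-chip signals propagate at only a fraction of c, so the budget is even tighter):

$$ d = \frac{c}{f} \approx \frac{3 \times 10^{8}\ \mathrm{m/s}}{7.5 \times 10^{9}\ \mathrm{Hz}} = 4\ \mathrm{cm} $$

Four centimeters is roughly the scale of a processor package, so at those clock rates a signal gets about one chip-crossing per cycle even under the most generous physics.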

Knight-Ridder Sold, But San Jose Mercury News Goes on the Block

So Knight-Ridder got sold – for a premium, say some; for a steal, say others. Since the San Jose Mercury News is a KR paper, the buyout on the surface appeared to be cause for celebration. Matt Marshall said, “But we can confirm that many Knight Ridder employees are breathing a sigh of relief. McClatchy has an excellent reputation for quality journalism, and its headquarters in Sacramento and relative strength in central California means that KR’s Mercury News, Contra Costa Times and other papers in Contra Costa, Monterey and San Luis Obispo will help make the combined company a California powerhouse.”

Maybe so, Matt, but since McClatchy has announced they’re selling twelve papers, including the Merc, because they only go into “high growth markets” (Pruitt, McClatchy CEO, NYTimes), perhaps the staff should put the cork back into the champagne bottle. The only one dancing with glee right now is the SF Chronicle.

More to the point, analysis of the buyout foresees a lot of debt for McClatchy in a shrinking market. Newspapers aren’t the cash cows they once were, and Internet companies from Google to Craigslist continue to gut both their content and their classified revenue. Is this a good buy, or is it a “good bye” for the Merc? One analyst I know here in Silicon Valley said “for what it’s worth, the journalists at that paper might want to polish up their resumes and start blogs ASAP”.

Why Keep Alive “KeepAlive”?

Keepalive in TCP has always been controversial, since it blurs the difference between a dead connection and a moribund one – what it buys you, as Vadim Antonov puts it, is “the knowledge that connectivity is lost”. Advocates of persistent connections, in contrast, believe that the effort of reclaiming resources needn’t be made at all; as David Reed puts it, “there is no reason why a TCP connection should EVER time out merely because no one sends a packet over it.” Antonov expresses a very narrow affirmation of the value of retained state, which is not necessarily useful in the time required, while Reed expresses the reductionist philosophy that no effort should be expended without justification, even if the basis of that repudiation is inherently faulty. But is either truly getting to the heart of the issue? Is it truly important to cleave to the historical constraints of past Internet philosophical design? Or should we consider the question in the context of what is relevant to the Internet today?
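For readers who haven’t bumped into it: keepalive is off by default and opted into per socket. A minimal sketch using the BSD sockets API – SO_KEEPALIVE itself is standard, while TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are Linux-specific knobs (other systems spell these differently), and the timing values are purely illustrative:

```c
/* Minimal sketch: enabling TCP keepalive on a socket. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

static int enable_keepalive(int fd)
{
    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) {
        perror("SO_KEEPALIVE");
        return -1;
    }
#ifdef TCP_KEEPIDLE
    int idle  = 600; /* first probe after 10 minutes of silence */
    int intvl = 60;  /* then probe once a minute */
    int cnt   = 5;   /* give up after 5 unanswered probes */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
#endif
    return 0;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    /* ...connect(fd, ...) to a peer here, then: */
    return enable_keepalive(fd) < 0 ? 1 : 0;
}
```

Those knobs embody exactly the tradeoff Antonov and Reed are arguing about: how long silence is tolerated before the stack decides the peer is gone.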

I don’t ask these questions frivolously, but with serious intent. While I am a student of history, and find the study of heritage very valuable in technical work (even patents require a love of reasoning over time), we should occasionally look at the world not as we would like it to be, or as it was, but as it is. Thus, I suspect the question should actually be “What is the point of having a long-lived TCP session with keepalive in the 21st century?” Is this not a security hole ripe for exploitation in an age of ubiquitous bandwidth and zombie machines? Is not the lack of security and credentials in the modern Internet the bane of both service providers and users? This is the heart of the issue.