Microsoft’s Ultimate Throughput – Change the Compiler, Not the Processor

I like people who go out on a limb to push for some much-needed change in the computer biz. Not that I always like the idea itself, but moxie is so rare nowadays that I have to love the messenger despite the message. So here comes Herb Sutter, Microsoft architect, pushing the need for real concurrency in software. Sequential is dead, and it’s time for parallelism. Actually, it’s long overdue in the software world.

In the hardware world, we’ve been rethinking Von Neumann architecture for many years. SiliconTCP from InterProphet, a company I co-founded, uses a non-Von Neumann dataflow architecture (state machines and functional units, not instruction code translated to Verilog, because that never works) to bypass the old-style protocol stack in software; an instruction-based general-purpose processor can never be as efficient for streaming protocols like TCP/IP as our method. Don’t believe me? Check out Figures 2a-b for a graphic on how long you wait for store-and-forward instead of doing continuous-flow processing. The loss for one packet isn’t bad, but do a million and it adds up fast.
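To see why that waiting matters, here’s a toy back-of-the-envelope sketch in Python. This is not InterProphet’s design, and the link speed and packet size are assumptions, but it shows how per-packet stalls accumulate:

```python
# A toy illustration, not InterProphet's design: the link speed and
# packet size below are assumptions. Store-and-forward waits for a
# full packet to arrive before the stack touches it; a dataflow design
# overlaps that wait with processing as the bytes stream in.
PKT_BYTES = 1500                 # full-size Ethernet frame
LINK_BPS = 1e9                   # assume a 1 Gb/s link
PACKETS = 1_000_000

stall_s = PKT_BYTES * 8 / LINK_BPS        # serialization wait per packet
print(f"per-packet stall: {stall_s * 1e6:.0f} us")                   # -> 12 us
print(f"total over {PACKETS:,} packets: {stall_s * PACKETS:.0f} s")  # -> 12 s
# 12 microseconds per packet is invisible; 12 seconds of dead time
# across a million packets is not.
```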

It’s all about throughput now, and throughput means dataflow in hardware. But what about user-level software applications? How can we get them the performance they need when the processor is reaching speed-of-light limits? At 7-8 GHz, a signal moving at the speed of light can barely cross a typical processor in a single clock cycle; anyone stuck in sequential processing will be outraced by Moore’s Law, multiple cores, and specialized architectures like SiliconTCP.
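A quick sanity check on that speed-of-light claim, assuming vacuum light speed and a die a few centimeters across (real on-chip signals propagate well below c, so the squeeze is actually worse):

```python
# Sanity check: how far does light travel in one clock cycle at 7-8 GHz?
# Assumes vacuum light speed; on-chip signals are slower, so reality is tighter.
C = 299_792_458.0                 # speed of light, m/s

for ghz in (7.0, 8.0):
    cycle_s = 1.0 / (ghz * 1e9)   # duration of one clock cycle
    reach_cm = C * cycle_s * 100.0
    print(f"{ghz:.0f} GHz: {reach_cm:.1f} cm per cycle")
# -> about 4.3 cm at 7 GHz and 3.7 cm at 8 GHz: roughly one trip across
#    a large die per tick, with nothing left over.
```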

Knight-Ridder Sold, But San Jose Mercury News Goes on the Block

So Knight-Ridder got sold, for a premium say some and for a steal say others. Since the San Jose Mercury News is a KR paper, the buyout on the surface appeared to be cause for celebration. Matt Marshall said “But we can confirm that many Knight Ridder employees are breathing a sigh of relief. McClatchy has an excellent reputation for quality journalism, and its headquarters in Sacramento and relative strength in central California means that KR’s Mercury News, Contra Costa Times and other papers in Contra Costa, Monterey and San Luis Obispo will help make the combined company a California powerhouse.”

Maybe so, Matt, but since McClatchy has announced they’re selling twelve papers, including the Merc, because they only go into “high growth markets” (Pruitt, McClatchy CEO, NYTimes), perhaps the staff should put the cork back into the champagne bottle. The only one dancing with glee right now is the SF Chronicle.

More to the point, the analysis of the buyout foresees a lot of debt for McClatchy in a shrinking market. Newspapers aren’t the cash cows they once were, and Internet companies from Google to Craigslist continue to gut both their content and classified revenue. Is this a good buy, or is it a “good bye” for the Merc? One analyst I know here in Silicon Valley said “for what it’s worth, the journalists at that paper might want to polish up their resumes and start blogs ASAP”.

Why Keep Alive “KeepAlive”?

Keepalive in TCP has always been controversial, since it blurs the difference between a dead connection and a moribund one, or as Vadim Antonov puts it, “the knowledge that connectivity is lost”. Advocates, in contrast, believe the effort of reclaiming resources needn’t be expended at all; as David Reed puts it, “there is no reason why a TCP connection should EVER time out merely because no one sends a packet over it.” Antonov offers a very narrow affirmation of the value of retained state, one that is not necessarily useful in the time required, while Reed expresses the reductionist philosophy that no effort should be expended without justification, even if the basis of the repudiation is inherently faulty. But is either truly getting to the heart of the issue? Is it truly important to cleave to the historical constraints of the Internet’s original philosophical design? Or should we consider the question in the context of what is relevant to the Internet today?
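For readers who haven’t met the mechanism itself, here’s a minimal sketch of turning keepalive on for a socket in Python. The TCP_KEEP* knobs are Linux-specific, and the values are purely illustrative, not recommendations:

```python
# A minimal sketch of the mechanism under debate: enabling TCP keepalive
# on a socket. The TCP_KEEP* options are Linux-specific and the values
# are illustrative, not recommendations.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # turn probes on
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)   # idle seconds before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)   # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # unanswered probes before the
                                                                # connection is declared dead
```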

I don’t ask these questions frivolously, but with a serious intent. While I am a student of history, and find the study of heritage very valuable in technical work (even patents require a love of reasoning over time), we should occasionally look at the world not as we would like it to be, or as it was, but as it is. Thus, I suspect the question should actually be “What is the point of having a long-lived TCP session with keepalive in the 21st century?” Is this not a security hole ripe for exploitation in an age of ubiquitous bandwidth and zombie machines? Is not the lack of security and credentials in the modern Internet the bane of both service providers and users? This is the heart of the issue.

Running the Microsoft Personnel Gauntlet

Ed Frauenheim of CNET discussed the difficulty of running the Microsoft personnel gauntlet, er, “puzzle”. Why are they so arrogant? The obvious answer: they’re a big fish. And some managers think that if their company is big, so are they, and act accordingly. However, once they leave the “hive” they usually sink back into the ooze they emerged from in the first place.

When one of the Microsoft recruiters came for me back in the mid-1990s, I ended up hiring him to staff one of my funded startups. I recommend that startups in competitive times recruit a Microsoft recruiter – they’re very good.

On the serious side, the simple reason that Microsoft has difficulty in hiring is their antipathy to anyone who has worked with open source. This “us versus them” mindset has caused them to lose out on very talented people and on new directions in research and development in operating systems.

Fun Friday: VCs Get Googled, Tempel 1 to Get Deep Impact

Well, we’ve finally got the lowdown on the post-IPO Google payoff, courtesy of Bill Burnham, and it’s quite a tidy haul. How much? Theoretically “…all the way back in 1999 Kleiner and Sequoia each invested $12.5M in Google for a 10% stake. Fast forward to the Summer of 2004 and these stakes were worth $2.03BN at Google’s IPO price of $85/share”.

They had to back off on selling all that at the IPO, however, which meant they did even better. According to Kleiner’s distribution statements (SEC Form 4) “… to date they have distributed shares worth $3.549BN. They still have another 2.6M shares worth $752M as of yesterday’s close, so the total value of their stake is $4.3BN which represents a 344X return on their investment of $12.5M … not too shabby”.

What about Sequoia? “making an educated guess they have returned about $3.8BN to date and have stock worth another $940M left to distribute for a total return of close to $4.7BN which is about $200M higher than Kleiner’s $4.5BN (with the mystery shares). Based on their $200M more in proceeds for the same stake and their careful doling out of shares to protect the market, Sequoia wins the award for best distribution process”.
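The arithmetic checks out, by the way. Here’s a quick tally using only the figures quoted above:

```python
# Checking Burnham's numbers, using only the figures quoted in the post.
invested = 12.5e6                       # Kleiner's 1999 investment
distributed = 3.549e9                   # value of shares already distributed
remaining = 752e6                       # 2.6M shares still held
total = distributed + remaining
print(f"Kleiner total: ${total / 1e9:.1f}BN")          # -> $4.3BN
print(f"Return multiple: {total / invested:.0f}X")     # -> 344X

sequoia_total = 3.8e9 + 940e6           # distributed + still to distribute
print(f"Sequoia total: ${sequoia_total / 1e9:.2f}BN")  # -> $4.74BN, "close to $4.7BN"
```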

For those of you not sponging off one of the Class A VCs, look toward the heavens (or NASA TV). Tempel 1 is scheduled to be hit by Deep Impact to determine if it really is a dirty snowball or a dirty dustball. Unless you have a rather large (11-inch or better) aperture telescope, watch it on the Internet – the comet will be around magnitude 11 and pretty hard to spot unless you’re very experienced.

So for all those unhappy people who didn’t make out like bandits on the Google IPO, repeat after me: “The best things in life are free”. At least, until Google figures out a way to put banner ads on Tempel 1.

Squandered Victory a Fascinating Talk

Larry Diamond of the Hoover Institution, Stanford University, spoke yesterday at a special PARC forum on “Our Squandered Victory and the Prospects for Democracy in Iraq”. I must admit, I was skeptical that I would find him an agreeable (or even informed) speaker – I’m not a great fan of the Hoover Institution. But he knew his stuff, was right on the money about the money (the billions spent on this war), had lots of those “where did they get those guys” stories of screwups in Iraq (our guys – not their guys), and presented a thorough, convincing argument, from an insider’s perspective, for how badly the administration has bungled the job.

Why is he an “insider”? Apparently Larry Diamond was asked by Condoleezza Rice to go to Baghdad as an adviser to the American occupation authorities. Diamond wasn’t an Iraq war supporter, but he said he thought creating a “viable democracy” was important. He was there last year.

One of the best speakers I’ve seen this year. He answered every question, and met critics head-on. I wish more Americans could talk to him as someone who’s really “been there”. It’s one way to cut through the spin and make your own “fair and balanced” decision.

TCP Protocols and Unfair Advantage – Being the Ultimate Pig on the Bandwidth Block

A little item from the testing side: the Hamilton Institute at the National University of Ireland, Maynooth, has been evaluating proposed TCP protocols for stack fairness.

According to Douglas Leith:
“In summary, we find that both Scalable-TCP and FAST-TCP consistently exhibit substantial unfairness, even when competing flows share identical network path characteristics. Scalable-TCP, HS-TCP, FAST-TCP and BIC-TCP all exhibit much greater RTT unfairness than does standard TCP, to the extent that long RTT flows may be completely starved of bandwidth. Scalable-TCP, HS-TCP and BIC-TCP all exhibit slow convergence and sustained unfairness following changes in network conditions such as the start-up of a new flow. FAST-TCP exhibits complex convergence behaviour.”

What’s this mean? Simple. In order to get more for themselves, these approaches starve everyone else – the “pig at the trough” mentality. But what might work for a single flow in a carefully contrived test rig can immediately start to backfire once more complex “real world” flows are introduced. Even vanilla TCP shows the RTT bias, as the toy simulation below illustrates.
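Here’s a toy simulation, nothing like the Hamilton test rig, with purely illustrative parameters: two flows share a fixed-capacity bottleneck, each grows its window by one packet per its own RTT, and both halve on overflow. Because the short-RTT flow gets twenty growth opportunities for every one the long-RTT flow gets, it crowds the other flow out; the aggressive window rules Leith tested amplify exactly this effect.

```python
# Toy AIMD competition: two flows, one shared bottleneck. Parameters are
# illustrative assumptions, not measurements from the Hamilton study.
CAPACITY = 100                          # bottleneck, in packets in flight
rtts = {"short (10 ms RTT)": 0.010, "long (200 ms RTT)": 0.200}
cwnd = {name: 1.0 for name in rtts}     # start both windows at 1 packet

t, dt = 0.0, 0.010
while t < 60.0:                         # simulate one minute
    for name, rtt in rtts.items():
        cwnd[name] += dt / rtt          # additive increase: +1 packet per own RTT
    if sum(cwnd.values()) > CAPACITY:   # shared queue overflows:
        for name in cwnd:
            cwnd[name] /= 2.0           # both flows see loss and halve
    t += dt

total = sum(cwnd.values())
for name, w in cwnd.items():
    print(f"{name}: {100 * w / total:.0f}% of the shared window")
# -> the short-RTT flow ends up with roughly 95% of the window; and since
#    throughput is cwnd/RTT, the delivered-rate gap is wider still.
```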

There have been concerns for years that these approaches could wreak havoc on the Internet if not carefully vetted. I’m pleased to see someone actually is testing these proposed protocols for unfairness and the impact on network traffic. After 30 years of tuning the Internet, taking a hammer to it protocol-wise isn’t just bad science – it’s bad global policy.

When Your Bandwidth Runs Out

Tom Foremski of SiliconValleyWatcher had an amusing item about how awful it is to be successful enough to “run out of bandwidth”. “SiliconValleyWatcher was off line for about 6 hours as traffic surged above our monthly quota. And I couldn’t open up the pipes because there was no way to buy more bandwidth online. I found that I would have to wait until the next morning and email the sales department!!!”

This little problem is why you negotiate with a managed service provider for overage bandwidth. A good ISP should be calling Tom about his burst, not waiting for Tom to call them after his blog has been knocked offline as punishment for the sin of being successful. But negotiating bandwidth overages when you are a small business isn’t usually done – everything is so “on the cheap” that even simple contract items (which could be automated, as the sketch below shows) don’t exist. Is it any wonder I run my own datacenter?
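As a sketch of how simple that automation could be (the quota, the threshold, and the alert policy here are all hypothetical), the contract item is little more than a threshold check run against the usage meter:

```python
# Hypothetical sketch of the automation the paragraph says is missing:
# a periodic check that flags a customer approaching quota so the ISP
# calls *them*. Quota, threshold, and policy strings are made up.
MONTHLY_QUOTA_GB = 500
ALERT_FRACTION = 0.8        # warn at 80% so there's time to negotiate burst

def check_overage(used_gb: float) -> str:
    if used_gb >= MONTHLY_QUOTA_GB:
        return "OVER QUOTA: apply negotiated burst rate, don't cut service"
    if used_gb >= ALERT_FRACTION * MONTHLY_QUOTA_GB:
        return "WARN: contact customer about an overage agreement"
    return "OK"

print(check_overage(412.0))   # -> WARN at 82% of quota, before any outage
```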

I wrote about this in one of my essays on datacenter management and monitoring. I’ve been told that no one needs to know this stuff anymore, because everything works perfectly. Think that’s the case?

Jitter, Jitter Everywhere, But Nary a Packet to Keep

I was looking over the end-to-end discussion on measuring jitter for voice calls on the backbone and came across this little gem: “Jitter – or more precise delay variance – is not important. Only the distribution is relevant”. This dismissive treatment of a serious subject is all too commonplace, and it misses the point the other researcher was making.

The critic assumes “fixed playout buffer lengths (e.g. from 20 to 200ms)” to calculate overall delay. But do these buffer lengths take into account compressed versus uncompressed audio? If not, the model is faulty right there. The author admits his approach is “problematic” but then assumes that “real-time adaptive playout scheduling” would be better. At that point, though, the measurement mechanism becomes part of the measurement: you end up measuring the adaptation instead of the unmodified delay, which doesn’t help a researcher looking at jitter and delay measurements for voice.
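For concreteness, here’s one standard way to pin down delay variation without dragging any playout-buffer policy into the measurement: the interarrival jitter estimator from RFC 3550 (the RTP spec), a smoothed mean deviation of transit-time differences. A minimal sketch, with made-up timestamps:

```python
# RFC 3550 interarrival jitter: a running, smoothed mean deviation of
# per-packet transit-time differences. No playout buffer involved.
def rtp_jitter(send_times, recv_times):
    """send_times/recv_times: per-packet timestamps in seconds."""
    jitter, prev_transit = 0.0, None
    for s, r in zip(send_times, recv_times):
        transit = r - s                      # one-way transit for this packet
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # change in transit vs last packet
            jitter += (d - jitter) / 16.0    # RFC 3550 smoothing gain of 1/16
        prev_transit = transit
    return jitter

# Illustrative data: 20 ms packet spacing over a wobbly path.
send = [i * 0.020 for i in range(6)]
delays = [0.050, 0.051, 0.048, 0.060, 0.049, 0.052]
recv = [s + d for s, d in zip(send, delays)]
print(f"smoothed jitter: {rtp_jitter(send, recv) * 1000:.2f} ms")
```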

But there is a more fundamental disconnect here between our voice-jitter researcher and his jitter-is-irrelevant nemesis: jitter does matter for some communications. It just depends on what problem is being solved, and it is careful definition of the problem that leads to dialogue.

Opinion: Getting “Beyond Fear”: A Security Expert’s Prescription for A Safer World

My review of Bruce Schneier’s new book Getting “Beyond Fear”: A Security Expert’s Prescription for A Safer World is now online at Security Pipeline.

I must admit, I had a difficult time with this one. I’ve reviewed other security books, including one by Bruce before, but those are usually “insider” books on the hard tech aspects of security (see “Perspectives on Computer Security” and “Under Lock and Key”, Dr. Dobb’s Journal). But Bruce took a different tack with this book: he wanted to talk to ordinary people about how they could deal with security. And he expressed to me privately that he was frustrated with how difficult it was to reach that audience.

And I could see why he had a problem. The marketing of security books is very masculine, very “secret agent man”, but open the covers and Bruce has written a very readable book about fear and security. Since secret agents and hackers are thought not to feel fear, the two don’t mesh.

Ironically, the audience I thought Bruce spoke best to inside the covers was women! Women are often neglected in discussions of security, because it is commonly viewed (even by women editors) that this subject is too “manly” and too “technical” to attract their attention.

But here we are, reading about security patdowns that seem like groping sessions and women terrorists from Chechnya blowing up airplanes. How women can be excluded from consideration or from the responsibility of informing themselves about security is beyond me – yet the publishing bias persists.

I originally tried to place a longer piece discussing security and the role of women in our society in the more mainstream press, simply because the tech audience is decidedly male. I hoped to reach the women and girls currently undergoing the humiliations of an overworked and underfinanced security grid. But after a string of rejections, I finally gave up and placed it (suitably modified) with an editor I know at a solidly male tech publication. I’m grateful to Mitch Wagner of Security Pipeline for allowing me to discuss Bruce’s book in the context of recent security debacles. I only hope that the guys reading it will pick up a copy for their wives, mothers, and girlfriends, and encourage them to read it because a woman said so. Because despite what the mainstream press editors will tell you, women still need to know how to evaluate security before it becomes a danger to them and others.