Microsoft’s Ultimate Throughput – Change the Compiler, Not the Processor

I like people who go out on a limb to push for some much-needed change in the computer biz. Not that I always like the idea itself – but moxie is so rare nowadays that I have to love the messenger despite the message. So here comes Herb Sutter, Microsoft architect, pushing the need for real concurrency in software. Sequential is dead, and it's time for parallelism. Actually, it's long overdue in the software world.

In the hardware world, we've been rethinking the von Neumann architecture for many years. SiliconTCP from InterProphet, a company I co-founded, uses a non-von Neumann dataflow architecture (state machines and functional units – not instruction code translated to Verilog, because that never works) to bypass the old-style protocol stack in software. An instruction-based general-purpose processor can never be as efficient for streaming protocols like TCP/IP as our method. Don't believe me? Check out Figures 2a-b for a graphic showing how long you wait with store-and-forward instead of continuous-flow processing – the loss for one packet isn't bad, but do it a million times and it adds up fast.
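To see why the per-packet loss compounds, here is a back-of-the-envelope sketch (with made-up numbers – the stage count and per-stage time are my assumptions, not SiliconTCP's figures) comparing store-and-forward, where each stage fully buffers a packet before the next stage starts, against continuous flow, where stages overlap like a pipeline:

```python
def store_and_forward_time(n_packets, n_stages, t_stage):
    # No overlap between stages: every packet pays the full path
    # cost before the next one starts, so total time is n * k * t.
    return n_packets * n_stages * t_stage

def continuous_flow_time(n_packets, n_stages, t_stage):
    # Stages overlap: after the first packet fills the pipeline
    # (k steps), one packet emerges per step thereafter.
    return (n_stages + n_packets - 1) * t_stage

t = 1e-6  # assumed: 1 microsecond per stage
k = 4     # assumed: 4 stages in the path

for n in (1, 1_000_000):
    saf = store_and_forward_time(n, k, t)
    flow = continuous_flow_time(n, k, t)
    print(f"{n:>9} packets: store-and-forward {saf:.3f}s, continuous flow {flow:.3f}s")
```

For one packet the two are identical; for a million packets the store-and-forward path takes four times as long under these assumptions – exactly the "it adds up fast" effect.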

It's all about throughput now – and throughput means dataflow in hardware. But what about user-level software applications? How can we get them the performance they need when the processor is reaching speed-of-light limits? At 7-8 GHz, a signal moving at the speed of light can cross a typical processor die in barely one clock cycle from end to end. Anyone stuck in sequential processing will be outraced by Moore's Law, multiple cores, and specialized architectures like SiliconTCP.
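The cost of staying sequential can be put in numbers with Amdahl's law – not something from this article, just the standard formula: if a fraction p of a program can run in parallel, the best speedup on n cores is 1 / ((1 - p) + p / n). The 90%-parallel figure below is purely illustrative:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    # Amdahl's law: the serial fraction (1 - p) caps the speedup
    # no matter how many cores you throw at the parallel part.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# Hypothetical program that is 90% parallelizable.
for cores in (1, 2, 8, 64, 1024):
    print(f"{cores:>5} cores -> speedup {amdahl_speedup(0.9, cores):.2f}x")
```

Even with a thousand cores, that remaining 10% of sequential code caps the speedup near 10x – which is why code that stays sequential gets left behind as core counts climb.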
