Fun Friday: AI Technology Investments, Failed Startups, 386BSD and the Open Source Lifestyle, and Other Oddities of 2020

First, William Jolitz and I wrote a comprehensive article entitled Moving Forward in 2020: Technology Investment in ML, AI, and Big Data for Cutter Business Journal (April 2020 – paid subscription). Given the pandemic and the upheaval in global economies, this advice is even more pertinent today.

Instead of moving from technology to key customers with an abstracted total addressable market (TAM), we must instead quantify artificial intelligence (AI) and machine learning (ML) benefits where they specifically fit within business strategies across industry segments. By grounding the analysis in axiomatic impacts, the fuzziness of how to incorporate AI, ML, and big data into an industry becomes a check on traditional investment assumptions.

For additional information on this article, please see AI, ML, and Big Data: Functional Groups That Catch the Investor’s Eye (6 May 2020, Cutter Business Technology Advisor).

TechCrunch presented its loser brigade list of 2020 failed startups in December of 2020 – although a few more might have missed the list by days. Some of these investments were victims of “the right startup at the wrong time”. Others were “the wrong startup at the right time”. And some startups were just plain “the wrong startup – period”.

We mourn the $2.45 billion which vanished into the eager pockets of dreamers and fools (we’re looking at you, Quibi – the pig that swallowed $1.75B of investment and couldn’t get any customers) and feel deeply for the Limiteds who lost money in one of the biggest uptick years in the stock market.

Thirty years have passed since we launched open source operating systems with 386BSD. Open source as a concept has been around for over 40 years, as demonstrated by the amazing GNU GCC compiler written by RMS. But until the mid-1990s, most software was still held under proprietary license – especially the operating system itself. The release of 386BSD spurred the creation of progeny open source operating systems and a plethora of open source tools, applications, and languages that are standard today. However, the “business” of open source is still much misunderstood, as Wired notes in “The Few, the Tired, the Open Source Coders”. Some of the more precious gems excerpted:

But open source success, Thornton quickly found, has a dark side. He felt inundated. Countless people wrote him and Otto every week with bug reports, demands for new features, questions, praise. Thornton would finish his day job and then spend four or five hours every night frantically working on Bootstrap—managing queries, writing new code. “I couldn’t grab dinner with someone after work,” he says, because he felt like he’d be letting users down: I shouldn’t be out enjoying myself. I should be working on Bootstrap!

“The feeling that I had was guilt,” he says. He kept at it, and nine years later he and Otto are still heading up Bootstrap, along with a small group of core contributors. But the stress has been bad enough that he often thought of bailing. …

…Why didn’t the barn-raising model pan out? As Eghbal notes, it’s partly that the random folks who pitch in make only very small contributions, like fixing a bug. Making and remaking code requires a lot of high-level synthesis—which, as it turns out, is hard to break into little pieces. It lives best in the heads of a small number of people.

Yet those poor top-level coders still need to respond to the smaller contributions (to say nothing of requests for help or reams of abuse). Their burdens, Eghbal realized, felt like those of YouTubers or Instagram influencers who feel overwhelmed by their ardent fan bases—but without the huge, ad-based remuneration.

Been there. Done that.

Not many Linux-come-latelies know this, but Linux was actually the second open-source Unix-based operating system for personal computers to be distributed over the Internet. The first was 386BSD, which was put together by an extraordinary couple named Bill and Lynne Jolitz. In a 1993 interview with Meta magazine, Linus Torvalds himself name-checked their O.S. “If 386BSD had been available when I started on Linux,” he said, “Linux would probably never have happened.”

Linus was able to benefit from our two-year article series in Dr. Dobbs Journal (the premier coding magazine of the day, now defunct in an age of GitHub), “Porting Unix to the 386”, which included how-to details along with source code in each article. That, coupled with Lions’ Commentary on UNIX (NB – the old encumbered Edition 6 version, and not Berkeley Unix), allowed Linus to cobble together Linux. We had no such issues, as we had access to both Berkeley Unix and a source code license from AT&T for our prior company, Symmetric Computer Systems, and hence knew what was encumbered and what was not (Lions was entirely proprietary). Putting together an OS is a group effort to the max. Making an open source OS requires fortitude and knowledge above and beyond that.

Jalopnik, one of my favorite sites, found the ultimate in absurd Figure 1 patents with this little gem of an article: Toyota’s Robocars Will Wash Themselves Because We Can’t Be Trusted. Wow, they really knocked themselves out doing their Figure 1, didn’t they? Womp womp.

And finally, for a serious and detailed discussion of how the pandemic impacted the medical diagnostic side, I recommend this from UCSF: We Thought it was just a Respiratory Virus. We were Wrong (Summer 2020). Looking back, it was just the beginning of wisdom.

Stay safe, everyone!

2020 AMD and Intel: The Grass is Greener on the Other Side of the Chip Business

AMD or Intel? 2020 is the Processor Battleground

AMD, the long-neglected stepsister to Intel, has done marvelously well in recent years, primarily due to Intel’s “meltdown” of trust in their flagship processor products and Intel’s delays in shipping new competitive 10nm chips. Saddled with ineffectual senior management and poor board control, Intel, the darling of the Wall Street set, sat wallowing in management paralysis and a moribund stock price until recently.

Dr. Lisa Su, AMD CEO

Meanwhile, AMD’s CEO Dr. Lisa Su has been instrumental in moving AMD from its Eeyore approach to life to that of a first-rate competitor in the chip space with its Ryzen, Radeon, and Epyc product lines. Dr. Su has not only changed AMD’s attitude – she’s also changed the entire competitive landscape with bold technology moves and strategic partnerships with companies such as Microsoft. Having dealt with the earlier AMD in the 1990s, when no one would make a decision and the C-suite was filled with ineffectual do-nothings, it has been refreshing to see capable management drive good engineering and product management.

In the last few years, AMD and Intel have swapped places. AMD, the driver in specialist processors, has gone full-bore into the vacuum left by Intel’s strategic blunders and broadened into general processors. Intel, in contrast, has made an old Intel revenue-enhancement approach “new again” by taking its general processors and specializing them for specific markets.

But believing the grass is greener on the other side of the chip business comes with its consequent perils. Continue reading 2020 AMD and Intel: The Grass is Greener on the Other Side of the Chip Business

Intel’s X86 Decades-Old Referential Integrity Processor Flaw Fix will be “like kicking a dead whale down the beach”

Image: Jolitz

Brian, Brian, Brian. Really, do you have to lie to cover your ass? Variations on this “exploit” have been known since Intel derived the X86 architecture from Honeywell and didn’t bother to do the elaborate MMU fix that Multics used to prevent it.

We are talking decades, sir. Decades. And it was covered by Intel patents as a feature. We all knew about it. Intel was proud of it.

Image: Jolitz, Porting Unix to the 386, Dr. Dobbs Journal January 1991

Heck, we even saw this flaw manifest in 386BSD testing, so we built our own virtual-to-physical memory mapping mechanism in software and described it in Dr. Dobbs Journal in 1991.
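
For the curious, here is roughly what “doing it in software” means on this processor family: walk the i386’s two-level page tables yourself instead of trusting the silicon’s shortcuts. This is a minimal illustrative sketch, not the actual 386BSD code from the Dr. Dobbs series, and phys_read32() is a hypothetical helper standing in for however the kernel reaches physical memory:

```c
/* Minimal sketch of software virtual-to-physical translation using the
 * i386's two-level page tables. Illustrative only -- not the 386BSD code. */
#include <stdint.h>

#define PG_PRESENT 0x1u          /* bit 0: entry is valid */
#define PG_FRAME   0xFFFFF000u   /* upper 20 bits hold the frame address */

/* Hypothetical helper: read a 32-bit word at a physical address
 * (e.g., via a recursive mapping or an identity-mapped window). */
extern uint32_t phys_read32(uint32_t paddr);

/* Translate 'vaddr' using the page directory whose base is in 'cr3'.
 * Returns the physical address, or 0 if the mapping is not present. */
uint32_t vtophys(uint32_t cr3, uint32_t vaddr)
{
    /* Bits 31-22 index the page directory (1024 entries of 4 bytes). */
    uint32_t pde = phys_read32((cr3 & PG_FRAME) + ((vaddr >> 22) & 0x3FFu) * 4);
    if (!(pde & PG_PRESENT))
        return 0;                        /* no page table mapped here */

    /* Bits 21-12 index the page table; bits 11-0 are the page offset. */
    uint32_t pte = phys_read32((pde & PG_FRAME) + ((vaddr >> 12) & 0x3FFu) * 4);
    if (!(pte & PG_PRESENT))
        return 0;                        /* page not present */

    return (pte & PG_FRAME) | (vaddr & 0xFFFu);
}
```

Once the kernel does this walk itself, it controls exactly what a mapping means – which is the point of handling the flaw in software.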

You could have dealt with this a long time ago. But it was a hard problem, and you probably thought “Why bother? Nobody’s gonna care about referential integrity.” And it didn’t matter – until now.

Continue reading Intel’s X86 Decades-Old Referential Integrity Processor Flaw Fix will be “like kicking a dead whale down the beach”

MetaRAM Busts RAMBUS Stranglehold?

Sand Hill Road envies RAMBUS. Oh, they don’t envy them their lawsuits, precarious business model or turbulent management structure. But they do envy them their ruthless monopoly of the high-speed DRAM market. RAMBUS successfully competed against the behemoths with a clever architectural enhancement, kept belief in their approach against huge odds, fought back as hard and dirty as the big boys and made licensing deals stick. They are survivors.

When RAMBUS had its IPO back in 1997, I was completing work on the first preliminary patent application for InterProphet’s SiliconTCP technology, while William began his hunt for investment. RAMBUS’s IPO was on the minds of many VCs, but surprisingly not in a good way. RAMBUS’s prior seven years had been fraught with changes in business model and personnel. Instead of setting up a fab, RAMBUS chose to license their technology. Finally, RAMBUS chose to make their stand on the basis of their patents. Don’t let them fool you — investors may crab about the need for “intellectual property protection”, but when it comes to playing with the big boys, they believe about as much in IPR as the tooth fairy.

RAMBUS has been remarkably successful in defending and enhancing their patents (and yes, I know about their “steering committee” games — coming from the OS side, I’ve seen Microsoft and others play the same games, even to the point of filing software patents on work predating them by decades). Essentially, they’ve played dirty like Intel, Hynix, and all the other guys the VCs said you could never win against. But it has been a very long, wild, and crazy ride to the payoff — too much for the “10x in 3 years” crowd.

But despite all of RAMBUS’s remarkable turbulence, it has been amazingly successful. During one incredible record-setting day in 1998, I listened to a top-tier VC say that he’d never want a single share of RAMBUS’s stock no matter *how* much money they made. He just hated them. Another top-tier VC rambled on about how “you could make a lot of money with a RAMBUS business model, but they weren’t interested in that”. What they really hated was that there wasn’t a single massive success where they could bow and take their winnings (like VMware in 2006, for example, but they didn’t invest in that one either, because it was run by a husband-wife team that believed open source was valuable — hmm, beginning to see a pattern). As Magdalena Yesil (at the time a partner at USVP) liked to intone to me, “Venture capitalists are more capitalist than venture these days”. Chip risk wasn’t as exciting when you could respin any company as an Internet venture and go public with no revenue. And semiconductor companies *are* risky.

Semiconductor companies are also the historical lifeblood of Silicon Valley — hey, that’s why it’s called “Silicon Valley” and not “Internet Valley”! So now we come to MetaRAM, an attempt to steal RAMBUS’s monopoly on architecture. According to Ryan Block of Engadget, “MetaRam uses a specialized ‘MetaSDRAM’ chipset that effectively bonds and addresses four cheap 1Gb DRAM chips as one, tricking any machine’s memory controller into using it as a 4x capacity DIMM.”

Is the technology innovative? Not likely — it sounds like a combination cache and bank decoder, which is not innovative in the least. In fact, you need 4x the number of components on the DIMM, which means 4x the number of current spikes and decoupling capacitors, even if you put the chips together in the same package. Because you have a fifth chip, you complicate things even more. There is no way you can approach the triple-zero (volume, power, cost) sacred to chip designers with such a design, because one single high-speed, high-capacity chip will eventually win out, given the proliferation of small expensive gadgets demanding the lowest of volume and power. In a world of gadgets like iPods, cellphones, laptops, PDAs, and the like, cost is very important but *not* the most important quantity. So RAMBUS doesn’t have a lot to worry about here.
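
To see why “bank decoder” is faint praise, here is a hedged sketch of the address split involved — my own toy illustration of the general idea, not MetaRAM’s actual MetaSDRAM design (which also buffers and retimes signals). Making four 1Gb (128 MB) chips answer as one 512 MB module is mostly a matter of peeling off two address bits:

```c
/* Toy illustration of the "bank decoder" idea: four 1 Gbit (128 MB)
 * DRAM chips presented as one 512 MB module. Not MetaRAM's design --
 * just the trivial address split that makes the concept unremarkable. */
#include <stdint.h>

#define CHIP_BYTES (128u * 1024 * 1024)   /* one 1 Gbit chip = 128 MB */

struct dram_select {
    unsigned chip;      /* which of the four chips to enable (0-3) */
    uint32_t offset;    /* address within the selected chip */
};

/* Split a flat module address into a chip select plus chip-local offset:
 * the top two address bits pick the chip, the rest pass straight through. */
static struct dram_select decode(uint32_t module_addr)
{
    struct dram_select s;
    s.chip   = module_addr / CHIP_BYTES;
    s.offset = module_addr % CHIP_BYTES;
    return s;
}
```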

Hynix has been fighting a losing battle against RAMBUS ever since getting hit with a whopping $306M patent infringement judgment in 2006 (since reduced to $133M), and RAMBUS is still going for more. These are the same guys who pleaded guilty in 2005 to a DOJ memory price-fixing scheme from 1999-2002 and paid a $185M fine. There is no love lost in the memory biz.

So where does little MetaRAM come in? When technology fails, maybe a clever business model will do. MetaRAM’s big claim to fame is cost reduction — not for gadgets or laptops, but according to Fred Weber, CEO of MetaRAM, for “personal supercomputers” and “large databases”. And who is the big licensee for this so-called technology? Why, it’s Hynix, of course, which announced it will make this lumbering memory module. They claim it will be lower power. I think I’d like an independent evaluation on that point, but it will probably be lower cost. Is it worth it? Given reliability considerations, that also remains to be seen. But the moral of this saga is simple — human memories are longer than memory architectures in this business, and the real puppet-master behind the throne (Kleiner Perkins) is sure to walk away with the money. I wish I could say the same for the customers.

When Video Kills Your Drive – QuickTime Waxes Track 0

Alright! Yes, sometimes I do read Slashdot when it’s amusing, and my son Ben (who feels pointing this stuff out to me is one of his sacred tasks) flagged the discussion of how you can create your own custom panic screen (or BSOD window) for OS X via an API. Joke panic screens have been around a long time, but the battle over “how much information to give people” has led to many not-so-amusing fights, especially when we were creating 386BSD releases and offered the Apple approach as an option – long before Apple switched to BSD and did the same thing we did. I like information and transparency, but not at the cost of frustrating and annoying lots of people…

But aside from this old debate, probably the funniest *real* programming error discussed (yes, Apple, you did it again) was the QuickTime capture bug a video engineer described: it merrily fills up the drive with video and then, when you’ve run out of space on the disk, overwrites allocated blocks. Yes, allocated blocks! And where did that leave our poor engineer? With no track 0. Even Apple couldn’t put that one back together again – they had to give him a replacement drive, and that’s a complete admission of defeat, because as everyone knows, Apple is very cheap. (Disclaimer – I have worked in manufacturing, and we are all very cheap in this regard, because hardware returns make a nasty entry on your income statement – show me a systems vendor that doesn’t care about hardware returns, and I’ll show you a vendor that won’t please shareholders.)
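
The guard that was apparently missing is elementary: ask the filesystem how much room is left before appending another capture buffer, and stop with margin to spare. Here is a minimal sketch using POSIX statvfs() – my illustration, not Apple’s code, and the 64 MB reserve is an arbitrary assumption:

```c
/* Guard a capture loop against filling the volume: check free space
 * before each write instead of blindly running the disk to zero.
 * Illustrative sketch only; the reserve margin is an assumption. */
#include <sys/statvfs.h>
#include <stdbool.h>
#include <stdint.h>

#define RESERVE_BYTES (64ull * 1024 * 1024)   /* arbitrary safety margin */

/* Return true if the volume holding 'path' can take 'want' more bytes. */
bool capture_may_write(const char *path, uint64_t want)
{
    struct statvfs vs;
    if (statvfs(path, &vs) != 0)
        return false;                 /* can't tell, so refuse to write */

    uint64_t avail = (uint64_t)vs.f_bavail * vs.f_frsize;
    return avail > want + RESERVE_BYTES;
}
```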

But the question remains – is there another way to recover from a track 0 loss on a disk drive? Well, we faced the same problem at Symmetric Computer Systems years ago, and we recovered that disk, albeit not the way you might think…

Itanic Readies for Final Sinking – Multiflow and HP Tech Aren’t Enough

Alas, the “Itanic”, aka Itanium, is resurrected again, this time debuted by Intel as a “chip with two brains”. But the critics aren’t impressed, and to save money for senior management Itanium soirees, Intel middle managers are to be tossed into the cold big blue.

Well, this is no surprise to those chip-watchers who talked down Itanium from the beginning. While Dell and IBM have fled to calmer X86 waters with the likes of AMD, HP has been steadfast, selling 80% of the chips Intel has shipped.

Of course, many wonder why HP is such a firm holdout when everyone else is bailing. To understand how we got where we are, we actually have to ask the question “Where did the Itanium come from?” If you think it all sprang fully formed from Andy Grove’s head, you’re quite wrong. And you’d miss the story of long-ago acquisitions, agreements, and ambitions…

Microsoft’s Ultimate Throughput – Change the Compiler, Not the Processor

I like people who go out on a limb to push for some much-needed change in the computer biz. Not that I always like the idea itself – but moxie is so rare nowadays that I have to love the messenger despite the message. So here comes Herb Sutter, Microsoft Architect, pushing the need for real concurrency in software. Sequential is dead, and it’s time for parallelism. Actually, it’s long overdue in the software world.

In the hardware world, we’ve been rethinking von Neumann architecture for many years. SiliconTCP from InterProphet, a company I co-founded, uses a non-von Neumann dataflow architecture (state machines and functional units – not instruction code translated to Verilog, because that never works) to bypass the old-style protocol stack in software, because an instruction-based general processor can never be as efficient for streaming protocols like TCP/IP as our method. Don’t believe me? Check out Figures 2a-b for a graphic of how much you wait for store-and-forward instead of doing continuous-flow processing – the loss for one packet isn’t bad, but do a million and it adds up fast.
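
To make “it adds up fast” concrete, here is a back-of-the-envelope model. The numbers are illustrative assumptions (1500-byte packets at 1 Gb/s, three buffering stages), not InterProphet measurements:

```c
/* Toy model: store-and-forward buffers a whole packet at every stage,
 * while continuous-flow (dataflow) processing starts work as soon as
 * the header arrives. All numbers below are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    double wire_us = 12.0;   /* serialize 1500 bytes at 1 Gb/s: ~12 us */
    double stages  = 3.0;    /* e.g., NIC buffer, kernel stack, socket buffer */
    double hdr_us  = 0.5;    /* continuous flow waits only for the header */
    double npkts   = 1e6;

    /* Store-and-forward: each stage waits for the entire packet. */
    double sf_us = stages * wire_us;
    /* Continuous flow: stages overlap; only header waits are serial. */
    double cf_us = wire_us + stages * hdr_us;

    printf("per packet: %.1f us (store-and-forward) vs %.1f us (flow)\n",
           sf_us, cf_us);
    printf("per million packets: %.1f s vs %.1f s of accumulated waiting\n",
           sf_us * npkts / 1e6, cf_us * npkts / 1e6);
    return 0;
}
```

Under these assumptions the per-packet difference is about 36 µs versus 13.5 µs – negligible once, but tens of seconds of accumulated waiting over a million packets.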

It’s all about throughput now – and throughput means dataflow in hardware. But what about user-level software applications? How can we get them the performance they need when the processor is reaching speed-of-light limits? At 7-8 GHz, one clock cycle is barely enough time for a signal at the speed of light to cross a typical processor from one end to the other (a 7.5 GHz cycle is about 133 picoseconds, in which light travels roughly 4 centimeters). Anyone stuck in sequential processing will be outraced by Moore’s Law, multiple cores, and specialized architectures like SiliconTCP.

Misplaced Software Priorities

For a perspective from William Jolitz, co-developer of 386BSD, on the need to separate “innovation” from “renovation” in design, read Misplaced Software Priorities today. It may gore a few oxen – especially those who work in the architectural flatland of low-level software – but given the rapid outsourcing of this very same area to low-cost programmers in India and China, it might be time to listen to an alternative view from a long-time Silicon Valley developer and entrepreneur who’s done more for the acceptance of open source than all the pundits put together.

Of course, if someone wants to stay low-level, they can always learn Mandarin, right?

Tom Foremski Interviews Doug Engelbart

Doug Engelbart is a computer legend, but he is also still very much alive and has plenty to say. Tom Foremski of SiliconValleyWatcher had a poignant chat with him. The upshot – have the last 20 years been a failure?

Some snippets from Tom’s excellent interview:
” ‘How do you deal with society when its paradigm of what is right is so dominant?’ Doug Engelbart, the 1960s computer visionary asked me the other evening. It’s a question he has pondered many times over the past 20 years or so, ever since his research funding was taken away.

Mr Engelbart and his teams of researchers at the Stanford Research Institute (SRI) shaped the look and feel of the PC, as John Markoff chronicles in his latest book What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry.

Mr Markoff’s book raises the profile of Mr Engelbart, well known as the inventor of the computer mouse, and less well known for his seminal work in creating many of the concepts later found in the personal computer. Mr Markoff returns credit to where it is due.

What the book does not chronicle is how the rise of the PC killed funding for Mr Engelbart’s work.

By 1979, he had lost all funding from SRI because of unfavorable peer review.

‘The other research groups said what I was doing could be done better with microcomputers or through machine-based artificial intelligence. That was the dominant culture at the time. What people don’t realise is that there are many different cultures and not one is right.’ Mr Engelbart told SVW.

As a result of his experiences, he questions whether the past 20 or so years of his life have been a failure.

That’s how long Mr Engelbart has been trying to raise funding to continue his research into human machine interfaces and solving large, complex problems using networked software.

But the culture of our time has been unfavorable to his ideas of developing human-centric computer applications using one big powerful computer with many users. The paradigm of the PC revolution is that everyone gets to have a computer, no time-sharing needed.”

No, Photoshop is Not “real life”, Dave

Dave Pogue today reviews the Bushnell binoculars, and just can’t figure out why they’re so blurry. So I told him “it’s obvious”. Here’s why, for the rest of you:
1. They didn’t bother to adjust the focal-plane focus of the sensor to coincide with the focal point of the binocular eyepiece (i.e., a manufacturing defect).
2. To confirm this, adjust the binoculars slightly out of focus and take shots – I bet you’ll find that the shots get better, if the sensor is not fixed-focus.
3. If it doesn’t change at all, the sensor has a fixed focus, and that focus is set wrong. This is not easily adjustable by the user, but if you disassemble the binoculars and adjust the focus manually, you can correct the problem.

But wait – there’s more. I have a long list of errata on digital cameras – most recently the Canon SD200-300 on-camera editing issues, for example – discovered by us at ExecProducer over the course of handling production issues. So I’m quite familiar with these and other annoying issues (light-level problems, for example, and resolution issues) and with finding the best way of handling them. So another nit with Dave is a very basic one – using Photoshop is not “real life”, as anyone in serious astrophotography will tell you.