Sedate Sunday: Silicon Valley and Post-Cold War Innovation

I came across this essay on Silicon Valley’s ascendancy. It’s a bit wordy in places and only abstractly relates to Silicon Valley. But who can resist an article that merges IPR, Gramsci, Silicon Valley investment, and Bretton Woods?

I was amused, no matter how romanticized some of the assumptions. Come on, we all know that communism was really just another form of kleptocracy in disguise, just like the Prosperity Gospel, unbridled capitalism, and all the other scams. It’s the human condition writ large.

Scams work by promising people things they don’t merit or deserve in return for becoming their trolls, fan-boys, minions, and various minor demons. At least Maxwell’s demons did some undeniably important work, but most of these lesser types from the Stygian Depths reject pile don’t want to work (hence the “merit” part I mentioned), nor are they part of the in-group (hence the “deserve” part). They’re also none too bright as a rule. But they are useful in aiding the ascent to substantial power and wealth, primarily by flooding the airwaves and empty streets with bellowing monsters, which in turn gets covered by a lazy press corps as a meaningful “event” to be taken seriously by “those in charge”.

Technology has certainly brought down the costs of this well-established mechanism. You don’t have to print pamphlets to get attention. You can motivate the mob even more cheaply with Facebook ads targeted to any feeble-minded demographic, or pull off in-your-face Twitter placement with a word from the Big Twit himself.

Honestly, it makes me long for the good old days of boardroom shenanigans, when William and I pitched hard tech companies. And yes, they were just as misogynistic, narrow-minded, and assholish then as now. That hasn’t changed.

It’s just that back then there were still rivals, rules, and relationships to manage on the SV investment side. So William and I had a fighting chance. And fight we did. Sometimes…sometimes we made a success — before anyone caught on. Those were amazing times.

Now writers view startups as some kind of historical media retcon — a rather odd combination of Highlander, Fawlty Towers, and The Big Bang Theory (no women allowed, folks, unlike real life). William, who handled acquisitions for Tandem at one point, also had a fondness for Barbarians at the Gate, but that’s East Coast, not West Coast. And despite what folks will tell you, all those hagiographic movies about SV are so ridiculous and boring that I just don’t bother.

But historical fiction about SV will continue to be popular, especially with a polisci or econ twist. So go ahead and imbibe this one, especially the amusing views of open source development and startups:

“Within even the very early culture of Silicon Valley, a distinctive tension could be discerned between the “hacker ethic”—with its commitment to entirely free and open information, born as it was in a university laboratory—and the entrepreneurial drive to protect intellectual property. This was not a superficial short-term contradiction, but a defining productive tension that continues to animate the entire domain of networked and computer-driven social and economic relationships.”

Gilbert and Williams, How Silicon Valley Conquered the Post-Cold War Consensus

On to one of my personal pet peeves — there was no hacker ethic as described by the authors back when we were putting together various technologies for the Internet and Berkeley Unix prior to the early 2000s. The very concept of a hacker having any ethics is so laughable I wonder that any reputable journalist can type the words without gagging. We were in it for the fun, the money, and kicking over apple carts. Anything else someone tells you is a sales pitch.

Not to say there weren’t hackers back then. Of course there were. John Draper, aka Captain Crunch, was one such example. Back in the 1970s and 1980s, one could still get access to all the telecommunications and tech docs in public libraries and, with a bit of cleverness and elbow grease, hack pay phones, computers, and all sorts of primitive networks. Security was an afterthought in those days. Security is still an afterthought now. However, it wasn’t all fun and games. John was always followed around by men in suits and shiny black shoes at conferences, William noted.

Even 386BSD, which through Dr. Dobbs Journal articles and releases birthed the open source operating system (even Linux used the articles’ 386 source code supplied with every issue), was based on a viewpoint very different from the present-day notion that everything should be “free”. Berkeley Unix had been licensed for over a decade, yet the vast majority of the works which comprised it were not proprietary. It was inevitable that those remaining proprietary code remnants would eventually be removed and replaced.

Yes, the copyleft and RMS were talked about a lot back then, with the long-awaited HURD OS expected to roll over everything in the universe, and then Marxism would prevail! Gosh, I can barely type that while laughing. And yes, they really did believe they were some kind of Second Coming of the Open Source Proletariat before Bernie Sanders came along and stole their thunder.

This invested belief in the copyleft actually allowed Berkeley and us to work quietly. Frankly, no one expected Berkeley to finally get around to removing most of the old version 6 Unix detritus.

Even William’s and my prior company, Symmetric Computer Systems, contributed code on disk drive management. And William and I contributed the source code for the 386 port, making Berkeley Unix actually usable.

During this time, I really enjoyed writing the Source Code Secrets: Virtual Memory book with William, based on the virtual memory system from CMU. The CMU Mach project provided the key: a new approach to virtual memory that permitted jettisoning the aging virtual memory system of a decade prior. It’s a nice piece of work that is much underappreciated.

And of course, when the unencumbered incomplete release was made public, we got creative and wrote entirely new modules to fill in the missing pieces for the releases.

But working on open source and working on proprietary intellectual property are not antagonistic, as the authors would have it. One of my proudest moments was getting my patents granted for InterProphet’s low-latency protocol processing mechanism and term memory.

The key is understanding what you owe to others and what you owe to yourself.

Berkeley Unix was a long-term project that collected the works of many people. Berkeley handled the release mechanics and integration. Sometimes they did new work, but not always. It was research, mostly paid for by the government. And that means you and me. 

William and I did the port to the 386, contributed code, wrote published articles, and devised new work as a research project. While we received no funding from Berkeley, we did have a lot of fun.

InterProphet, in contrast, was a 1997 startup focused on reducing network latency using a dataflow architecture. Our innovations were funded, we had employees and an office, and we built the prototype and production boards. We developed the drivers and support software. We paid for really expensive proprietary chip design tools.

And we filed patents and held trade secrets. Intellectual property protection was a given in this work. (A bit of advice here: If your engineers decide to deal with bugs in their software by sending source code to the vendor, put a stop to it immediately. It causes no end of problems later.)

We had an obligation to the investors at InterProphet. And we kept our deals with that company. Just as William and I did with Symmetric Computer Systems back in the 1980s. Technology innovation was valued — at least enough so we could get another startup off the ground. It required due diligence and careful maintenance.

The mistake in many “historical” analyses of Silicon Valley innovation lies in conflating the technology innovation of the pre-2000 era with the non-innovative “free stuff” of the post-2000 period. Investment strategies were completely different. Business structures were different. Even financial structures pre- and post-IPO changed markedly. They’re not comparable.

There is nothing “free” in using Facebook, or Twitter, or Google News, or Apple Maps, or a plethora of other websites. And that is by design.

These websites and applications are intended to go “viral”. They must lure in an unsophisticated customer and make the site “sticky” so they can be tracked. Gosh darn, that’s all it was and is about. No innovation required. In fact, invention and innovation were derided. As John Doerr noted back then, it was “renovation, not innovation” that was king. 

And as the author notes, anything related to manufacturing was sent off to China. No more chip investments. No more hardware investments. No more of that “risky” tech innovation. It had all been done. 

I don’t usually call out specific VCs from that time, but John Doerr and Kleiner deserve it for singlehandedly killing an entire generation of technology with a cynical investment strategy. Special mention goes to Google, Apple, and Intel for corralling open source operating system innovation to maintain their profits.

So John and KPCB, and the tech monopolies as runners-up — I salute you.

People went hunting for content to populate those websites. YouTube, for example, grabbed the few popular short videos circulating on the web and put them on the site just to appear as if it were being used — until, through relentless press, it actually was.

Customer acquisition dollars were high. A flip was six months.

Content was available in many ways. As the printed press conglomerates strove to grab eyeballs, they inadvertently gave their content away while cratering their traditional print advertising dollars. Aggregators glommed onto that content, manipulating the views towards paid ads and “curated” experiences. Video and music content was pirated as well, but entertainment media executives had been down this road many times before, and hit hard with copyright lawsuits. 

Databases of many kinds were publicly available as well, from geolocal map data to astronomy datasets. With that richness of information, the sky was the limit for people putting a front-end on the information. And so it is today.

I remember when Amazon was first funded as a bookstore. I bought a book — a Harry Harrison Stainless Steel Rat book, as I recall. One of the VCs back then gave me the dark side sell at an investment event: it was all about knowing what you look at, what you want, what you need, and putting that in front of you so you buy it. And Amazon takes a cut all the way to the bank. Privacy? Who cares.

It took Amazon six years to turn a quarterly profit.

Think about that. Six years losing money. When a VC starts demanding quarterly profits, dig up Amazon’s pro formas.

Fun Friday: Funding in Transition and Mammalian Distributed Memory Storage

As inflation continues to take its toll on everyone’s investments as well as steak dinner (psst – get the rotisserie chicken at Costco instead), Silicon Valley is clearly in a state of transition. Startups have been told to tighten their belts financially. Layoffs in big tech companies have begun. “Growth” ventures are failing to get follow-on funding, primarily in the consumer space and media (in Substack’s case, they’re also proving the old adage that no one pays attention to writers).

But for every easy money gambit that’s falling out of the sky, there is hope for the dreamer and rogue. Venture firms are still collecting money hand-over-fist from desperate Limiteds eager to get some return with the stock market slowing. Folks with money want to make more money. There are lots of them.

Of course, this doesn’t directly help the small entrepreneur. Big Venture (TM) doesn’t fund the small fry inventing neat technologies anymore — they have too many Series D unicorn mouths to feed. Big tech companies are no longer a safe bet — they may fire you and escort you unceremoniously out the door without any warning. In hard times, loyalty is not its own reward.

But there are a lot of individual investors out there who can drop $1M on a neat tech idea. All the startups William and I founded started with a dream, some code, and a handshake during lousy economic times. They were funded precisely because the easy money from scams and gambits had evaporated.

So if you’ve got a good hard tech project, now may be the best time to go for it. The cash is still plentiful. Just play it cool. It worked for us. It can work for you.

Researchers tracked neural activity across a whole mouse brain to determine which areas were involved in storing a specific memory. Many brain regions found likely to be involved in encoding a memory (top) were also involved in recall upon reactivation (bottom).
Credits: Image courtesy of the Tonegawa Lab/Picower Institute. Read the article!

And speaking of distributed memory, a new study from MIT describes the mammalian brain as storing memory not densely in a few regions, but loosely across many regions of the brain. This makes sense in a way. It’s a lot easier to completely lose a memory if it’s held in one or a few locations than if it’s spread throughout the brain. Also, storing memories in larger “chunks” would waste a lot of storage space, since memories vary in size. Indirect references to each memory element, even if a few are lost, are more efficient than directly physically mapping a memory.

It does explain the dreamlike aspect of memories, doesn’t it? And also perhaps memories which are completely wrong but feel entirely real and true. Likely we lose a lot of these references that fill in some of the blanks over time. Associated elements, like smell, can track back along a pathway to a memory to give the gist of it, but it may be only a shadow of what was actually recorded.

But is the brain’s memory sparsely allocated as well? It may well be, given this highly distributed storage across many parts of the brain. Sparse allocation is common in operating systems because it is usually faster and overall more efficient. But it can use more total memory than a densely allocated mechanism if most of the elements contain non-repeatable data. Are most of our memories just collages of a few meaningful pieces and a lot of filler? Perhaps dreams look odd precisely because they are just stray strands of sparse referents to redundant memories garbage collected by the brain and reallocated for use.
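
To make the analogy concrete, here is a toy sketch in C (my own illustration, not anything from the MIT study) of a memory held as sparse, indirect references to fragments. Losing a few fragments degrades recall to a gist rather than destroying the record outright, which is the property the distributed-storage result suggests.

```c
#include <stdio.h>

/* Toy illustration (mine, not the MIT study's): a "memory" stored as
 * sparse, indirect references to fragments. Any slot may be NULL (lost),
 * and recall simply skips the holes, returning a gist instead of failing
 * outright, unlike one dense block that is either all there or all gone. */

#define FRAGMENTS 6

typedef struct {
    const char *fragment[FRAGMENTS];   /* sparse: NULL means "lost" */
} memory_t;

/* Recall whatever fragments survive. */
static void recall(const memory_t *m)
{
    int found = 0;
    for (int i = 0; i < FRAGMENTS; i++) {
        if (m->fragment[i] != NULL) {
            printf("%s ", m->fragment[i]);
            found++;
        }
    }
    if (found)
        printf("(%d of %d fragments recalled)\n", found, FRAGMENTS);
    else
        printf("(nothing recalled)\n");
}

int main(void)
{
    /* A beach memory with two fragments already lost. */
    memory_t beach = {{ "sand", "salt air", NULL, "gulls", NULL, "sunburn" }};
    recall(&beach);   /* prints the surviving gist, not an all-or-nothing record */
    return 0;
}
```

The trade-off is the one noted above: the table of references costs extra space when most fragments are unique, but it buys graceful degradation.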

To dream. Perchance to sleep. Now that is the question.

Intel Ouroboros: Pat Gelsinger Returns to Build the Future

Classical Ouroboros. Wikipedia.

Pat Gelsinger is a technologist’s technologist. He worked on the 386 and 486 processors. We referenced the book he and Crawford wrote, Programming the 80386, for our “Porting Unix to the 386” series in Dr. Dobbs Journal in the early 1990s and the development of 386BSD. It was a seminal processor, and seminal work, that helped launch the open source operating system movement.

Yet Pat didn’t stay to retire with laurels at Intel. After many years battling for Intel’s future, he left to head EMC and, later, VMware. Now he’s been brought back to Intel as CEO, effective 15 February 2021. Why?

In a nutshell, while Gelsinger was off dabbling in storage technologies and cloud services, Intel was burning through every single technology advantage people like Gelsinger had built. Now, Intel is facing a reckoning, and needs to build a future again.

And that future depends on people with technical and domain skill, like Pat Gelsinger. 

This was a bitter pill for Intel’s Board of Directors and executive team to swallow. But, as Baron Mordo said, “The bill comes due.”

The roots of this squandering of the future lay not in technology, but in contempt for technologists. Risk-takers in both strategic and startup investment in the 1990s and 2000s saw the proliferation of new approaches as “chaotic”.

InterProphet SiliconTCP board. 1998.

I sat in an office of a top-tier VC firm on Sand Hill Road in the late 1990s and listened to the “smart money” partner complain about how their investments in ATM were being disrupted by InterProphet’s SiliconTCP low-latency chip — as if I owned the burgeoning TCP/IP technology and was personally damaging their investments with a few prototype boards, a handful of working FPGAs, and some Verilog.

TCP/IP was present in Berkeley Unix by the mid-1980s, and used in datacenters throughout academia and government. As Vint Cerf himself noted, it was a good enough solution to get packets from one point to another.

TCP/IP as an “ad hoc” technology was good enough to take out OSI, ISDN and ATM. I thought it was wiser to surf the tsunami instead of railing against it. That just bred resentment.

I sat in corporate offices in the 1990s and 2000s and heard complaints about how open source was overtaking proprietary software stacks, ruining their projections and their business.

Berkeley Unix was a feeder of innovation from the early 1980s. True, it was not a viable competitor to proprietary OS stacks until we launched 386BSD in the early to mid-1990s. From that open source stack, backed by Dr. Dobbs Journal, sprang a whole host of competitors to the proprietary software industry, including Linux.

Open source kernels like 386BSD and its many progeny would not have made inroads had there not been a wealth of innovation already present for these groups to mine — innovation that was neglected, minimized, or attacked by established proprietary players.

But up to the point we released 386BSD publicly, everyone underestimated us. It couldn’t be done. It wouldn’t be done. But it was done. I knew it could be done.

I sat in a room in the early 2000s as a VP at Microsoft complained about how open source was a threat and how, looking right at me, they had gathered information on everyone involved and their families. As if developing the open source OS created some kind of ominous fifth column of open source software subverting their eternal rights to OS glory. It was…unpleasant. It was also incredibly, horribly damaging personally.

I listened in the mid-2000s as a VC “sympathetically” told us that we’d never get funding again after InterProphet. Not because we’d done anything wrong. We met our commitments. We built things. But because they didn’t want innovation and the risks that came with it. And their way to kill the message was to kill the messenger.

“The bill comes due.”

The resentment in the 1990s and 2000s towards new ideas and the creation of new products was intense. All they could see was damage to their five-year technology plans and prior investments. The idea of hedging your bets was anathema, because that implied they couldn’t control the industry.

And mind you, it was about control. Control of technology. Control of innovation. Control of monetization. Control of creativity. Control of thought.

So here we are, in 2021. Intel squandered their future, slicing and dicing their monetization game. Intel’s “safe and sane” business relationship with Apple is now in pieces. In 2018 Apple maneuvered Intel into taking out Qualcomm as a competitor. In 2019 Apple acquired Intel’s smartphone modem tech and developed their own. In 2020 Apple introduced the M1 as a competitor to the high end X86 line. And that’s just one customer. The vultures are circling. Intel lost control.

Now Pat Gelsinger has agreed to come back. How will he pick up the pieces of a damaged company? I assume he’d only return if he had broad latitude in restructuring, hiring and firing. He’ll investigate interesting acquisition targets that offer a path forward for Intel. And he’ll look closely at how rivals like AMD under Dr. Lisa Su have done so well while Intel foundered.

Intel ouroboros. Pat is back at the beginning. It remains to be seen how he creates a future for Intel once again.

Apple Store “Bait and Switch” iPhone Battery Gambit: Apple Giveth and Taketh Away

Image: cnet.com

Beware the Apple Store “bait and switch” iPhone battery gambit. We faced this yesterday in Los Gatos, CA, where they tried to claim a working, original-owner iPhone 6s with a good screen was not eligible for their $29 battery replacement at the appointment because it had a slight bow in the frame.

Now, by this point everyone likely has some flaw in their old iPhone, whether a slightly dinged frame from being dropped or a minute crack or scratch under the frame. It’s normal wear and tear. And they likely didn’t have a problem replacing the battery before the discount was announced, when replacements were more costly and infrequent. But now, it’s an issue.

They did offer to sell an iPhone 6s for close to $300! This is a terrible price. Don’t go for it. This is what they mean by bait and switch.


Intel’s X86 Decades-Old Referential Integrity Processor Flaw Fix will be “like kicking a dead whale down the beach”

Image: Jolitz

Brian, Brian, Brian. Really, do you have to lie to cover your ass? Variations on this “exploit” have been known since Intel derived the X86 architecture from Honeywell and didn’t bother to do the elaborate MMU fix that Multics used to avoid it.

We are talking decades, sir. Decades. And it was covered by Intel patents as a feature. We all knew about it. Intel was proud of it.

Image: Jolitz, Porting Unix to the 386, Dr. Dobbs Journal January 1991

Heck, we even saw this flaw manifest in 386BSD testing, so we built our own virtual-to-physical memory mapping mechanism in software and described it in Dr. Dobbs Journal in 1991.

You could have dealt with this a long time ago. But it was a hard problem, and you probably thought “Why bother? Nobody’s gonna care about referential integrity.” And it didn’t matter – until now.
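
For readers who have never walked the 386’s two-level page tables by hand, here is a minimal sketch of the kind of software virtual-to-physical translation I mean. It is not the 386BSD code itself, just the general shape, and read_phys() is a hypothetical helper that reads a 32-bit word at a physical address.

```c
#include <stdint.h>

/* Minimal sketch (not the 386BSD code) of a software walk of the 386's
 * two-level page tables: the kernel itself resolves a virtual address
 * instead of trusting whatever the MMU has cached. */

#define PG_PRESENT  0x001u
#define PG_FRAME    0xFFFFF000u       /* mask for the physical frame address */

/* Hypothetical helper: read a 32-bit word at a physical address
 * (e.g., through a fixed or recursive kernel mapping). */
extern uint32_t read_phys(uint32_t paddr);

/* Translate vaddr using the page directory whose physical base is in cr3.
 * Returns 0 if the mapping is not present. */
uint32_t virt_to_phys(uint32_t cr3, uint32_t vaddr)
{
    /* Top 10 bits of the address index the page directory. */
    uint32_t pde = read_phys((cr3 & PG_FRAME) + (((vaddr >> 22) & 0x3FFu) << 2));
    if (!(pde & PG_PRESENT))
        return 0;

    /* Middle 10 bits index the page table named by the directory entry. */
    uint32_t pte = read_phys((pde & PG_FRAME) + (((vaddr >> 12) & 0x3FFu) << 2));
    if (!(pte & PG_PRESENT))
        return 0;

    /* Low 12 bits are the offset within the 4 KB page. */
    return (pte & PG_FRAME) | (vaddr & 0xFFFu);
}
```

The point of doing the walk in software is that the kernel, not whatever the MMU happens to have cached, is the authority on what a virtual address actually maps to.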


Oh Microsoft, Google is Calling – They Want the OS Space

With the announcement of Android, the Google open source mobile platform, there has been breathless talk of Google taking out the “locked” cellphone market with a Linux OS version. But we all know there are many open source Linux OS mobile versions already out there, so grabbing one and putting some stuff into it isn’t really that hard. In fact, one wag I know had this little joke:

How many Google Ph.D.s does it take to create a mobile operating system? Answer – 1000. One to download the OS, and 999 to add “Copyright Google”.

Hmm. Ever since the bright kids at Google were accused of appropriating code to build their social networking site Orkut (see Google Stole Code? Is Social Networking that Hard?), many techies have expressed a somewhat low opinion of Google’s technical expertise. That’s especially true since doing the actual work, with all those incredible resources in people and money, is probably a lot easier than “borrowing” somebody else’s “crufty” code and figuring it out. Sometimes, by the way, “crufty” means “I can’t figure out your code because I’m too stupid, so I’m going to run it down”. I got that a lot with 386BSD. But given the incredible brainpower Google has gathered, I would think they could not only eventually figure it out, but maybe do a better job from the beginning…

So if Google is so full of smart people, why, I am asked, did they just take a Linux distro and hack it? Why didn’t they give us a “from the ground up” genius OS?

Google is full of smart people, and Linux (and BSD and Windows, BTW) is not an optimal OS for mobile computing, and they know that. They also have the resources to completely change the paradigm of open source and mobile computing, but choose not to. That’s a fact.

But choosing a Linux distro and entering the mobile space is the perfect feint if a very large and very rich company has decided to take on Microsoft in the OS market, but worries that, given their rival’s monopoly, they would look like a loser if they competed directly. By cudgeling one of the Unix lookalikes and stuffing it into a small device, they can appear to be a real contender in a big space and work their way into the heart of Microsoft’s defenses.

So it is a smart strategy. Too bad people only think tactically nowadays – they’re missing the real battle.

Safari Goes Corporate – Jobs Announces Windows Version

Steve Jobs today announced that Safari, the annoyingly broken browser for the Mac, would soon be appearing on Windows systems as another annoyingly broken Windows application. Aside from the obvious joy at the thought that browser compatibility testers would no longer require a Mac to do their job, is there any other import to this announcement? Well, there are a few possibilities…

One possibility is that this is part of the strategy of moving ever closer into direct competition with Microsoft, although Safari isn’t in the same class as IE (or even Firefox). Safari was built on the KDE Project’s KHTML layout engine, BTW. For a number of years there was frequent sniping between the groups – the KDE volunteers felt Apple didn’t comply with the spirit of open source (they’d take their time in submitting changes, for example), while Apple complained they weren’t quick enough fixing their bugs. I always found the latter complaint hilarious, because Apple was notorious for never fixing their bugs even when they could. The source trees have diverged greatly since this initial schism (schism in open source? perish the thought), and there are dreams of somehow “mending the rift” through “unity” initiatives. But this is a lot of work for a questionable gain.

Open sourcerers are good at fractionating markets, but lousy at aggregating them. It’s just too easy for someone to run off and roll his own for a potential quick gain when he gets miffed (while whacking the older group). Fractionation totes up in the big book of life as a long-term cost hit on the entire open source market segment, and it is something Microsoft loves to see.

As Apple has migrated off the PowerPC, the distance between them and a Wintel platform (Mactel?) has diminished mightily. The “next step” (yes, a pun) is moving their software onto Windows to develop an audience, so the distance between Windows and Mac becomes a matter of taste. Apple knows that eventually they have to face the harsh economics of the Wintel world. They also know Microsoft has to move the Windows franchise intact to a very broad market, while they only have to appeal to a subset of that market.

Is it better to be a remora or a whale? If they can profit from the current Microsoft open-source obsession (you know, “Get Linux”), they can do very well taking bites out of Microsoft’s market, since Microsoft prefers to fixate on one enemy at a time. If Apple gets too troublesome, Microsoft can always buy it. Of course, maybe something in that Microsoft-Apple agreement signed years ago makes this a lot easier than one might think (although nothing is ever easy around Steve Jobs).

GPLv3 – Yes, You Can Run DRM (If You’re Very Very Sneaky)

GPLv3 and DRM. Yes, we’ve heard it all before. The license says you can’t use it (essentially, if you use it, you have to show the code, which means people can remove it). The advocates from the Linux Foundation and their mouthpieces say you can. It reminds me of the ROTC cadet shrieking “Remain calm, all is well” at the rioting crowd in Animal House right before he’s trampled. Meanwhile, open source followers seem to be trapped in the alleyway…

So, does it ban DRM or doesn’t it? It does, but there is a big loophole, and Microsoft (and a few of us old hands at open source – after all, we helped invent it) know it.

Models, Simulations and Bugs, Oh My!

A recent discussion on e2e focused on the efficacy of mobile/wireless simulations. You see, in the world of computer academia, simulations are de rigueur for getting a paper through the peer review process, because they can provide you with lots of neat numbers, charts, and diagrams that look nice but may mean absolutely nothing in practice. But “in practice” means applied (or, horrors, development) work, and that’s usually met with disdain in the academic world (see Academics versus Developers – Is there a middle ground?). In other words, blue sky it but never, never build it if you want to get a paper approved.

Simulations and models are an important tool in understanding the behavior of complex systems, and they’re used in most every scientific discipline today. But there’s a delicate distinction between a model of an environment and using the model as the environment — one that is often lost in the artificial world of networks and operating systems.

Academics versus Developers – Is there a middle ground?

Jim Gettys of One Laptop per Child is engaged in a furious discussion on the networking/protocol list as to whether academics should take responsibility for reaching out to the Linux community and maintaining their own work within the Linux code base. His concern is that networking academics, when they do bother to test their pet theories, use such old versions of Linux that it becomes infeasible to integrate and maintain this work in current and later versions. The flippant academic response is usually of the “we write papers, not code” variety (which isn’t precisely true, and actually brings into question the relevance of said papers and the claimed work that stands behind them).

As Jim says himself, “If you are doing research into systems, an academic exercise using a marginal system can only be justified if you are trying a fundamental change to that system, and must start from scratch. Most systems research does not fall into that category. Doing such work outside the context of a current system invalidates the results as you cannot inter compare the results you get with any sort of ‘control’. This is the basis of doing experimental science.”

This is an old dispute, and one that has its roots in the creation and demise of Berkeley Unix (BSD) distributions. So perhaps a little perspective is in order.