Fun Friday: Funding in Transition and Mammalian Distributed Memory Storage

As inflation continues to take its toll on everyone’s investments as well as steak dinner (psst – get the rotisserie chicken at Costco instead), Silicon Valley is clearly in a state of transition. Startups have been told to tighten their belts financially. Layoffs in big tech companies have begun. “Growth” ventures are failing to get follow-on funding, primarily in the consumer space and media (in Substack’s case, they’re also proving the old adage that no one pays attention to writers).

But for every easy money gambit that's falling out of the sky, there is hope for the dreamer and rogue. Venture firms are still collecting money hand over fist from desperate limited partners eager to get some return with the stock market slowing. Folks with money want to make more money. There are lots of them.

Of course, this doesn’t directly help the small entrepreneur. Big Venture (TM) doesn’t fund the small fry inventing neat technologies anymore — they have too many Series D unicorn mouths to feed. Big tech companies are no longer a safe bet — they may fire you and escort you unceremoniously out the door without any warning. In hard times, loyalty is not its own reward.

But there are a lot of individual investors out there who can drop $1M on a neat tech idea. All the startups William and I founded started with a dream, some code, and a handshake during lousy economic times. They were funded precisely because the easy money from scams and gambits had evaporated.

So if you’ve got a good hard tech project, now may be the best time to go for it. The cash is still plentiful. Just play it cool. It worked for us. It can work for you.

Researchers tracked neural activity across a whole mouse brain to determine what areas were involved in storing a specific memory. Many brain regions found likely to be involved in encoding a memory (top) were also found to be involved in recall upon reactivation (bottom).
Credits: Image courtesy of the Tonegawa Lab/Picower Institute.

And speaking of distributed memory, a new study from MIT describes the mammalian brain as storing memory not densely in a few regions, but loosely across many regions of the brain. This makes sense in a way. It's a lot easier to completely lose a memory if it's held in one or a few locations than if it's spread throughout the brain. Also, storing memories in large "chunks" would waste a lot of storage space, since memories vary in size. Indirect references to each memory element are more efficient than directly physically mapping a memory, and the memory survives even if a few references are lost.

It does explain the dreamlike aspect of memories, doesn't it? And perhaps also memories that are completely wrong but feel entirely real and true. Likely we lose many of these references over time, and the brain fills in the blanks. Associated elements, like smell, can track back along a pathway to a memory and give the gist of it, but that gist may be only a shadow of what was actually recorded.

But is the brain's memory sparsely allocated as well? It may well be, given this highly distributed storage across many parts of the brain. Sparse allocation is common in operating systems because it is usually faster and more efficient overall, though it can use more total memory than a densely allocated mechanism if most of the elements hold unique, non-repeating data. Are most of our memories just collages of a few meaningful pieces and a lot of filler? Perhaps dreams look odd precisely because they are stray strands of sparse references to redundant memories the brain has garbage-collected and reallocated for use.
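The operating-system side of that analogy is easy to sketch. Here's a toy sparse "address space" in Python (purely illustrative, an OS analogy rather than any model of the brain): backing storage is created only when a location is first written, so two far-apart writes cost two small chunks instead of the whole dense range, and untouched space reads back as filler.

```python
CHUNK = 4096  # bytes of backing storage allocated at a time

class SparseSpace:
    """A sparsely allocated space: chunks exist only where data was written."""

    def __init__(self):
        self.chunks = {}  # chunk index -> bytearray; absent means "filler"

    def write(self, addr, value):
        c, off = divmod(addr, CHUNK)
        if c not in self.chunks:          # allocate backing store on demand
            self.chunks[c] = bytearray(CHUNK)
        self.chunks[c][off] = value

    def read(self, addr):
        c, off = divmod(addr, CHUNK)
        chunk = self.chunks.get(c)
        return chunk[off] if chunk else 0  # untouched space reads as filler

space = SparseSpace()
space.write(10, 42)           # one meaningful piece near the bottom...
space.write(3_000_000, 7)     # ...and one ~3 MB away
print(space.read(10), space.read(3_000_000), space.read(500_000))  # 42 7 0
print(len(space.chunks))      # only 2 chunks (8 KB) back the ~3 MB span
```

The trade-off in the paragraph above shows up directly: the index table and per-chunk overhead cost more than a dense array would if nearly every location held unique data, but for a few meaningful pieces amid a lot of filler, sparse wins.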

To dream. Perchance to sleep. Now that is the question.

Intel Ouroboros: Pat Gelsinger Returns to Build the Future

Classical Ouroboros. Wikipedia.

Pat Gelsinger is a technologist's technologist. He worked on the 386 and 486 processors. We referenced the book he and Crawford wrote, Programming the 80386, for our "Porting Unix to the 386" series in Dr. Dobb's Journal in the early 1990's and for the development of 386BSD. It was a seminal processor, and seminal work, that helped launch the open source operating system movement.

Yet Pat didn't stay to retire with laurels at Intel. After many years battling for Intel's future, he left to head EMC and, later, VMware. Now he's been brought back to Intel as CEO, effective 15 February 2021. Why?

In a nutshell, while Gelsinger was off dabbling in storage technologies and cloud services, Intel was burning through every single technology advantage people like Gelsinger had built. Now, Intel is facing a reckoning, and needs to build a future again.

And that future depends on people with technical and domain skill, like Pat Gelsinger. 

This was a bitter pill for Intel's Board of Directors and executive team to swallow. But, as Baron Mordo said, "The bill comes due."

The roots of this squandering of the future lay not in technology, but in contempt for technologists. Risk-takers in both strategic and startup investment in the 1990's and 2000's saw the proliferation of new approaches as "chaotic".

InterProphet SiliconTCP board. 1998.

I sat in an office of a top tier VC firm on Sand Hill Road in the late 1990’s and listened to the “smart money” partner complain about how their investments in ATM were being disrupted by InterProphet’s SiliconTCP low latency chip — as if I owned the burgeoning TCP/IP technology and was personally damaging their investments with a few prototype boards, a handful of working FPGAs and some Verilog.

TCP/IP was present in the mid-1980’s in Berkeley Unix, and used in datacenters throughout academia and government. As Vint Cerf himself noted, it was a good enough solution to get packets from one point to another.

TCP/IP as an “ad hoc” technology was good enough to take out OSI, ISDN and ATM. I thought it was wiser to surf the tsunami instead of railing against it. That just bred resentment.

I sat in corporate offices in the 1990’s and 2000’s and heard complaints about how open source was overtaking proprietary software stacks, and it was ruining their projections and their business.

Berkeley Unix was a feeder of innovation from the early 1980's. True, it was not a viable competitor to proprietary OS stacks until we launched 386BSD in the early to mid-1990s. From that open source stack, backed by Dr. Dobb's Journal, sprang a whole host of competitors to the proprietary software industry, including Linux.

Open source kernels like 386BSD and its many progeny would not have made inroads if there had not been a wealth of innovation already present to mine out by these groups — innovation that was neglected, minimized or attacked by established proprietary players. 

But up to the point we released 386BSD publicly, everyone underestimated us. It couldn’t be done. It wouldn’t be done. But it was done. I knew it could be done.

I sat in a room in the early 2000’s as a VP at Microsoft complained about how open source was a threat and how, looking right at me, they had gathered information on everyone involved and their families. As if developing the open source OS created some kind of ominous fifth column of open source software subverting their eternal rights to OS glory. It was…unpleasant. It was also incredibly horribly damaging personally.

I listened in the mid-2000’s as a VC “sympathetically” told us that we’d never get funding again after InterProphet. Not because we’d done anything wrong. We met our commitments. We built things. But because they didn’t want innovation and the risks that came with it. And their way to kill the message was to kill the messenger.

"The bill comes due."

The resentment in the 1990s and 2000s towards new ideas and the creation of new products was intense. All they could see was damage to their five year technology plans and prior investments. The idea of hedging your bets was anathema, because that implied they couldn’t control the industry.

And mind you, it was about control. Control of technology. Control of innovation. Control of monetization. Control of creativity. Control of thought.

So here we are, in 2021. Intel squandered their future, slicing and dicing their monetization game. Intel's "safe and sane" business relationship with Apple is now in pieces. In 2018 Apple maneuvered Intel into taking out Qualcomm as a competitor. In 2019 Apple acquired Intel's smartphone modem tech and developed their own. In 2020 Apple introduced the M1 as a competitor to the high-end X86 line. And that's just one customer. The vultures are circling. Intel lost control.

Now Pat Gelsinger has agreed to come back. How will he pick up the pieces of a damaged company? I assume he’d only return if he had broad latitude in restructuring, hiring and firing. He’ll investigate interesting acquisition targets that offer a path forward for Intel. And he’ll look closely at how rivals like AMD under Dr. Lisa Su have done so well while Intel foundered.

Intel ouroboros. Pat is back at the beginning. It remains to be seen how he creates a future for Intel once again.

Apple Store "Bait and Switch" iPhone Battery Gambit: Apple Giveth and Taketh Away

Image: cnet.com

Beware the Apple Store “bait and switch” iPhone battery gambit. We faced this yesterday in Los Gatos, CA where they tried to claim a working iPhone 6s with a good screen / original owner was not eligible for their $29 battery replacement at the appointment because it had a slight bow in the frame.

Now, by this point everyone likely has some flaw in their old iPhone, whether it's a slightly dinged frame from being dropped or a minute crack or scratch under the frame. It's normal wear and tear. And they likely didn't have a problem replacing the battery before the discount was announced, when replacements were more costly and infrequent. But now, it's an issue.

They did offer to sell an iPhone 6s for close to $300! This is a terrible price. Don’t go for it. This is what they mean by bait and switch.


Intel’s X86 Decades-Old Referential Integrity Processor Flaw Fix will be “like kicking a dead whale down the beach”

Image: Jolitz

Brian, Brian, Brian. Really, do you have to lie to cover your ass? Variations on this "exploit" have been known since Intel derived the X86 architecture from Honeywell and didn't bother with the elaborate MMU fix that Multics used to avoid it.

We are talking decades, sir. Decades. And it was covered by Intel patents as a feature. We all knew about it. Intel was proud of it.

Image: Jolitz, Porting Unix to the 386, Dr. Dobbs Journal January 1991

Heck, we even saw this flaw manifest in 386BSD testing, so we wrote our own virtual-to-physical memory mapping mechanism in software and wrote about it in Dr. Dobbs Journal in 1991.
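The shape of such a software mapping is straightforward to sketch. Below is a hedged illustration in Python of a two-level table walk in the style of the i386's paging (10-bit directory index, 10-bit table index, 12-bit page offset); the names and the sample mapping are mine for illustration, not the actual 386BSD code from the Dr. Dobbs series.

```python
PAGE_SHIFT = 12                      # 4 KB pages
PAGE_MASK = (1 << PAGE_SHIFT) - 1

def translate(page_dir, vaddr):
    """Walk a two-level table: directory index -> table index -> offset."""
    dir_idx = (vaddr >> 22) & 0x3FF          # top 10 bits of the address
    tbl_idx = (vaddr >> PAGE_SHIFT) & 0x3FF  # middle 10 bits
    offset = vaddr & PAGE_MASK               # low 12 bits

    page_table = page_dir.get(dir_idx)
    if page_table is None:
        raise MemoryError("page fault: no page table")   # would trap to the OS
    frame = page_table.get(tbl_idx)
    if frame is None:
        raise MemoryError("page fault: page not present")
    return (frame << PAGE_SHIFT) | offset    # physical frame + offset

# Map virtual page at 0x00401000 to physical frame 0x9A (illustrative values)
page_dir = {0x001: {0x001: 0x9A}}
print(hex(translate(page_dir, 0x00401ABC)))  # 0x9aabc
```

Doing this walk in software is exactly why the kernel, rather than the silicon alone, gets a say in what a virtual address is allowed to reference.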

You could have dealt with this a long time ago. But it was a hard problem, and you probably thought “Why bother? Nobody’s gonna care about referential integrity“. And it didn’t matter – until now.


Oh Microsoft, Google is Calling – They Want the OS Space

With the announcement of Android, the Google open source mobile platform, there has been breathless talk of Google taking out the “locked” cellphone market with a Linux OS version. But we all know there are many open source Linux OS mobile versions already out there, so grabbing one and putting some stuff into it isn’t really that hard. In fact, one wag I know had this little joke:

How many Google Ph.D.s does it take to create a mobile operating system? Answer: 1000. One to download the OS, and 999 to add "Copyright Google".

Hmm. Ever since the bright kids at Google were accused of appropriating code to build their social networking site Orkut (see Google Stole Code? Is Social Networking that Hard?), many techies have expressed a somewhat low opinion of Google's technical expertise. With Google's incredible resources in people and money, doing the actual work is probably a lot easier than "borrowing" somebody else's "crufty" code and figuring it out. Sometimes, by the way, "crufty" means "I can't figure out your code because I'm too stupid, so I'm going to run it down." I got that a lot with 386BSD. But given the incredible brainpower Google has gathered, I would think they could not only eventually figure it out, but maybe do a better job from the beginning…

So if Google is so full of smart people, why, I am asked, did they just take a Linux distro and hack it? Why didn't they give us a "from the ground up" genius OS?

Google is full of smart people, and they know that Linux (and BSD and Windows, for that matter) are not optimal OSes for mobile computing. They also have the resources to completely change the paradigm of open source and mobile computing, but choose not to. That's a fact.

But choosing a Linux distro and entering the mobile space is the perfect feint if a very large and very rich company has decided to take on Microsoft in the OS market, but is worried that, given their rival's monopoly, they would look like a loser if they competed directly. By cudgeling one of the Unix lookalikes and stuffing it into a small device, they can appear to be a real contender in a big space and work their way into the heart of Microsoft's defenses.

So it is a smart strategy. Too bad people only think tactically nowadays – they’re missing the real battle.

Safari Goes Corporate – Jobs Announces Windows Version

Steve Jobs today announced that Safari, the annoyingly broken browser for the Mac, would soon be appearing on Windows systems as another annoyingly broken Windows application. Aside from the obvious joy at the thought that browser compatibility testers would no longer require a Mac to do their job, is there any other import to this announcement? Well, there are a few possibilities…

One possibility is that this is part of the strategy of moving ever closer into direct competition with Microsoft, although Safari isn't in the same class as IE (or even Firefox). Safari was built on the KDE Project's KHTML layout engine, BTW. For a number of years there was frequent sniping between the groups – the KDE volunteers felt Apple didn't comply with the spirit of open source (they'd take their time in submitting changes, for example) while Apple complained they weren't quick enough fixing their bugs. I always found the latter complaint hilarious because Apple was notorious for never fixing their bugs even when they could. The source trees have diverged greatly since this initial schism (schism in open source? perish the thought), and there are dreams of somehow "mending the rift" through "unity" initiatives. But this is a lot of work for a questionable gain.

Open sourcerers are good at fractionating markets, but lousy at aggregating them. It's just too easy for someone to run off and roll his own when he gets miffed, for a potential quick gain (while whacking the older group). Fractionation totes up in the big book of life as a long-term cost hit on the entire open source market segment, and is something Microsoft loves to see.

As Apple has migrated off the PowerPC, the distance between them and a Wintel platform (Mactel?) has diminished mightily. The “next step” (yes, a pun) is moving their software onto Windows to develop an audience, so the distance between Windows and Mac becomes a matter of taste. Apple knows that eventually they have to face the harsh economics of the Wintel world. They also know Microsoft has to move the Windows franchise intact to a very broad market, while they only have to appeal to a subset of that market.

Is it better to be a remora or a whale? If they can profit from the current Microsoft open-source obsession (you know, “Get Linux”), they can do very well taking bites out of Microsoft’s market, since Microsoft prefers to fixate on one enemy at a time. If Apple gets too troublesome, Microsoft can always buy it. Of course, maybe something in that Microsoft-Apple agreement signed years ago makes this a lot easier than one might think (although nothing is ever easy around Steve Jobs).

GPLv3 – Yes, You Can Run DRM (If You’re Very Very Sneaky)

GPLv3 and DRM. Yes, we’ve heard it all before. The license says you can’t use it (essentially, if you use it, you have to show the code, which means people can remove it). The advocates from the Linux Foundation and their mouthpieces say you can. It reminds me of the policeman shrieking “Do not panic, all is well” at the rioting crowd in Animal House right before he’s trampled. Meanwhile, open source followers seem to be trapped in the alleyway…

So, does it ban DRM or doesn’t it? It does, but there is a big loophole, and Microsoft (and a few of us old hands at open source – after all, we helped invent it) know it.

Models, Simulations and Bugs, Oh My!

A recent discussion on e2e focused on the efficacy of mobile / wireless simulations. You see, in the world of computer academia, simulations are de rigueur for getting a paper through the peer review process, because they can provide lots of neat numbers, charts and diagrams that look nice but may mean absolutely nothing in practice. But "in practice" means applied (or, horrors, development) work, and that's usually met with disdain in the academic world (see Academics versus Developers – Is there a middle ground?). In other words, blue-sky it but never, never build it if you want to get a paper approved.

Simulations and models are an important tool in understanding the behavior of complex systems, and they’re used in most every scientific discipline today. But there’s a delicate distinction between a model of an environment and using the model as the environment — one that is often lost in the artificial world of networks and operating systems.

Academics versus Developers – Is there a middle ground?

Jim Gettys of One Laptop per Child is engaged in a furious discussion on the networking / protocol list as to whether academics should take responsibility for reaching out to the Linux community and maintaining their own work within the Linux code base. His concern is that networking academics, when they do bother to test their pet theories, use such old versions of Linux that it becomes infeasible to integrate and maintain this work in current and later versions. The flippant academic response is usually of the "we write papers, not code" variety (which isn't precisely true, and actually brings into question the relevance of said papers and the claimed work that stands behind them).

As Jim says himself, “If you are doing research into systems, an academic exercise using a marginal system can only be justified if you are trying a fundamental change to that system, and must start from scratch. Most systems research does not fall into that category. Doing such work outside the context of a current system invalidates the results as you cannot inter compare the results you get with any sort of ‘control’. This is the basis of doing experimental science.”

This is an old dispute, and one that has its roots in the creation and demise of Berkeley Unix (BSD) distributions. So perhaps a little perspective is in order.

Running the Microsoft Personnel Gauntlet

Ed Frauenheim of cnet discussed the difficulty of running the Microsoft personnel gauntlet, er, “puzzle”. Why are they so arrogant? Obvious answer – they’re a big fish. And some managers think that if their company is big, so are they and act accordingly. However, once they leave the “hive” they usually sink back into the ooze they emerged from in the first place.

When one of the Microsoft recruiters came for me back in the mid-1990's, I ended up hiring him to staff one of my funded startups. I recommend that startups in competitive times recruit a Microsoft recruiter – they're very good.

On the serious side, the simple reason that Microsoft has difficulty in hiring is their antipathy to anyone who has worked with open source. This “us versus them” mindset has caused them to lose out on very talented people and on new directions in research and development in operating systems.