AI Trends for 2024: What’s Old is New Again

There’s a lot of shock and awe in the AI space these days. And lots of money on the table. But through the sturm und drang, some trends are emerging. To level-set, I went back and reread our last published article together, Moving Forward in 2020: Technology Investment in ML, AI, and Big Data (William F Jolitz & Lynne G Jolitz, Cutter Business Journal, 7 April 2020).

Four years ago, AI was at a crossroads. When we looked a traditional value propositions in technology, where one went from a specific technology to a target customer in a high value sector to a broadened sector and use, AI was doing miserably. 70% of companies said their AI projects provided little to no benefit to their company. Only 40% of companies said they had made a significant investment in AI. The frustration lay with “products sold with ill-defined benefits” which led to “unsustainable revenue that plummets when customers become disillusioned from a tactical lack of sales focus”. We stated the key problem was “the startup’s sales focus no longer aligns with the customer’s strategic focus”.  In tech speak, they couldn’t figure out what to do with it and got disappointed.

We suggested what we called an “axiomatic” approach: “Instead of moving from technology to key customers with an abstracted TAM (Total Available Market), we must instead quantify AI and ML benefits where they specifically fit within business strategies across segment industries”. We then highlighted three areas to watch: surveillance, entertainment, and whitespace, while also discussing the issues with ad hoc architectures which potentially disrupt the cloud services costs and security. In terms of architectures, there is now more focus on data ownership and control, as well as reducing costs in the cloud. But it’s still very much the same as four years ago for most customers.

But the key prediction where we were literally “on the money” was our analysis of chaotic disruption of the market forwarded and funded by “super angels”. This was how companies like OpenAI spawned and spurred tremendous disruption in a very short timeframe: 

“Venture capital (VC) investments in ML/AI fixate on a startup’s ability to obtain go-to-market sales by disintermediating other vendors and to lock-up highly profitable (yet elusive) opportunities. The VC’s intent is startup validation and gauging threats to other vendors’ uncompetitive businesses that will drive the startup’s ability to gain partnerships and revenue shares. However, sometimes, the result is not what VCs would wholly desire but rather more like paralysis with no clear “win” — because the startup only partially engages the customers and does not succeed in displacing other vendors. To force the win, tactical deconstructing/reconstructing of AI/ML solutions around existing layers of edge and cloud platforms as an investment category is akin to desperately reshuffling poker chips on the poker table. This is best avoided. Industry disruption is inherently unstable. Like an ouroboros, it can abruptly turn from obvious low-hanging fruit targets to feeding off earlier successful targets undergoing a state of change.  

The potential for radically greater opportunities is more interesting than patiently maintaining course or  re-navigating the rough waters to see existing ventures through to a reasonable conclusion. This potential is the realm of super angels, self-funders, and leading edge “winners.” These individuals and groups see no disadvantage to riding a chaotic wave because they’ve gotten accustomed to being so out in front of theirthe self-competition within their newly chosen, ever-shifting “whitespace path.” 

However, the traditional VC process is disadvantaged by these groups because venture capitalists’ gut instincts based on the feel of the deal get whipsawed by the loss of bragging rights to ROI, limiting them from getting too far out beyond their headlights. Thus, chaotic disruption is a no-go zone for most. For those who decide to enter these perilous waters, the tendency to share risk across many partners leads to a kind of groupthink at odds with the fast moves and flexibility required of the super angels. 

As open source investments demonstrated, it’s a risky business consuming your own potential customers. In the AI chaotic disruption, all potential customers are considered targets: media, artists, writers, businesses.

In consuming the “long tail” of literature, art, whitepapers, business databases, and personal information and opinion on the Internet and then regurgitating it as facsimiles stripped of authorship and authority, companies like OpenAI and Google whipsawed established players. As we have seen, the rush of businesses and consumers to magnify this effect was phenomenal — and dangerous.

The intent was to rapidly drive paniced companies to sign exclusive agreements and become the dominant company in AI for the next half century. If it sound unbelievable, note we now have only a few companies which dominate search, content, and connection due to brand recognition and addictive use. It takes a lot of money to maintain an addition, or establish a new one. 

As the investment space is still suppressed due to poor conditions despite all the dry powder, there are a few bright spots. Battery investments continue to spark interest. Climate change companies surge and storm. Crypto was actually legalized by the SEC, because you can’t play with GameStop forever — so it’s time to jump into the big scams, kids. Space investments are, well, vast. AI plays a role in all of these.

But because of the chaotic disruption strategy that our billionaires strategized in Silicon Valley, AI now has the attention of everyone, from governments and military and NGOs to plain ordinary users. It doesn’t matter if AI is “lazy”. Even the IMF is jumping in.

Will AI benefit humanity? That’s out of my paygrade. William and I saw it had unique potential in many areas in 2020. That’s still true in 2024. I hope the chaotic disruption doesn’t prevent us from seeing some real benefits for the better.

Merry Monday: Venture Gets Antsy, Boom or Bust? Everyone goes Buggy over AI

Another Monday, another week of business excitement.

Venture investment firms are getting nervous as the projections for a slow IPO market in 2024 make fund exit profitability iffy as each fund matures. So what to do? Simple – roll the money into another fund that has no exit date and call it a day. And so, continuation funds, and their more desparate cousins, strip sales, are generating increased interest (subscription required, and I’m really sorry about that).

It’s hard to support a wealthy lifestyle before one is truly wealthy, but somehow VCs manage with their fees. But they want more. And they’re going to get it, by hook (clawback) or by crook (IPO).

It may be hard to believe, given most of these funds have ten year lifetimes, but even though we’ve gone through a lot of “booms” over the last decade, apparently they’ve chosen so poorly that they don’t have the exit they promised their Limiteds. In a more cynical moment, one might also wonder if the carry was so good they held off on distributions, hoping for more.

And then the pandemic hit. And then we had inflation grow. And now we are watching a world grow hotter in terms of climate and strife.

There are still optimists, however.

UBS Wealth Management just released a case study claiming that the 2020’s will look more like the 1990’s (without the early 1990’s recession) instead of some Roaring 20’s stock market insanity before folks jump off of buildings. For that alone, I must confess I’d rather consider a Clinton-style economic “Americans for a Bigger America” boom, as I don’t like heights.

In terms of technology investment, the 1990s was a good era for us personally.

We started off in 1991 (after two prior years of work with Berkeley with no real support) with the introduction of Porting Unix to the 386 in Dr. Dobbs Journal. Over the next two years, we painstakingly described our 386BSD port of Berkeley Unix from design to execution to distribution over the Internet.

  • 386BSD made manifest Stallman’s concept of “open source” as a means to encourage innovative software development.
  • 386BSD demonstrated that the Internet could be a viable mechanism for software distribution and updates instead of CDROMs and other hard media. 
  • 386BSD provided universities and research groups all over the world the economic means to finally conduct OS and software research using Berkeley Unix on inexpensive 386 PCs instead of minicomputers and mainframes. 
  • 386BSD spurred a plethera of new funded startups launched with a focus on open source online tools and support.

In sum, 386BSD broke the logjam on university research encumbered by proprietary agreements and spurred the growth of a new industry in Silicon Valley. 

Not bad for a research OS project that was disliked by Berkeley’s CS department and essentially moribund by 1989.

In 1997, we observed that Internet traffic wasn’t well-optimized. We launched and obtained funding for InterProphet, a low-latency TCP processor dataflow engine. In 1998 we went from concept to patents to prototype. We proved that dataflow architectures, a non-starter in the 1980s with floating point processing, was a viable means to effectively offloading TCP processing from the kernel to a dedicated processor, just as graphics was offloaded through use of dedicated graphics processors. We did this on a million-dollar handshake investment and a handful of creative engineers. Silicon Valley can be an amazing place.

Global strife and pain isn’t usually good for business. It was an age of “renovation, not innovation” the prior decade — hyper-focused on strip-mining extant technologies and vending rent-seeking fads — unequipped to deal with these unsettled times. 

But such times often spur interest in non-conventional problem-solving, opening a door to new technologies and risky solutions. Given the huge issues with climate change and what it spawns, we do need more “innovation, not renovation”. 

I guess I’m just an optimist.

Speaking of technology fads, as the smoke clears and the mirrors crack, will generative AI still be the savior Silicon Valley hopes it to be? As with all answers, it depends.

If one owns the datasets from which one mines the answers, likely “Yes”. Security and privacy issues are moot inside your datacenter, for the most part, assuming you actually invest in security. Cost reduction is a viable metric that businesses can use to determine the efficacy of AI independent of fads. Expect to see use cases that focus on support and customer effectiveness. We also should expect new solutions crafted out of analysis of highly complex areas, from drug development to climate modeling. 

However, generative AI has limits. The latest cut of a thousand critics, courtesy of Google, demonstrated that one could overload the AI generative variation responses to the point it begins to spew out actual training data using Internet data with a single word. Using an extraction technique that relied on an infinite request (do something “forever”), they achieved immediate results:

“After running similar queries again and again, the researchers had used just $200 to get more than 10,000 examples of ChatGPT spitting out memorized training data, they wrote. This included verbatim paragraphs from novels, the personal information of dozens of people, snippets of research papers and “NSFW content” from dating sites, according to the paper.”

I’m sure a new set of lawsuits based on infrigement are already in the works. Maybe even using ChapGPT to generate them. Who knows?

So let us send our thoughts and prayers to the poor VCs and happy lawyers. I haven’t seen this much eagerness for technology infringement lawsuits since the USL v UCB and Java v Everybody years