As the layoffs of the old guard continue here in Silicon Valley, the investment community and Big Tech ™ rush headlong into the wonderful world of AI. Every company and every startup now sees that brass ring, ready to anoint the Overlords of AI ™ (trademark pending, I assume).
Elon Musk, annoyed with his not-good-enough OpenAI involvement, is simultaneously railing against the perils of AI and announcing a new AI company called xAI — which will apparently tell us how the real world works with an exploration of “the true nature of the universe”. Heh.
I guess the Earth and the Solar System aren’t enough anymore. Nor is the Milky Way galaxy with everything within the Perseus and Scutum-Centaurus arms. Nope, now we’ve got to include the Hubble picture of all galaxies to get the maximum pitch potential. I guess the TAM is huge enough for even the greediest investor. I think…
Meanwhile, ChatGPT application growth has finally started to slow, as people rush to the new Threads app launched as a competitor to Twitter. Yes, folks, the dumbest site in all the Milky Way has been cloned by Meta (the Facebook fellows) as an add-on to their Instagram app.
It is a truth universally acknowledged that a site in need of boosting must be in want of a cloned competitor. In other words, the quickest path to development is usually to rip and piggyback open source code onto something else and cross your fingers as you go live. It may not be ready, but it will be running, kinda.
Meanwhile, OpenAI is being investigated by the FTC for its propensity to unleash “hallucinating” AIs on the world, leading to rampant lies and misleading statements. The FTC is also annoyed that the AIs were trained on copyrighted works, but honestly I think that’s an afterthought: no one cared when source code attribution was stripped away, or when content aggregators like Google News let you find the version of an article that didn’t sit behind a paywall. Why now? Maybe it just makes the filing more “human”.
The term hallucinating is a fascinating one for a piece of software, n’est-ce pas? It allows the company to elide responsibility for its software producing rotten results by implying the software is just some kind of person with a disability who should be treated with kindness, not legal threats. It also distances the company from this AI “person”, as persons are responsible only for themselves and no one else.
This is, in sum, a very weird attempt to extend the ragged edges of Section 230 of the Communications Decency Act to AIs, so that the AI software developer has neither control over nor liability for what the AI actually says.
If you recall, the act states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The legal shield was passed to allow individuals to comment and post items on websites like Facebook and Twitter — and even blogs like mine if I so desire — with no liability to the provider of the service for the words said. People say dumb things, so the thinking goes, so why punish the website operator? This little sentence made the Internet providers rich beyond the dreams of avarice.
By acting as if the AI itself is just another “information content provider”, and the developers and propagators of its output are merely providers of an interactive computer service, they distance themselves from liability for whatever it says.
So of course, everyone in Silicon Valley and beyond wants to extend it to AIs. Investors. Big Tech. Even Elon Musk.
That’s where the money is.