The Security Frustrations of Apple’s “Personal” Personal Computer: Device Access, Two-Factor ID, and 386BSD Role-Based Security

Recently, a Facebook friend lamented that he could not access his iCloud mail from a device bound to his wife’s iCloud account. He also expressed frustration with the security mechanism Apple uses to control access to devices – in particular, two-factor authentication. His annoyance was honest and palpable, but the path to redemption unclear.

Tech people are often blind to the blockers that non-technical people face because we’re used to getting around the problem. Some of these blockers are poorly architected solutions. Others are poorly communicated ones. All in all, the security frustrations of Apple’s “personal” personal computer are compelling, real, and significant. And they merit discussion.

One way tech folks get around Apple restrictions on email, for example, is to use multiple accounts. On an iOS device one can use multiple services and set up accounts for each of them. For example, most don’t know that Notes is actually a hidden email client that works via an IMAP fetch. Apple does use the appropriate protocols behind the curtain.
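To make the point concrete, here is a minimal sketch of the kind of IMAP fetch such a hidden client performs behind the curtain. The host name and credentials below are placeholders, not Apple’s actual configuration – substitute your own provider’s values:

```python
# Minimal sketch of an IMAP fetch, the protocol a "hidden" mail client uses.
# HOST, USER, and PASSWORD are illustrative placeholders (assumptions).
import imaplib

HOST = "imap.example.com"          # your provider's IMAP server
USER = "user@example.com"
PASSWORD = "app-specific-password"

def fetch_subjects(limit=5):
    """Log in over IMAP and return the subject headers of recent messages."""
    with imaplib.IMAP4_SSL(HOST) as conn:
        conn.login(USER, PASSWORD)
        conn.select("INBOX", readonly=True)
        _, data = conn.search(None, "ALL")
        ids = data[0].split()[-limit:]          # newest message IDs
        subjects = []
        for msg_id in reversed(ids):
            _, msg = conn.fetch(msg_id, "(BODY[HEADER.FIELDS (SUBJECT)])")
            subjects.append(msg[0][1].decode().strip())
        return subjects
```

Any mail-capable app doing this is, in effect, an email client – which is all “Notes as a hidden mail client” amounts to.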

In my case, I don’t even use iCloud mail. I use dedicated email accounts, with access mediated by a mail server in our datacenter cloud which we personally administer. So all this frustration doesn’t impact me. I don’t see it in my daily life. I’m blind to the impact on others.

But what if I, like most people using Apple devices, want to access iCloud mail from any Apple device? iCloud isn’t an ordinary heterogeneous service. It is actually bonded to a set of devices. This is the philosophy behind Apple’s “personal” personal computer. The assumption is that the customer has many devices, tightly held and only used by that single customer. If that’s so, it naturally follows that those devices will not easily permit access to a different iCloud mail account, because there is no need. The security won’t allow it. They expect you to set up another IMAP account, like Gmail, directly. They expect you to be a “techie”.

As an experiment in trying to understand this issue, I went to a Mac that was not bound to an iPhone I had in hand. Using “Find My iPhone”, I logged in with that Apple ID and password. It then showed the location of the iPhone sitting next to me. I then switched to the mail app within the browser, and it showed me the iCloud mail. All this on a Mac bonded to a different user. So I did get around the problem.

But it was non-intuitive. It was somewhat absurd. And it did reveal a security issue. Security by obscurity is a bug, not a feature.

I was unable to test this on other devices, such as an iPad, as I was pressed for time. But I’m pretty sure there are ways to get around this even on small devices. But really, seriously, is this sensible?

This entire conversation then segued into a discussion of two-factor authentication. In theory, two-factor authentication is quite straightforward. Since everybody has a phone and some other device, if someone who has cracked your password tries to access your account from a device that isn’t one of your bonded devices, the service sends an email or a text to another device known as yours to confirm it’s OK. Simple, right?
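The confirmation step itself is simple to sketch. Below is a minimal, illustrative model – not Apple’s implementation, and all names are made up – of issuing a short-lived code to a second device and verifying it on the way back:

```python
# Illustrative "mother may I" flow: issue a short-lived one-time code,
# deliver it out of band, and unlock only if the user echoes it back in time.
import secrets
import time

CODE_TTL = 300  # seconds a code stays valid (assumed policy)

def issue_code():
    """Generate a 6-digit one-time code and record when it was issued."""
    return f"{secrets.randbelow(1_000_000):06d}", time.time()

def verify_code(expected, issued_at, submitted):
    """Accept the login only if the code matches and hasn't expired."""
    if time.time() - issued_at > CODE_TTL:
        return False  # the second device answered too late
    return secrets.compare_digest(expected, submitted)

code, t0 = issue_code()
# ...the service texts or emails `code` to the trusted second device...
assert verify_code(code, t0, code)        # right code in time: access granted
assert not verify_code(code, t0, "xxxxxx")  # wrong code: access denied
```

The sketch also shows exactly where the scheme fails in practice: if the second device is lost, dead, or unknown, the user never sees the code and the timeout locks them out.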

Well, theory and practice are called that because thinking and living are really two different things. People live messy lives. Their phone may have been lost or forgotten or not charged up. They don’t know which device is the “mother may I” device.

The fundamental problem is that the nature of the security constraints of the Apple iPhone concept requires it to be hermetically sealed. (In contrast, Android is a leaky sieve, and it is quite vulnerable.) This is why the battles between Apple and the government over access to personal iPhones are so fraught: it really is all or nothing.

This is not the only use of two-factor authentication. It is required for a lot of services, such as Facebook, and is not bound to a particular device walled garden. But the issue of making sure you have access to both the primary device and the “mother may I” device is still there. You must have all your devices, old and new, standing ready for the occasional incursion. And you must check and update all two-factor authentication access points when you update devices. And this, my friends, is absurd.

I actually did a little work on security way back in the 1990s, when William and I came up with the concept of role-based security as an adjunct to the usual password mechanisms. We even wrote an article in Dr. Dobbs Journal about it, and implemented it in 386BSD Release 1.0.
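The core idea is easy to sketch: permissions attach to roles, and users hold roles, so access is a question of what role you are acting in rather than just what password you know. The roles, users, and API below are illustrative only, not the actual 386BSD Release 1.0 implementation:

```python
# Minimal sketch of role-based access checks layered on top of passwords.
# Role names, users, and the API are illustrative assumptions.
ROLE_PERMISSIONS = {
    "operator": {"read_logs", "restart_service"},
    "admin":    {"read_logs", "restart_service", "add_user", "edit_config"},
    "auditor":  {"read_logs"},
}

USER_ROLES = {
    "lynne": {"admin"},
    "guest": {"auditor"},
}

def is_allowed(user, action):
    """A user may perform an action only if some role they hold grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_allowed("lynne", "edit_config")       # admins may edit config
assert not is_allowed("guest", "restart_service")  # auditors may only read
```

The appeal as an adjunct to passwords is that a stolen credential only grants the privileges of the roles it carries, rather than all-or-nothing access.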

What were the gotchas? Security people didn’t like it because they were obsessed with crypto as a savior – which of course it wasn’t. The IT guys weren’t enamored because they liked administering passwords and didn’t think it was a problem to make people change the password all the time – even though people don’t actually do that, because they forget it, or they put it on a Post-it on their monitor or write it down in their wallet, and use “password” or “12345” as their password.

Two-factor authentication is just a “mother may I” sent to a separate device, practical now because we have lots more devices. But it fails when the device being asked isn’t available, blocking the user from work. It may not be great, but at this point, it’s really all we have.

Intel’s X86 Decades-Old Referential Integrity Processor Flaw Fix will be “like kicking a dead whale down the beach”

Brian, Brian, Brian. Really, do you have to lie to cover your ass? Variations on this “exploit” have been known since Intel derived the X86 architecture from Honeywell and didn’t bother to do the elaborate MMU fix that Multics used to avoid it.

We are talking decades, sir. Decades. And it was covered by Intel patents as a feature. We all knew about it. Intel was proud of it.

Heck, we even saw this flaw manifest in 386BSD testing, so we wrote our own virtual-to-physical memory mapping mechanism in software and wrote about it in Dr. Dobbs Journal in 1991.

You could have dealt with this a long time ago. But it was a hard problem, and you probably thought “Why bother? Nobody’s gonna care about referential integrity”. And it didn’t matter – until now.

Now a fix is going to be expensive. Why? Because all the OS patches in the world can’t compensate for a slow software path. We’re looking at 30% speed penalties, sir.
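A back-of-envelope model shows why the software path is slow: if every kernel crossing now pays an extra cost (for example, a TLB flush), syscall- and interrupt-heavy workloads take the brunt. The numbers below are illustrative assumptions, not Intel’s measurements:

```python
# Toy model of an OS-level fix that adds work to every kernel entry.
# All numbers are illustrative assumptions, not measured data.
extra_cycles_per_entry = 600      # assumed added cost per kernel crossing
entries_per_second = 500_000      # assumed syscall + interrupt rate (I/O heavy)
cpu_hz = 3_000_000_000            # a 3 GHz processor

# Fraction of each second now burned on the mitigation alone.
overhead = extra_cycles_per_entry * entries_per_second / cpu_hz
print(f"extra CPU time lost: {overhead:.0%}")  # 10% at these assumed rates
```

Push the entry rate or per-entry cost higher, as heavily loaded servers do, and penalties in the 30% range stop looking implausible.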

Now, we can probably and properly blame the OS side for their obsession with bloated kernels.

But you promised them that if they trusted your processors, you would compensate for their software bottlenecks and half-assed architectures. And they believed you.

So now you’ve got to fix it, Brian. Not deny it. Fix it. Google didn’t invent the problem. It’s been there in one form or another since the 8086 was a glimmer in Gordon Moore’s eye.

And now it’s going to cost Intel. How much is up to you.

SpaceX and Open Source – The Costs of Achieving Escape Velocity

The successful low-Earth orbit of a Dragon capsule mock-up by the Falcon 9 rocket was a great achievement by SpaceX last week (June 4, 2010) and a harbinger of the new age of private space transport. As I watched their success, the excitement from the press and space enthusiasts, and the unexpectedly vindictive response from many inside NASA, I was reminded of the launch of 386BSD – and why those most able to understand your achievement are often the most parochial.

Space exploration is a family tradition for the Jolitz clan, from William L. Jolitz developing transponders and thin and thick films for many spacecraft at Ford Aerospace (some still transmitting telemetry long after his passing), to his son William’s work at NASA-Ames on an oscillating secondary mirror for the Kuiper Airborne Observatory as a high school intern, to his three grandchildren working at NASA in astrobiology (Rebecca Jolitz), orbital dynamics and space fatigue simulations (Ben Jolitz), and spacecraft logistics for science projects (Sarah Jolitz).

Ben and Rebecca Jolitz also had the opportunity to meet Elon Musk, founder of SpaceX, at the 2007 Mars Society Conference held at UCLA and were inspired by his vision and determination (it was at that conference that Ben decided he wanted to major in physics at UCLA). Though still in high school, they had received honorable mention in the Toshiba ExploraVision competition for their highly creative concept of a Mars Colonization Vehicle based on using an asteroid in a controlled bielliptic orbit as a transport vehicle to provide the “heavy lifting” of supplies and personnel between Earth and Mars. They were invited to present a more detailed talk on their project for the Independent Study track. The speakers at this conference were a great spur to their scientific enthusiasm.

So it was with great satisfaction that I watched SpaceX demonstrate that “rocket science” is no longer the province of great nations but instead will bring about a democratization of space – cargo and transport, experimentation and eventually mining and exploration.

This is no quick path, however. The struggle for open source software – beginning with Richard Stallman and his remarkable GCC compiler, Andy Tanenbaum’s Minix system, Lions’ careful documentation of Version 6 Unix, our Dr. Dobbs Journal article series on 386BSD Berkeley Unix and subsequent releases, and Linus Torvalds’ amazing synthesis of the prior Unix, 386BSD and Minix works to achieve Linux – occurred during an enormous burst of creativity that actually totaled about five years (1989-1994). After this came the long process of usability design – driver support, GUI support, applications support, new scripting languages – which is still a work in progress after another decade. Big vision projects take a lot of time and are not for the timid.

It is no secret that NASA has been struggling for many years with a lack of purpose. Just like Unix in the mid-1980s and Windows in the mid-1990s, technology which is held too tightly by a single group or company or national agency tends to calcify. Innovation becomes too risky. Agendas and interest groups override design decisions which may theoretically impact their funding. It becomes easier to add to a design than subtract from it, resulting in an unwieldy project which never converges in form or function.

Eventually, more effort is put into maintaining the flaws than eliminating them. Bugs and unexpected interactions begin to dominate, resulting in more meetings, workshops and conferences. Tools to manage the side-effects and flaws of the project become the object of research, while the actual project suffocates as it becomes more and more obese.

The life cycle of an operating system, like the life cycle of a space exploration vehicle, encompasses a brief burst of risk-taking and innovation followed by a long series of “rational” decisions which add heft and gravitas, followed by bloat, loss of purpose and final collapse. But during the long period of bloat and dementia, the lack of satisfactory execution provides an opportunity for newer faster designs leveraging new technologies in other fields to pry into previously unobtainable market niches and slowly eat out the old markets. This happened with open source, and it is happening with space exploration.

The shuttle itself is over 35 years old and encompasses aging technology which can no longer be retrofitted – and it has long been scheduled for decommissioning. This schedule has been put off again and again for two reasons: 1) the US has refused to properly fund and schedule a replacement because the costs and commitment are very great, and 2) the maintenance and rocket groups are based in key states dependent on continual funding. Politics as usual has been to fund existing projects when we are long overdue to redefine NASA’s mission and goals. And, as is often the case, in refusing to examine other options, we have been left with only one option – end the shuttle program and depend on other nations and consortiums for transport, primarily Russia and the European Space Agency.

The Bush-era Constellation program was in theory supposed to provide an alternative, but the results of this program were laughable – it became a symbol of a bloated, self-referential, insatiable rocket bureaucracy that couldn’t build a real rocket to get pizza, much less get to the moon. And there was a tragic side to this – so many Americans love to complain about their government by saying “If we could get a man to the moon, why can’t we do” whatever, when in reality America lost that ability over 25 years ago with the decommissioning of the Saturn V rockets. Since then, like many other government make-work projects, “rocket science” has devolved to fantasy PowerPoint presentations and one-off prototypes that might have been flown, except for the risk of failure.

So while the science side of NASA, with their unmanned probes and experiments and space telescopes, has continually advanced despite the occasional loss, the rocket side has cowered, fearful of failure yet addicted to the status quo of “no risk = no failures”. And this stance, while appearing to play it safe, has created more opportunities for the SpaceXs of the world as space transport, satellite maintenance and other niche markets look for more effective and less expensive approaches.

Competition, we are always told, is good for America. After all, it was competition with the old Soviet Union that launched the space program – and the need to hire rocket scientists to get us up there. So in principle NASA’s rocket guys should be pleased with SpaceX – they can leverage SpaceX’s experience while encouraging their own demoralized workforce to become more innovative. Like open source, the knowledge that “it can be done” should provide both a relief to fear and a spur to greatness. It’s a win-win, right?

So why the malice and anger? Why did so many within the agency that could most benefit from this knowledge wish SpaceX ill? Why are they running down their achievement? Why aren’t they rising to the challenge? Aren’t they eager to break out of their repressive paradigms?

While envy and fear of change play a great role here, the loss of status is most pernicious. During the rise of open source, a new set of designers and developers began to set the pace for innovation. Many programmers frustrated in their work in industry found an outlet in open source. An avalanche of ideas – good, bad and indifferent – could no longer be repressed by groups controlling proprietary operating systems source. These groups – corporations, standards committees, technology “gurus” – derived much benefit from the old system. They were the leaders at conferences, the movers and shakers of agendas. More than even money, they had the power to elevate or destroy ideas and people on a whim. And believe me, I saw what happened when people didn’t “get with the program”. It wasn’t pretty.

When 386BSD was born, I was told by many on the hard-core Unix side that it would be “strangled in the cradle” – either by lawsuits (which of course, never happened) or by ridicule (which did occur, constantly). I didn’t believe it. I just couldn’t believe that the experts I knew in the biz would wish it ill when they had an opportunity to finally work with BSD without all the proprietary license rigamarole. For years I had heard people complain about all the agreements and licenses and restrictions and “If only it were unencumbered”. Now that they had their wish, wasn’t it great?

Boy, was I misled. What I saw as an opportunity, many other good talented people saw as a threat to their comfortable professional existence. I understand comfort, and I never wanted to make anyone unhappy. But in giving them what they had wished for, I did make them unhappy, because I also gave it to everybody else – and that was inconvenient. Well, I plead youthful enthusiasm here to misunderstanding their desires. But if given the chance, I’d do it again, because it was the right thing to do – even if I did it the “inconvenient way”.

So what were the claims? I was told nobody would use open source because it didn’t have a big company behind it – and we see today that was wrong. I was told that nobody would make money off of open source – and today we see many companies developing profitable businesses off of support and new design. I was told that nobody would use open source to innovate, and yet I use entirely new applications and languages that were not even thought of at the time Dr. Dobbs Journal launched the “Porting Unix to the 386” series in January of 1991. I was told that the only way to distribute software was by selling it on a disk, and that we were crazy to put it out on the Internet, and yet now this is the way even proprietary software is distributed. When I talked about Internet-based OS’s, I was literally laughed at by experts I respected – and it hurt – but now we see the beginnings of the “webOS”.

The ridicule did have real and lasting effects. There were the constant intimations by Unix groups of pending lawsuits that never arrived but always “loomed”. There was the personal strain caused by creating entire OS releases on a shoestring budget, funded mostly by writing articles and refinancing our house while raising three young children. There were the ever-escalating expectations of a consumer audience demanding a commercial OS with all the bells and whistles, dissatisfied with traditional Berkeley Unix research releases and their traditional demands of self-administration (in a “damned if you do, damned if you don’t” moment, I actually insisted the Dr. Dobbs OS release installation and administration be automated by default, with the traditional installation process selectable if desired, and was then ridiculed for not making them do it the hard way – sigh). And finally, there was the relentless badmouthing of any new approaches in the kernel – the raison d’etre of Berkeley Unix but not, admittedly, of a commercial corporate proprietary system. The last of these was the hardest to bear, frankly – and I understood why many other designers, seeing this, fled to Linux. After all, the ridicule, badmouthing and blacklisting was a piece of what they had experienced in their companies, so why endure it in a supposedly “open source” project?

So like 386BSD, the NASA badmouths and their corporate masters could potentially destroy SpaceX. Yes, SpaceX is better funded than a two-person project like 386BSD – our original “Falcon 9” rocket was a 300-400 kbyte kernel plus some apps (386BSD Release 0.0) and 17 articles of 5,000-10,000 words each, plus code, on how to do it yourself – but getting out of the gravity well of Earth, not to mention the psychological gravity well of believing you can do it (which seems to be more like Jupiter in terms of magnitude), is a heck of a lot harder. Ridicule, the inevitable technical setbacks SpaceX potentially faces, liability laments (ah, there’s that “lawsuits pending” stuff again), a steep learning curve, American impatience (doing new releases with some new innovative work in the kernel took us about 8-12 months – doing the next stage in rocket / capsule design will take longer) and media disillusion when the audience fades (no audience = no money) add to the burden.

But even if, somehow, SpaceX is marginalized, their accomplishments are *real* and will spur others to try. Linux was able to grow and thrive during this time precisely because it was *not* an American project – based in Finland, the canards thrown at 386BSD were deemed irrelevant to Linux. Linux was a safe haven for many serious programmers disillusioned with the threats, lies and distortions promulgated around Berkeley Unix precisely because it was an outsider, uninfluenced by other interests.

People of ill will can kill an innovative project for a while. But they can’t kill the idea on which that project is based. It may be delayed for a while. But somewhere, somehow, it will spur others on to try. SpaceX, like 386BSD, is only the beginning.

SCO Gets Valentine, Lessig Campaigns for “Change”

Well, SCO got a big Valentine’s treat last week – a $100M potential investment by the very politically-connected Stephen Norris Capital Partners (an equity firm) and their “partners from the Middle East” to lift the firm out of the morass of Chapter 11 bankruptcy and into private hands. While Linux adherents are clearly annoyed with SCO’s escape from the bankruptcy abyss, the more interesting item to consider is how SNCP is going to get the kind of “bragging rights” IRR out of UNIX.

What if there were a strategic need for an OS that’s entirely licensed for military use? $100M would be a reasonable bet to get a return on investment if 1) you had an “in” with the DOD and 2) they bought into the idea that they need an OS qualified for strategic security reasons. I know some people might get hung up on the Middle East connection, but that would only be an issue if there were majority foreign ownership, and that can be easily handled via a domestic equity firm. Just food for thought.

Also, Larry Lessig of “creative commons” fame (whom I’ve interviewed as well as reviewed) has announced that he is considering a run for Tom Lantos’s seat against the very popular and (once again) politically-connected Jackie Speier. Larry’s concern is political corruption, and given the Zeitgeist it is a serious one, but not one that has arisen recently. Several years ago Larry and I chatted about the possibility (before YouTube, mind you) of using user-generated video to create a “truth squad” to monitor political campaigns for honesty. We both could see this coming, but I think it came to fruition with the “macaca” remark blurted out by George Allen (former Republican Senator for Virginia) during a campaign stop and captured on video for the world by the intrepid (and unflappable) S. R. Sidarth. I don’t vote in this district, but I must say that Larry would make this a very interesting race if he chooses to enter it. Time will tell.

When Internet Rants Go Too Far – How Vulgar Commentary Masks Naked Power Struggles

Sometimes I have problems categorizing articles I discuss. Perhaps this item would have fit in “women & technology”, but I don’t think this is exclusively a “woman’s problem”. Since I’ve seen this since the days of Unix and experienced the brunt of it during the pioneering days of open source and 386BSD, I think it may belong in a broader category than that peddled by the vanishing newspapers.

Fun Friday: Happy Birthday 386BSD!

Our deeds determine us, as much as we determine our deeds. – George Eliot.

Today is a special day. Exactly 14 years ago today, after 15 monthly feature articles on Porting Unix to the 386 appeared in Dr. Dobbs Journal, 386BSD Release 0.0 went public.

A number of 386BSD enthusiasts have noticed that we always favored holidays for releases. St. Patrick, as any good Irish patriot knows, was a Romano-British slave in Ireland who escaped his captors and, after many perils, made his way home to the British Isles. He was then called to mission in Ireland, converting the Celtic peoples to the Christian faith.

386BSD Release 0.0 launched like a rocket – from zero to 250,000 downloads in a week at a time before HTTP, web browsers, Internet buzz, keywording, and Internet source code tools for management and organization, and most importantly, before the business of open source existed (see The Fun with 386BSD).

386BSD, conceived in 1989 as a quick, fun, yearlong project (see “386BSD: A Modest Proposal”), ended up personally consuming about six years of our lives (with periodic stints of “real work” at Silicon Valley companies), spanning the birth of my two younger children, the death of my father-in-law, and a move from Berkeley to Los Gatos. We wrote 17 articles (with translations in German and Japanese) plus additional focus feature pieces on new technology issues, three books, thousands of email responses to specific technical questions, and lots and lots of online documentation. Like a writer who discovers there is a thirst for his works, we became committed to a demanding writing and coding schedule.

We did three major public releases (386BSD Release 0.0, 386BSD Release 0.1, and 386BSD Release 1.0) over three years (plus minor releases and custom releases to research and educational groups), in the process rearchitecting the kernel according to the Berkeley architectural committee recommendations a decade prior to bring BSD into the mainstream. We also invented a few cool mechanisms of our own like “role based security” and “automated installation”, and did a heck of a lot of bug fixes, release engineering, and fixes of other code throughout the release.

We reviewed thousands of submissions of new code and bug fixes, keeping track of items manually because no one had an Internet-only tracking system available for open source yet. We added, tested, and configured thousands of tools, applications, compilers, and other resources that make up a full release, often finding and fixing problems and moving on to the next one in an “assembly line” manner. Most of the resources supplied, from the plethora of applications to utilities to tools, were ones we never personally used (did anyone really need ten different editors?), but others did, so we tried our best to give everyone what they needed (or wanted).

Unlike prior Berkeley releases and commercial Unix releases from AT&T/SCO and Sun Microsystems, all this work was funded through our own resources – we got paid for writing articles, and that was a powerful incentive to keep writing more articles, but there wasn’t anything else in it until Dr. Dobbs released the 386BSD Release 1.0 Reference CDROM. No one back then believed that open source would ever be a valuable business opportunity. Linux was similarly treated for many years.

We had our high points and our low points. Of course, my biggest highs were the birth of my son in 1990, when 386BSD was still domiciled at Berkeley, and the birth of my daughter in 1994 right after 386BSD Release 1.0 went to the Sony factory for final stamping and distribution. (My husband was invited to the 25th anniversary of the Internet party at USC ISI, but missed it – the date was a bit too close.) It was a real kick to see how many people loved 386BSD, and how it melted down the network when so many folks wanted a copy! I heard from people in Africa about how they were using it in schools, and from professionals launching on-the-side open source projects. I heard from people from every continent on the planet except Antarctica – and maybe they were using it there too and didn’t tell me.

The lows were more personal. The untimely death of my father-in-law, William Leonard Jolitz – a man I loved as a father, who had supported us through Symmetric Computer Systems and this project, and who died just as we were asked to do 386BSD Release 1.0 through Dr. Dobbs Journal – left us consumed with grief for a long time. We put a lot of ourselves into this work as a kind of personal catharsis and dedicated the first volume of Source Code Secrets to his memory. The cancer diagnosis of my oldest daughter six months after his death was also a dark time. The tumor was later found benign during exploratory surgery under a local anesthetic (she was nine years old) as my husband held her hand, talked to her, and watched them work.

[Eerily enough, seven years later she received another cancer diagnosis right when I was trying to finish up the first 386BSD paper I’d written after a five year hiatus. After spending a year constantly battling her HMO to take her tumor seriously, they finally did the biopsy, scheduled a surgeon (booked, cancelled, booked) and removed a tumor the size of a Big Mac. I stayed up until 2am the night before finishing that paper while my husband held me together (“seek no mercy from paper submission committees, for ye shall receive none”). While the committee was reading it the next morning, I was sitting in the surgery waiting room with my husband holding my hand. Fortunately, my oldest daughter is now a healthy college student (her dear high school friend Evan, diagnosed earlier with cancer, wasn’t so lucky – after many surgeries to remove tumors at Stanford he died last year). But what I am I am by the grace of God, and His grace bestowed upon me did not prove ineffectual. But I labored more strenuously than all the rest – yet it was not I, but God’s grace working with me (1 Corinthians 15:10).]

Perhaps it’s no surprise that working on just one project at a funded company – even if it was an impossibly hard project of transport protocol dataflow semiconductors – seemed a cakewalk compared to the morning-to-night coding, testing, and writing ritual that we lived by for so many years doing 386BSD. At InterProphet, a privately-backed Internet semiconductor company I helped co-found a decade ago this year, I finally got some time to work on products again and hire and lead a talented professional engineering team that delivered on time and on budget! We all knew the speed-of-light problem on processors was going to hit us within a decade, and non-Von Neumann mechanisms were the only way to keep up with Moore’s Law. But I still fondly remember those crazy years of 386BSD Mania, and we still use 386BSD in our own startups and projects to this day.

So Happy Birthday 386BSD Release 0.0! It was the start of something really big – an open source revolution. And as with most revolutions, it only takes one pebble to start a landslide.

Apple, Intel and the Price of Obsolescence

Of course, the inevitable after-the-announcement hits, with the long-awaited arrival of Matt Marshall’s new Apple PowerBook a sobering event. “But we’re feeling a bit like a fool today. That’s because the laptop arrived on our doorstep about two hours after Steve Jobs announced Apple’s shift to Intel processors. Even before we cracked open the box, our shiny new Powerbook was a legacy machine”. Such is the price of processor envy.

Matt wonders whether, if he had just held off, he could have gotten five years’ worth of work out of it, but instead it “…will officially be obsolete in two, when the new Intel-powered Powerbooks land in Apple stores. Oh sure, the laptop itself will still work fine. But chances are, all the relevant software updates we need to keep the laptop current (from both Apple and independent developers) will begin to disappear, and our Apple flag will be firmly planted in the land of the old.” So true.

Of course, the wags always trot out “you never owned the machine anyway, just the use of it, so what are you complaining about” (which is somewhat incorrect, as you do own the tangible hardware, and by implication permanent access to low-level software like device drivers that allow replacements, upgrades and repairs – but ignorance is bliss). They also think that five years is bogus. Well, that’s not really right either, but again, most people are pretty ignorant of the difference between the hardware and the software, and even much of the software isn’t as convoluted as one might believe. Here’s Lynne’s take on this tech:

“Well, my kids have inherited a Symmetric 375 Berkeley Unix (Symmetrix) system that is turning 20 years old, and it runs great, has all the Unix utilities, and has the best version of rogue ever. Just a few bad blocks on the disk, but we mapped those out (yes, you can fix a disk drive!).

No, these machines don’t need to be obsolete so rapidly. The bit rot is intentional (as is the broken versioning and updates). Otherwise, folks wouldn’t migrate to new software. But the hardware is actually very reliable, and remarkably easy to upgrade compared to when I started in workstations.

Can you imagine if people couldn’t fix their cars after 3 years? Wow, GM and Ford would love that, but you’d have a rebellion on your hands!

Of course, one could argue that we don’t live, we only buy the USE of life for a time. But I would hope that becoming obsolete according to a corporation’s best interests won’t mean we all end up in “Logan’s Run” soon after. :-)”

Open Source and Russell’s Paradox – A New Commons?

Jesus Villasante, a senior official at the European Commission in charge of Software Technologies, used a spontaneous panel discussion on open source to shine a light on the lack of coordination, the influence of commercial interests, and the inability to evolve beyond current corporate paradigms. “Companies are using the potential of communities as subcontractors–the open-source community today (is a) subcontractor of American multinationals.” Mr. Villasante is completely correct in his analysis, although I doubt he will find many who agree in either the corporate camp or the open source camps.

Well, I do have extensive experience in this area. I co-pioneered the first Berkeley open source operating system using our own personal resources and with the support of the editor and staff of one of the most popular American technical magazines at the time, Dr. Dobbs Journal. Along with doing a completely new port to the x86 architecture and a new architectural design, we documented the porting process, the kernel, and the architecture. We did not believe that simply reading the source was sufficient for understanding; extensive documentation and open design review were required to produce releases reliable enough for researchers (much less consumers). This was the Berkeley way, and we cleaved to it with the support of the technical press.

Across many releases, there was extensive documentation and careful attention to design trade-offs and their discussion. For example: “Is it better to commit time to a patch if the Berkeley architectural goals from 1982, from Dennis Ritchie et al., intended a new subsystem?” and “Is it wise to perpetuate artifacts such as brk() so that some legacy programs can still run, at the cost of impeding evolution in modularity?” We addressed these and many other issues.

However, what we found was that various interests in the open source and business communities did not want any new paradigms or evolution of the operating system if it required changes to legacy applications (some suspiciously acquired). And this was the beginning of a lot of bad blood in the Unix community.

You Never Know Who’s Watching You

Declan McCullagh of Cnet posted an item last week about Maureen O’Gara and Groklaw which spilled over into the bizarre world of open source paranoia. According to McCullagh, “Maureen O’Gara, a freelance writer who pens the weekly LinuxGram, alleged that Groklaw blog author Pamela Jones is a ’61-year-old Jehovah’s Witness with religious tracts in her backseat.’ O’Gara said she personally visited what appeared to be Jones’ apartment and Jones’ mother’s home in the New York City area.”

While that is mildly amusing, it’s not really surprising. The net allows people to assume personas, call them “avatars”, that mask the real person with all their consequent flaws and frailties. But anonymity isn’t a Constitutional right, especially when you take center stage in a legal battle, as Groklaw has done. In fact, why be anonymous at all? Since Ms. Jones has lots of supporters who like her work, what’s the problem?

Sun Pats Rump SCO – Tarantella Cashes Out After Lots of Agony

A teeny tiny acquisition announcement brought back a lot of memories today.

Remember Santa Cruz Operation – no, not the SCO you read about fighting IBM and Novell, but the “old SCO”? Bob Greenberg and friends did a very brain-damaged version of Unix for the PC (originally derived from Version 7 and System 3) way back in the dark ages. Bob had done a Version 6 Unix derivative for the RAND Corporation called BobG Unix. The group was spun (thrown? or maybe walked?) out of Microsoft because Microsoft really didn’t want a Unix system if it wasn’t written in BASIC.

In 1982, Intel offered Symmetric Computer Systems CEO and founder William Jolitz a great deal on 286 processors when he was deciding on processor bids for their new workstation, funded by Technology Funding Partners (Symmetric’s lead venture firm). There were lots of problems with the 286 and Unix:

1) The instructions were not restartable, so if an operation could not complete (say, because memory wasn’t loaded) you could not reliably reload the instruction (there were steppings that supposedly could, but you never knew what you’d get).

2) The only way to get a large address space was by reloading segments. We’d encountered this problem before with the PDP-11 (William Jolitz’s undergraduate work was on overlays for the PDP-11, so he was very familiar with it): performance goes to hell when you move data from overlapping 64 kbyte segments to other segments, with all bets off if you hit an exception during that time.

3) Intra-segment and inter-segment calls made for variable-sized stack frames, and the stack frame layout Intel, Microsoft and others agreed on required a major rewrite in Unix. Ironically, this made things difficult for early Windows programs derived from DOS as well.

From these issues, we knew 286 Unix would never be a successful product, because there were too many compromises in moving so many software packages to it from architectures like the VAX. SCO went ahead and made a Xenix based on the 286 anyway. It took them three years and a lot of work, and it was still a disappointment.