Freedom, Trust, and Other Boring Software Features
Providing more evidence that blogging is something you can get better at the longer you do it, my friend Rafe Colburn put out a brilliant post the other day outlining a third kind of software freedom.
> What Apple offers in exchange for giving up Freedom 0 (and they ask not only end users but also developers to give it up) is a new freedom for computer users — the freedom to install stuff on your computer without screwing things up. Freedom 0 is about giving you the right to screw up your computer in whatever way you see fit. Apple’s freedom is about giving you the opportunity to install any of thousands of applications with the knowledge that your phone will work just as well after you install them as it did before, and that you can get rid of those applications whenever you want.
The comments are generally pretty reasoned (funny how thoughtful people attract thoughtful responses), but one glaring omission in the conversation was how much of this ground Microsoft had covered nearly a decade ago with its work on Trustworthy Computing. The seminal document of the initiative is a 2002 white paper by Craig Mundie, which Microsoft later made publicly available. I’ve embedded it below for review, but it’s worth pulling out the few key concepts that Mundie identified as the pillars of Trustworthy Computing:
- Security
- Privacy
- Reliability
- Business Integrity
These pillars are notable for a few reasons. Microsoft was getting beaten up at the time over security to a huge degree, and over reliability to a lesser but still significant degree, while privacy in that pre-social-networking world hadn’t yet become as significant an issue for users as it is today.
Most importantly, though, business integrity was considered a core element of how much users would trust the technology they use. Microsoft’s industry reputation was still at its nadir at the time, and that mistrust led much of the tech industry to dismiss the principles of Trustworthy Computing almost out of hand, especially since they were linked to the “Palladium” concept Microsoft was then advancing for hardware security and software certification.
Succeeding Despite Itself
Microsoft went on to make some technological decisions for its own platform work based on the Trustworthy Computing concept, ranging from halting development on Windows and Internet Explorer to perform massive security reviews, to architecting parts of the .NET platform to embody principles of reliability and trustworthiness. But on the whole, as evidenced by the meager offerings on the current Trustworthy Computing website, Microsoft has walked away from its effort to market the idea.
In the interim, though, the idea of locking down an ecosystem with extremely rigid hardware controls, a centralized software approval or certification authority, and an appliance-like simplicity of experience has completely won the attention and focus of the tech industry. Nearly all of the precepts of Trustworthy Computing turned out to be what the market preferred, and they have become the foundation of what technologists strive to create.
Except, perhaps, for the fundamental Trustworthy Computing tenet of business integrity. None of the major players behind trustworthy, locked-down platforms seems willing to publicly address the fact that the biggest danger to their own market success, once they’ve solved the problems of viruses and complexity and software crashes, is how people feel about doing business with them.
Trustworthy Computing was truly a worthy vision. Hopefully we’ll see new products announced with a bullet point saying “You can trust our company, and here’s why”, alongside all the other compelling parts of a trusted experience.
The Docs
Below is Craig Mundie’s original 2002 white paper on Trustworthy Computing. There are tons of good parts worth quoting, but I’ll close with just one, from the section on Policy:
> Once a technology has become an integral part of how society operates, that society will be more involved in its evolution and management. This has happened in railways, telecommunications, TV, energy, etc. Society is only now coming to grips with the fact that it is critically dependent on computers.
>
> We are entering an era of tension between the entrepreneurial energy that leads to innovation and society’s need to regulate a critical resource despite the risk of stifling competition and inventiveness. This is exacerbated by the fact that social norms and their associated legal frameworks change more slowly than technologies. The computer industry must find the appropriate balance between the need for a regulatory regime and the impulses of an industry that has grown up unregulated and relying upon de facto standards.
>
> Many contemporary infrastructure reliability problems are really policy issues. The state of California’s recent electricity supply crisis was triggered largely by a bungled privatization. The poor coverage and service of US cellular service providers is due in part to the FCC’s policy of not granting nationwide licenses. These policy questions often cross national borders, as illustrated by the struggle to establish global standards for third-generation cellular technologies. Existing users of spectrum (often the military) occupy different bands in different countries, and resist giving them up, making it difficult to find common spectrum worldwide.