Digital Illiteracy



Sure, you can write your data, but will you be able to read it?

There was a time in my professional life when I spent lots of time and energy helping people implement database, file and messaging servers.

Inevitably, folks would want to talk about some form of archiving.

They’d have been the recipient of some pitch or seen some spec sheet on some wild optical media device, and then been seduced by the dark side.

“25 years! 50-year media life!”

“Ha!”, I would say, usually projecting some subtle degree of smugness, “and how are you going to read it, 50 years from now?”

 “What application, what file system, what hardware, what technology are you planning to have around then that will give you some confidence of your ability to read this stuff?”

That good advice was, at the time, largely theoretical, a matter of informed conjecture.

Just last week though, the news provided me with not one, but two examples of that specific problem playing out.

The first, as reported in the Washington Post, was the case of the Andy Warhol Museum coming upon a group of old diskettes in its collection. The Pittsburgh museum was fortunate enough to be able to work with the Carnegie Mellon Computer Club — remember Mach Unix? — who had the curiosity and the skills to piece together what computer and application had created them. Even more fortunately, the Computer Club had an actual running Commodore Amiga in their collection, which allowed them to read the disks and open the files.

The files, in this case, comprised at least four recognizable signature original Warhol works, along with several more bits of Andy just mucking about with a new computer.

Andy is an icon of 20th-century style. These might not be irreplaceable masterworks, but then Warhol’s whole approach to art was that all of his work was conceived to be eminently replaceable.

The second current example of getting data back is a subject of somewhat more gravity.

It seems that back before the Apollo moon landings, NASA sent several Lunar Orbiter missions to map the moon so we would be able to identify safe landing spots for the planned manned missions. Doug Bierend at Wired has written a fascinating story about a project to recover the information.

It seems that the photographic images — including some of the first ever taken of the far side of the moon — were sent back as analog data. The pictures that were used and published at the time were made by transmitting a photograph taken by the satellite one scanned strip at a time, printing the strips out on thermal printers in 20-foot lengths, and taping them together into a whole image. The resulting montage was then photographed and passed to the wire services. If you’ve ever seen the contemporary published photographs, they’re notable for a weird, banded appearance that is an artifact of that oddball process.

No one, actually, has ever seen the original photographs, because the satellites that took them crashed into the moon.

I used to work for a company that was headquartered within sight of the impressive blimp hangar at Moffett Field in Silicon Valley. Up and away! At NASA’s Ames Research Center, located there, a closed McDonald’s restaurant is filled with analog tape drives and a crew of techs who describe themselves as “digital archeologists” and refer to their Jolly Roger-flying former fast food joint as “McMoon’s”.

These talented folks started from scratch and had to reverse engineer the data formats and how the information was encoded. Nothing was known, and the nature of the problems that needed to be solved ranged from the sublime to the ridiculous.

Do you know that the specifications for a 1966 Ampex tape drive call for whale oil to be used as the lubricant? Like they have that down at the local Radio Shack. 

To cut to the results, these guys succeeded in cracking the encoding, hacking a bridge into Adobe Photoshop, and recovering and rendering the information at better resolution than is possible with our current digital cameras.

The resulting photographs are staggering — breathtaking.

The information was there, but no one had ever devised a way to see it until now.  We sent rockets to the far side of the moon, but never really saw the full results of that work — just dim shadows.

If there’s a Nobel prize for Geekery, these guys have my vote.

So if you find yourself in the position of making decisions about a new information system, my old good advice about considering how you intend to access that information in the future is doubly good now. The life cycles of data storage devices have shortened dramatically. Operating systems, applications and file systems have grown more and more fragmented. And if you’re planning on storing large amounts of data with a cloud provider, you don’t want to be surprised to find they haven’t really considered the portability of your data, or the tools they’ll use to move it if you need them to later.

In system design, you need to spend nearly as much time considering how to manage the access life cycle of the data as you do figuring out what data you will write.
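One defensive practice along these lines is to write archival data in an open, plain-text format and ship it with a manifest that records what the file is and how to verify it. Here's a minimal sketch; the file names and manifest fields are my own illustration, not any standard:

```python
# Sketch: archive data as plain CSV plus a self-describing manifest.
# File names and manifest fields are illustrative, not a standard.
import csv
import hashlib
import json
from pathlib import Path

records = [
    {"id": 1, "title": "Flower"},
    {"id": 2, "title": "Self-portrait"},
]

# Plain UTF-8 CSV: readable by almost anything, now and later.
data_path = Path("archive.csv")
with data_path.open("w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "title"])
    writer.writeheader()
    writer.writerows(records)

# The manifest travels with the data: naming the format, encoding, and
# checksum gives a future reader a fighting chance of opening the file
# and trusting what they find inside it.
manifest = {
    "file": data_path.name,
    "format": "CSV, UTF-8, header row",
    "record_count": len(records),
    "sha256": hashlib.sha256(data_path.read_bytes()).hexdigest(),
}
Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

The point isn't these particular files — it's that the description of the data outlives any one application that can read it.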

’Cause it does you no good whatsoever to write if you can’t read.

Pearl Street Station

Edison’s Jumbo Dynamo

I’ll admit it.

I’m a server hugger. I have an inappropriate emotional relationship with hardware and software. I want my information and my most important stuff on my own machinery. I want to apply all my long-fought geek mastery to maxing out performance, redundancy, and fault tolerance. I want to hear the white noise from an incomprehensible number of disk spindles and cooling fans shrieking at max RPM, to feel the racks shaking, to see disk lights flashing in hypnotic patterns when I/O starts to peak.

Oh yeah.

I’ll hazard a guess that if you’ve hung with me this far, that you’re one, too.

And you and me, pal, our way of looking at the IT universe is completely obsolete. Our attachment to iron – to the visible manifestation of our power over information – has made us both as endangered as typewriters, roll-film cameras, carburetors and 100-watt incandescent lightbulbs.

Lightbulbs and their inventor, Thomas Edison, are, in fact, the best illustration of why the way we all look at Information Technology has to change.

In 1879, Thomas Edison successfully designed a practical electric incandescent light. Practical, in that it worked, was safe, and could be manufactured for a reasonable cost. There was just one little problem.

There was no electrical supply.

Accordingly, in 1882, Edison designed, constructed and opened the Pearl Street Power Station, strategically located in the most densely populated district of lower Manhattan. Edison’s technical accomplishments and investments were enormous. To meet the projected demand, Edison designed ‘The Jumbo Dynamo’ – 27 tons of steam boiler-powered cast iron, steel and brass that could generate 100 kilowatts of electric power. The Pearl Street Power Station had six of these dynamos, with their fireboxes and boilers extending two additional floors beneath the street. The cost and risk were unprecedented: construction of the station cost nearly a third of a million 1880s dollars, and income didn’t recoup the investment until the station’s third year of operation.

I’ve stood in front of one of the Jumbos while it was operating, and it is a truly awesome experience, with massive parts spinning within arm’s reach that shake the floor and allow a visceral appreciation of the power required to run the thing and the power that it makes. The reason I’ve been able to observe a Jumbo in operation is that today, one of the two remaining examples is fully operational in its home at Greenfield Village at the Henry Ford Museum in Dearborn, Michigan. At Greenfield, the Jumbo sits in the relocated Detroit Powerhouse #1, where it is honored as a National Historic Mechanical Engineering Landmark by the American Society of Mechanical Engineers. The Jumbo sits in a museum because, despite the extraordinary breakthrough it represents, it was a dead end in the history of power engineering – it was, bluntly, a mistake.

Mistakes like these are an inescapable fact of technology and engineering. If Thomas Edison could make them, so can you.

Edison’s mistake was simple, but fundamental. Like many entrepreneurs, he was so close to the technology he created that it was difficult for him to see its flaws. The Jumbo generated Direct Current (DC) power, which could not easily be stepped up to the high voltages needed for transmission over any significant distance. As demonstrated by Nikola Tesla, Alternating Current (AC), whose voltage can be raised and lowered with simple transformers, was far better suited for general electrification and combined residential and industrial use. Pearl Street Station was decommissioned in 1895, and the electrical world has been overwhelmingly AC ever since.
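The transmission problem is simple arithmetic. Resistive line loss is I²R, and since delivered power is P = V·I, raising the voltage lowers the current — so stepping voltage up by 100× cuts losses by 10,000×. A quick sketch, with hypothetical line and load figures chosen only to show the scaling:

```python
# Resistive loss for delivering a fixed power at different line voltages.
# Resistance and power figures are hypothetical, for illustration only.
LINE_RESISTANCE = 0.5        # ohms of wire between station and customers
POWER_DELIVERED = 100_000.0  # watts -- one Jumbo's worth of output

def line_loss(volts: float) -> float:
    current = POWER_DELIVERED / volts       # I = P / V
    return current ** 2 * LINE_RESISTANCE   # loss = I^2 * R

# At Edison-era distribution voltage, losses swamp the power delivered,
# which is why Pearl Street could only serve its immediate neighborhood.
print(line_loss(110))
# Stepped up 100x by an AC transformer, losses shrink by a factor of 10,000.
print(line_loss(11_000))
```

This is the whole AC-versus-DC argument in two function calls: with no practical way to transform DC voltage in the 1880s, every neighborhood needed its own Pearl Street.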

So why should you, as a modern Information Technology practitioner, care about orphans from the dawn of the industrial age?

Simple. You do not want to be building powerplants – you just want to turn on the lights.

Think about some of the quantum leaps in the power of computing technology across the last few years. 64-bit processors. Virtualization. InfiniBand and 40Gb networks. Solid State Drives. Embedded Flash Cache Memory. Non-relational large object databases and datastores. Software-defined networking. Analytics software running on graphics processing units. If every time a new technological development came along you needed to take your precious infrastructure to the landfill and start over, you would be out of business. The cost of building these IT infrastructures continues to rise, and startups and agile companies of all stripes do not want to bear the capital costs of both the initial build-outs and the ongoing IT arms race that follows.

Utility computing, where processing power and storage are simply provided as a service, just like electricity, is the permanent solution to that intractable problem. Only a handful of service providers globally have the ability today to support global enterprises and workloads.

Why do these types of utility computing service engagements make compelling technical and economic sense? First, they are wholly outcome- and usage-based. You are paying for an outcome, not a specific technology or method, and customers only pay for what they actually use. As methods improve, service providers are motivated to deploy them, since doing so increases their value to customers. The underlying in-house IT principle of trading massive amounts of capital investment for capacity that likely will never be fully utilized simply does not make business sense.

Second, in-house IT architectures have to be over-architected and over-specified. If you have one day a year when your systems need to support maximum load, you need to specify everything to ensure that you can support that one day. And you had sure better not miss your calculations and come up short. Trusting your workload to a properly designed utility computing service means that for the 364 days when you only need 5% of that peak capacity, you run at 5%, and on the peak day, it takes all of 15 minutes to spin up the extra capacity required to support your maximum workload.
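To make that utilization argument concrete, here's a back-of-the-envelope sketch. The capacity units and the per-unit price are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical figures comparing peak-provisioned in-house capacity
# against pay-per-use utility capacity over one year.
PEAK_UNITS = 100        # capacity units needed on the single peak day
BASELINE_UNITS = 5      # 5% of peak, enough for the other 364 days
UNIT_DAY_COST = 1.00    # assumed cost of one capacity unit for one day

# In-house: you own peak capacity every day of the year, used or not.
in_house = PEAK_UNITS * 365 * UNIT_DAY_COST

# Utility: you pay for baseline 364 days and for peak exactly once.
utility = (BASELINE_UNITS * 364 + PEAK_UNITS * 1) * UNIT_DAY_COST

used = BASELINE_UNITS * 364 + PEAK_UNITS
print(f"in-house cost: {in_house:,.0f}")
print(f"utility cost:  {utility:,.0f}")
print(f"in-house utilization: {used / (PEAK_UNITS * 365):.1%}")
```

With these made-up numbers, the peak-provisioned plant runs at roughly 5% utilization all year, and the usage-based bill is a small fraction of the in-house one — which is the entire economic case in three lines of arithmetic.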

Finally, specialist service providers can sustain the engineering innovation and the ongoing capital investment and re-investment in ways that your organization simply cannot. Today, when the most plain-vanilla commodity datacenter server lists for over $10,000 and a high-end storage array may list for over $1,000,000, a cloud service provider’s technology purchasing power and the resulting economic leverage are critical enablers of your business’s ability to invest in your business, instead of in technology. Because of the sheer number of enterprises a cloud provider services, and because a well-designed cloud architecture should be vendor- and technology-agnostic, a provider can negotiate much better prices for technology. Combine that purchasing leverage with the fact that these resources are shared across the customer base, and the prices for providing IT services, and for phasing in new technologies, are orders of magnitude less than in-house options.

So the next time you fancy some heavy data center gear – some faster, bigger, nuclear-powered black IT monolith that will have you on your knees begging your Finance team for another capital expenditure – think of the Pearl Street Station. It’s not a question of if, but when, your new object of technolust will be sitting in some museum with a guy like me standing in front of it wondering, “Why did anyone ever think this was a good idea?”.

You want to concentrate on solving problems for your business. You don’t want to be investing in and building powerplants, you just want to be flipping on the light switch and paying the electric bill each month. Utility computing offerings can help you realize that ambition.