The Fact-Free Universe


Some days one just feels out of sorts.

You can’t exactly put your finger on what’s wrong, but something clearly is.

Cause and effect don’t seem to be working. The comforting normal logic that usually holds everything together seems to have taken some ‘paid-time-off’.

Then it hits you.

People are saying stuff, but none of it is making any sense. Lips are moving, but nothing useful is implanting in your brain.

Folks are repeating the same nonsense – over and over – in the fond hope that if they repeat it enough times, it will somehow become true.

Welcome to the fact-free universe.

Things didn’t use to be this way.

The objectively observable – what we used to call facts – used to underlie our entire intellectual lives.

It didn’t matter what the sphere of discourse was. Public Policy. Science. Technology. And most assuredly, Commerce. Commerce, above all things, was firmly rooted in the most objective of facts, the dollar.

But then something happened.

It likely started in politics, where the distinction between ‘Facts’ and various grades of untruth, ranging from ‘the excusable spin’, to ‘the misrepresentation’, to ‘the inexcusable fabrication’, all the way through ‘the jaw-dropping whopper’, was just a little less clear cut.

Things generally deteriorated when the internet, media and new media collaborated to create an entire ecosystem for reinforcing statements which were not based in fact. Repetition was essentially substituted for objectively observable truth – “Say something enough times and people will falsely conclude that it is true”.

Soon, that system was spun all the way up and working well enough that people saying stuff that was so far out there – so far removed from fact – became just another everyday occurrence, something people simply stopped paying attention to.

And that my friend, was most assuredly not a good thing.

Because in politics, at least, the fact-free universe was confined within a cosmically small-proportioned box.

But then, it got out.

And like the Manhattan Project’s bright flash in the desert of Alamogordo, a chain reaction began that we’re not entirely confident will ever actually stop.

Lately, I’ve been seeing it showing up in IT Service Business contexts, and I’m not sure I have the toolset to deal with it effectively.

The signs are pretty easy to recognize.

There will be a very, very expensive suit. Inside the suit is either an attorney who has never been present in previous conversations, or a consultant from one of the 10 firms that make their bullion by negotiating-IT-Service-deals-by-proxy.

The job of the-person-in-the-suit will be to say things on behalf of his client that his client would struggle mightily to say without busting out laughing.

Folks that have sat across the table from me inevitably remark on my ability to remain calm and civil while telling people that they are completely whack-a-doodle, in a way that somehow escapes their notice and that they inexplicably end up feeling good about.

It’s a talent.

But folks that will sidle up to me and smilingly say stuff that has nothing to do with the world in which I live and work have lately caused me to take an extra breath or three before I speak.

“I’m sorry, but you cannot ask me to agree to service penalties of two million dollars on a service contract worth one hundred thousand dollars.”

“I’m sorry, but we cannot agree to guarantee end-to-end availability on a network where we only provide 10% of the total connection path and devices.”

“I’m sorry, but I cannot agree to a master agreement where we agree in advance to the types and amounts of lawsuits to be filed against us for various forms of service degradation.”

“I’m sorry, but we cannot agree to a 2 hour onsite hardware repair commitment for your ‘field office’ in the hills on the border between Pakistan and Afghanistan.”

Calm. Civil. Based in fact.

If the person on the other side of the table doesn’t engage on these terms, and wants to stay out in the twilight zone, it’s a walk-away.

I always say precisely what I will do, and work to ensure that I and my companies do exactly that.

I’ve spoken before about the need for absolute transparency in conducting IT Service business. That quality of communications is completely symmetrical – it’s a two way street. If both parties to the transaction aren’t equally committed to being clear and unambiguous, then the conversation is just a monologue.

An organization that will resort to these kinds of tactics is simply not going to be a trustworthy business partner – not because they will use untruths to gain advantage, but because they may have squandered their ability to differentiate the truth from something less. One can’t have any assurance that customers will perceive value and that suppliers will make money in an environment devoid of objective facts – performance, commitments, measured timings. Some organizations have gone so far out into the fact-free universe that they can’t even find their way back to the objective world in which the rest of us live and work.


Hardware is for the Weak

Me, I love hardware.

Almost doesn’t even matter what kind of hardware it is.

I used to have an Apple Lisa that was one of the earlier pre-production units. The entire system looked handbuilt, like it was 100% focused on the guy who might have to someday service it. All of the subsystems – power, memory, processor, floppy drives (yeah…) – were encased in aluminum billet modular sub-chassis, secured by Dzus quarter-turn fasteners like a formula race car or bike, and the entire system could be disassembled in about 40 seconds with no tools. It had Steve and Woz’s signatures cast into the interior of the back cover.

Beautiful.

I have had several very early Compaq rack mount ProLiants that were almost as pretty.

My stereo speakers are a set of vintage Yamaha studio monitors. Apart from their dead flat response curves, if one pulls the drivers out of the cabinets one finds the commonality between Yamaha’s music and motorcycle businesses – the driver frames, which are normally stamped steel – are, in these speakers, machined from a cube of solid aluminum billet, with all of the lathe marks clearly still visible.

Gorgeous.

Folks that know me well know I’m a bit too enthusiastic about the motorcycles that BMW makes. Apart from the elegance and economy of their older designs, if one opens one of their engines up – especially one that has been through 100,000 miles on the road – one sees material choices and machining that meet or exceed NASA standards. My K bike – which has a motor that was simply a miniaturized version of BMW’s class-leading early-80s F1 powerplant – looks, after more than 80,000 miles, like a brand new engine that has never been run: components that were polished or peened during manufacture still look new, and the camshafts and valve buckets have no scuffing or wear marks.

Breathtaking.

And that’s just the tip of the hardware iceberg. We could talk about titanium oxide plated pocketknives, modern database engine servers with 20 way symmetric processors and acres of DRAM, aluminum bicycle frames with welds that look like jewelry, anything made of Damascus steel, or the intricate machining and solid heft of a well-made firearm.

But we won’t.

Because I’ve come to the belated conclusion that hardware is for the weak. The material world that you and I grew up in is an artifact of the past. It’s dead weight that’s holding us back and keeping us from seeing the real source of power in the modern universe. Power that is nearly without limit, but that comes free from mass, free from the need to be maintained, and completely devoid of any physical manifestation at all.

That power is software, and more specifically it is the software that stands behind newly emerging software defined networks.

You’ve been told, likely a thousand times, that some technology will change the nature of the technology universe. I could tell you that this is different, and you wouldn’t buy it. So let’s skip the hype and cut right to the substance.

Network devices have always been more tightly tied to the hardware on which they run than their cousins on the systems side of the house. The three levels of a network device – the data plane, which forwards packets; the control plane, which directs traffic on the data plane; and the applications plane, which provides higher level functions like IP telephony, quality of service and security – are all bound to that hardware. Network devices – at least ones beyond the basic commodity level – are built around their application-specific integrated circuits (ASICs), and the circuit boards and boxes in which they live.
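To make the coupling concrete, here’s a toy sketch in Python (entirely mine, not any vendor’s architecture) of the separation those planes imply: a data plane that only forwards according to a table, and a control plane that decides what goes into that table. In a traditional device both halves ship welded to the same ASIC and chassis; the whole premise of software defined networking is that the upper layers don’t have to be.

```python
# A toy sketch of plane separation, not any vendor's implementation.

class DataPlane:
    """Forwards packets according to a table it does not compute itself."""
    def __init__(self):
        self.forwarding_table = {}                 # destination prefix -> out port

    def forward(self, packet):
        port = self.forwarding_table.get(packet["dst_prefix"])
        return port if port is not None else "drop"


class ControlPlane:
    """Computes routes and pushes them down into the data plane."""
    def __init__(self, data_plane):
        self.data_plane = data_plane

    def install_route(self, prefix, port):
        self.data_plane.forwarding_table[prefix] = port


# In an SDN world the control plane can live anywhere software runs,
# while the data plane stays close to the wire.
dp = DataPlane()
cp = ControlPlane(dp)
cp.install_route("10.1.0.0/16", "eth1")
print(dp.forward({"dst_prefix": "10.1.0.0/16"}))   # -> eth1
```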

You want a router? You get a box.

Want a network switch? You get a box.

Want firewalls, quality of service appliances, or secure access endpoints? You might need three more boxes.

Want a better version of any of the above? You need to get rid of the old box and get a new box.

And those boxes, their maintenance, their management, and all the racking and stacking, unstacking and re-racking, all cost money. Lots of money.

I’ll freely disclose that I work for a company whose legacy is in managed network services. We’re experts in the discipline of keeping all these boxes playing together like a well-tuned orchestra, and it’s been a very good business for us. Standing up new networks requires large capital expenditures, and a team to manage that network – whether in-house or through a service provider – requires large operating expenses. The more types of traffic one drives through those pipes – data, voice, video, i.e. the higher the degree of convergence – the larger those capital expenditures and operating expenses get.

And when those devices get to the end of their useful lives – whether through increased demand or through age – the entire cycle of investment and expenditure repeats.

Designing, implementing and managing networks is a dynamic, risky and expensive endeavor. If you’ve done it for long enough you are bound to think that there has to be a better way.

And now there is.

I was out grazing the Internet when this story got stuck between my teeth — http://www.networkworld.com/article/2358220/smb/ntt–39-s-global-cloud-services-show-what-software-based-networks-can-do.html .

My interest is not entirely academic, as my afore-not-exactly-mentioned employer is also a wholly owned subsidiary of the same corporate parent, so this services roll-out is by a sister company.

The initial suite of features currently in production (Firewalls, Application Acceleration, and Secure Access Control) represents a pretty substantial percentage of the overall market for network technology and managed network services. So the market is definitely there if the technology does what it claims.

And what it claims to do is staggering.

Good engineering has reworked those three layers that make up network services, and deployed code that can be run remotely and virtualized on somebody else’s shared utility computing cloud.

Want to deploy a new firewall?

No box.

Go to a service portal and provision one. Or, if you’re a managed customer, the provider does it for you.

10 minutes later you’ve got a firewall.

You didn’t make a capital investment for a new appliance. And as you run it, you also avoid the maintenance contract, licensing, support and potentially even some staffing costs that used to come with that firewall. The notion of a ‘technology refresh’ – the cyclical reinvestment that also came with that firewall – is rendered meaningless.

Whenever any feature needs to be changed, whether for bug fixes, security patches, or feature upgrades, the virtual machine code can be updated. The provider tests it, determines it works, then shuts down your virtual firewall and brings up a new one.

10 minutes again. Still no box.

You just pay your bill.
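For the sake of illustration, here is roughly what “go to a portal and provision one” might look like when driven through an API instead of a mouse. The endpoint, field names and polling behavior below are invented for this sketch; no real provider’s interface is being described.

```python
# Hypothetical provisioning sketch; the portal URL and API shape are assumptions.
import time
import requests

PORTAL = "https://portal.example-provider.net/api/v1"   # hypothetical endpoint


def provision_firewall(name, rules, token):
    # Ask the service to stand up a virtual firewall. No truck roll, no box.
    resp = requests.post(
        f"{PORTAL}/firewalls",
        json={"name": name, "rules": rules},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    fw_id = resp.json()["id"]

    # Poll until the virtual appliance reports it is running -- the
    # "10 minutes later you've got a firewall" part of the story.
    for _ in range(40):                                   # ~10 minutes at 15s per poll
        status = requests.get(
            f"{PORTAL}/firewalls/{fw_id}",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        ).json()["status"]
        if status == "running":
            return fw_id
        time.sleep(15)
    raise TimeoutError("virtual firewall did not come up in time")
```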

Imagine how much you could save out of your total budget for a single firewall.

Now imagine your network has 300 firewalls.

And 300 application acceleration appliances.

And a few hundred more secure remote endpoints.

See where this is going?

The kind of change that eliminating capital expenditures and lowering operational expenses of this magnitude makes to a business has a direct impact on its financial results.
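If you want to put rough numbers on it, here is a back-of-the-envelope sketch. Every figure in it is an assumption chosen purely for illustration, not pricing from any actual vendor or provider.

```python
# Back-of-the-envelope only; all numbers below are illustrative assumptions.
appliances             = 300 + 300 + 200   # firewalls + accelerators + endpoints
capex_per_box          = 5_000             # purchase price per appliance, assumed
annual_support_per_box = 1_000             # maintenance/licensing per box, assumed
refresh_years          = 4                 # hardware refresh cycle, assumed
annual_service_fee     = 2_000             # virtualized equivalent per instance, assumed

box_annual_cost = appliances * (capex_per_box / refresh_years + annual_support_per_box)
svc_annual_cost = appliances * annual_service_fee

print(f"Boxes:   ${box_annual_cost:,.0f} per year")
print(f"Service: ${svc_annual_cost:,.0f} per year")
print(f"Delta:   ${box_annual_cost - svc_annual_cost:,.0f} per year")
```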

So if the technology works, this is what good business and technical people will do. This technology will completely change the nature of the technology universe.

Now no one will start ripping out running networks to be the first to do this.

But for startups or organizations deploying new locations, this will look smarterfastercheaper.

For companies whose business involves the sale and support of traditional networks, this looks like one of those pivotal opportunities: either get on the software defined networking technology train, or risk being run over by it.

The Fantastic Four vs. Auditman


Admit it.

When you were about 9 years old, you know you wanted to be Superman.

Ok. Maybe it wasn’t Superman. Perhaps it was Spiderman. Or The Flash. Or Johnny Storm, the Human Torch.

But as a little kid, with no filter of maturity or responsibility or self-consciousness to interfere with those innermost longings, you know you wanted to be a superhero.

How could you not?

Who wouldn’t want to fly, to bend time and space, to control the elements and physics itself?

Who wouldn’t want to make the impossible actual?

Of course you did.

You’re not 9 anymore, though, and humming the Superman theme song under your breath isn’t going to help with the marching orders you just received at work.

Superhero powers will be required… especially the compression of time and space one.

Startup companies and anyone that works on software projects know exactly how this goes. A customer project due date or a ship date gets established, usually in a complete vacuum of any understanding about what needs to occur to actually meet the commitment. Then, good people, with better intentions, and really understanding family members, burn the midnight oil, then burn the 2 a.m. oil, then watch the sun come up. And repeat. Sometimes for weeks at a time.

These superheroes come up against a seemingly impossible problem and just gut it out by throwing themselves at it, hoping that the flash of insight needed to get through it will occur just when it needs to, and just in time to save the day.

You and I, we idolize these people. How many times have we sat through a team meeting where after a successful delivery a glowing management type has gushed over the accomplishments of one of his engineers? We respect their effort, their determination, their results, their superpowers.

They deserve our respect, and we give it to them.

But is this really how it’s supposed to be?

Inevitably there are people that will say that this IS how it’s supposed to be – that without the hard, cold pressure of a cruel deadline, no one ever accomplished anything. That perspective, and the barely organized chaos it engenders, make it possible for the same dysfunctional pattern to recur again and again.

It’s like we want to see the flashing capes, see Mr. Fantastic stretching his mind and his body to solve the unsolvable, not see the Invisible Woman as she manipulates things we cannot perceive, see The Thing crush, see the Torch melting weaponry that was thought to be invincible. We want to see the full superheroic arsenal – intellect, physicality, raw power, excitement, explosions – on full display in the achievement of our technologic goals.

It only takes a minute’s introspection, however, to really understand that in the world of technology and technology service, we shouldn’t be craving this kind of excitement, when what we really need is predictable, utterly undramatic boredom. When we contract with a partner to complete a technology implementation, we don’t want Mr. Fantastic, we want Auditman – easily identified by his green eyeshade and readers.

Repeatability and predictability – boredom – are brought about through expert and technically informed project planning, mature processes and disciplined execution. If a technology or service partner is regaling you with battle tales of finishing code or implementing network services in three-day-long battles under withering enemy fire from disgruntled aliens, what they’re sending you is a coded message: that they hadn’t done this before, or that they didn’t understand what it would take and didn’t leave themselves enough time.

If your provider is telling you that their last technology deployment connected 3,000 sites, that every one was completed when it was supposed to be, that no-one ever so much as got the slightest bit of raised blood pressure, no one was ever late for dinner, and everyone got to bed on time, that’s the kind of war story you should find exciting.

We all need our heroes.

Just save them for the comics.

****


All comic book characters are the property of their respective publishers. My thanks to Siegel and Shuster and to Stan Lee, who taught me that not every lesson from books comes from schoolbooks. And my apologies to the accounting profession for even suggesting they are all boring.

TTL


Pop geek quiz.

If you didn’t even have to think to know the meaning of this acronym, this bit is for you.

If you don’t know that TTL – the Time To Live value – is what keeps networks from collapsing under the weight of all the data that would otherwise be bouncing around lost, never to arrive or be useful in any way, then this bit is also for you. Twice.

Some fundamental design concepts prove to be so large in their application that one wonders how we managed to successfully breathe in and out before they revealed themselves to their designer. The sheer mass of these fundamental concepts only reveals itself to us over time as well, as we keep finding new applications for them after we had all long assumed there couldn’t be any more.

The TTL, or network Time To Live, assumes that all data in transmission has only a certain potential usefulness when viewed in terms of time. If data shows up at its destination after a certain point, it just shouldn’t have bothered showing up at all. Oh, and as an aside, if it showed up that late or was otherwise still being transmitted after that time, it was probably lost anyway.
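The mechanism itself is simple: in IP, the TTL field is a counter that each router decrements as it handles the packet, and when the counter hits zero the packet is discarded rather than circulating forever. A minimal toy sketch of that behavior:

```python
# A toy simulation of TTL expiry, not a real IP stack.

def route(packet, hops):
    """Walk a packet through a list of routers, honoring its TTL."""
    for router in hops:
        packet["ttl"] -= 1                    # every hop decrements the counter
        if packet["ttl"] <= 0:
            return f"dropped at {router} (TTL expired)"
    return "delivered"


looping_path = ["r1", "r2", "r3"] * 50        # a routing loop that never ends
print(route({"dst": "203.0.113.7", "ttl": 64}, looping_path))
# -> dropped at ... (TTL expired): the loop cannot hold the packet forever.
```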

And here’s where TTL unexpectedly gets larger.

Explosively, massively, relativistically larger.

The network engineer will tell you that data should only have a defined Time To Live on the network.

I’m here to tell you that Information should also have its own – configurable – Time to Live as well.

Think about that for just a second, maybe two, and imagine what it would mean if the length of time that certain types of information could be stored were tunable according to its intended use.

Imagine the impacts on IT security if the types of commercial data that are consistently being breached were set to self-destruct and be renewed in so short a period of time that stealing them would get you virtually nothing.

What got me thinking about this was a case reported in the New York Times — http://www.nytimes.com/2014/05/14/opinion/ordering-google-to-forget.html?_r=0 – about a Spanish citizen winning a court case that has created a precedent that Individuals control the rights over whether or how long information about them should be made available via the Internet.

So it is, essentially, up to you – at least under Spanish law – to determine how long information about you is publicly shared.

So we have a principle that – today – does not yet have a technical solution or an infrastructure to support it. The principle that all personal information is controlled by the associated human.

TTL.

Imagine all of the types of information that people will want to restrict. Pictures of you…doing that thing…. at that long ago frat party. Adverse employment actions. Stupid Facebook flame wars on politically incorrect subjects. The list is virtually and literally endless.

The entire structure and implementation of networks and file systems will need to be reworked to include the header and metadata structures to support and implement this.
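Purely as speculation about what that metadata might look like, here is a minimal sketch of a record that carries its own expiry, the way a packet carries its TTL today. None of this exists in any real file system; the field names are mine.

```python
# Speculative sketch of per-object "Information TTL" metadata; nothing here
# corresponds to an existing file system or network standard.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class InformationTTL:
    created_at: datetime
    time_to_live: timedelta          # set by the person the data describes
    owner: str                       # the "associated human" who controls it

    def expired(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now >= self.created_at + self.time_to_live


frat_party_photo = InformationTTL(
    created_at=datetime(2014, 5, 1, tzinfo=timezone.utc),
    time_to_live=timedelta(days=30),
    owner="you",
)
# A TTL-aware store would refuse to serve, or would destroy, anything expired.
print(frat_party_photo.expired())
```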

It’s something that will totally revolutionize the way we regard and handle all digital information in the future. It could just restore rational operation to the whole psyche of the entire human race, which has been made collectively insane by being unable to forget anything at all.

Information Time to Live. Information that is designed to be forgotten.

Have your lawyers call my lawyers to draw up the agreement. You know that this little brainstorm of mine is the way forward for us all.

Transparency


I’ve had some great gigs and I’ve had some terrible gigs.

One of the best I ever had was as the Solutions Architect for the Custom Services Unit of the now absorbed Compaq Computer Corporation.

In that role, my job was to work with our Sales Teams to engage with clients, understand their businesses, and work with them to determine how we could best help them to achieve their goals.  Our clients were an unusual stew of small businesses, start-ups, and a few smaller government customers that had been experiencing uncharacteristically un-governmental, sweeping dynamic change.

Really, from the point of view of an alpha propeller-head, it was a nearly perfect little world.

I would talk to customers to understand where their organizations were and where they were going. We would identify and prioritize their most pressing information challenges. They would tell me what technical capabilities they had and what kind of budgets they could bring to bear to address the problems. With that conversation in hand, I could then provide them access to the technical assistance and solutions they needed and could not easily provide for themselves.

Lots of people would look at this as some sort of driving to get the order, and those folks would see a stressful, goal oriented process. For me, though, it was working together, actively listening, and letting your customer tell you where they thought they needed help.

After one of these conversations, I would go back to my home office and write up my notes into a statement of work. I’d identify the different resources it might take to deliver the project, and develop some resource estimates and a commercial structure.  If the job was going to take hardware or software to complete, I would put together a bill of materials with servers, storage and networking gear that Compaq sold. I’d head back to the client’s and review the contract in detail with them. I don’t recall ever writing one up where I didn’t get the go ahead to deliver the service.

So we’d sign the statement of work, shake hands, and I’d pull my Aerostich on over my business clothes and ride my motorcycle back home. My business suit would go back on its hanger in the closet, and the next morning I’d pull on a pair of jeans and an oxford cloth shirt, put my tools in my laptop bag, and ride back to my client’s place and start to build the stuff I’d sold the day before.

It is the easiest thing in the world to make promises when you know you will be the one keeping them.

It sounds simple, but it really isn’t.

Say exactly what you mean. Mean what you say. And follow through and do what you said you would do.

My customers understood that that was the deal – that they weren’t likely to have to appeal to some disengaged Account Manager that didn’t really have any skin in the game.  I’d said I’d take care of it, and even though I always had support from specialist members of my team – Network, Storage and Messaging System pros – they knew that in the end I would make sure that they got what I’d committed to.

Now not every IT Service engagement is that simple.

They should be, though.

I’ll admit that my case – where I was the salesman, consultant, architect, Project Manager and service delivery resource – is an unusual, best-case scenario that borders on some sort of IT Nerd Elysium.

When you engage with an Information Technology service provider, you’ll be dealing with at least that half a dozen people, as well as their internal legal, finance and management teams. Every additional participant in that process tends to make the entire journey slower, longer, less agile, more cumbersome, harder to understand and, ultimately, harder to manage to success.

My Compaq Statements of Work used to average between two and a half and four pages. Nobody had any questions about what was being described and what work would be performed.  Since then, I’ve seen contract documents that looked like the Manhattan phone book, and whose content required a legal team and an accountant to untangle, and even then your entire team might not agree on exactly what was being said.


When you go to the market, clarity, transparency and simplicity are what you should demand. Service Providers that comport themselves in this manner are putting themselves in your – the client’s – shoes, not hiding behind language and instruments whose jobs are to obscure and obfuscate what you’re getting for your money. Information Technology has become so complex and the commercial stakes are so high that going it alone is no longer a viable option.

If what your IT Provider is saying isn’t absolutely clear to you, you need to find another provider.

Digital Illiteracy


Sure, you can write your data, but will you be able to read it?

There was a time in my professional life when I spent lots of time and energy helping people implement database, file and messaging servers.

Inevitably, folks would want to talk about some form of archiving.

They’d have been the recipient of some pitch or seen some spec sheet on some wild optical media device, and then been seduced by the dark side.

 “25 years! 50 year media life!”

“Ha!”, I would say, usually projecting some subtle degree of smugness, “and how are you going to read it, 50 years from now?”

 “What application, what file system, what hardware, what technology are you planning to have around then that will give you some confidence of your ability to read this stuff?”

That good advice was, at the time, largely theoretical, a matter of informed conjecture.

Just last week though, the news provided me with not one, but two examples of that specific problem playing out.

The first was the case of the Andy Warhol Museum coming upon a group of old diskettes in their collection, as reported in the Washington Post.  The Pittsburgh museum was fortunate enough to be able to work with the Carnegie Mellon Computer Club — remember Mach Unix? — who had the curiosity and the skills to piece together what computer and application created them. Even more fortunately, the Computer Club actually had a running example of a Commodore Amiga in their collection, which allowed them to read the disks and open the files.

The files, in this case comprised at least 4 recognizable signature original Warhol works, along with several more bits of Andy just mucking about with a new computer.  

Andy is an icon of 20th century style. These might not be irreplaceable masterworks, but Warhol’s whole approach to art was that all of his work was conceived to be eminently replaceable.

The second current example of getting back the data is a subject of somewhat more gravity.

It seems that back before the Apollo moon landings, NASA sent several missions to map the moon so we would be able to identify safe landing spots for the planned manned missions. Doug Bierend at Wired has written a fascinating story about a project to recover the information.

It seems that the photographic images — including the first ever of the dark side of the moon — were sent back as analog data. The pictures that were used and published at the time were made by transmitting single scanned screens of information from a photograph made by the satellite, printing them out on thermal printers in 20-foot-long strips, and then taping the strips together into a whole image. That resulting montage was then photographed and passed to the wire services. If you’ve ever seen the contemporary published photographs, they’re notable for a weird, banded appearance that is an artifact of that oddball process.

No one, actually, has ever seen the original photographs, because the satellites that took them crashed into the moon.

I used to work for a company that was headquartered within sight of the impressive blimp hangar at Moffett Field in Silicon Valley. Up and away!    At NASA’s Ames Research Center, located there, there is a closed McDonald’s restaurant filled with Analog Tape Drives and a crew of techs who describe themselves as “digital archeologists” and refer to their Jolly Roger-flying former fast food joint as “McMoon’s”.

These talented folks started from scratch and had to reverse engineer the data formats and how the information was encoded. Nothing was known, and the nature of the problems that needed to be solved ranged from the sublime to the ridiculous.

Do you know that the specifications for a 1966 Ampex tape drive call for whale oil to be used as the lubricant? Like they have that down at the local Radio Shack. 

To cut to results, these guys succeeded in cracking the encoding, hacking a bridge into Adobe Photoshop, and recovering and rendering the information at better resolution than is possible with our current digital cameras.

The resulting photographs are staggering — breathtaking.

The information was there, but no one had ever devised a way to see it until now.  We sent rockets to the dark side of the moon, but never really saw the full results of that work — just dim shadows.

If there’s a Nobel prize for Geekery, these guys have my vote.

So if you find yourself in the position of making decisions about a new information system, my old good advice about considering how you intend to access that information in the future is doubly good now. The life cycles of data storage devices have shortened dramatically. Operating systems, applications and file systems have gotten more and more fragmented. If you’re planning on storing large amounts of data with a cloud provider, you don’t want to be surprised if it turns out they haven’t really considered the portability and the tools they’ll use to move your data if you need them to later.

In system design, you need to spend nearly as much time considering how to manage the access life cycle of the data as you do figuring out what data you will write.
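One small, hedged example of what that looks like in practice: never write an archive object without also writing down, right next to it, what you expect to need in order to read it back. The manifest fields below are my own suggestion, not any archiving standard.

```python
# Sketch of an access-lifecycle habit; the manifest schema is an illustrative
# assumption, not an established format.
import json


def write_archive_record(payload: bytes, path: str, fmt: str, reader: str):
    # Write the data itself.
    with open(path, "wb") as f:
        f.write(payload)

    # Write a human-readable note about how to read it back later.
    manifest = {
        "path": path,
        "format": fmt,                 # e.g. "TIFF 6.0" or "CSV, UTF-8"
        "known_reader": reader,        # software known to open it today
        "review_by": "2029-01-01",     # date to re-verify readability, assumed policy
    }
    with open(path + ".manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)


write_archive_record(b"...", "frame_0042.img", "raw analog scan, 16-bit", "in-house decoder")
```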

’Cause it does you no good whatsoever to write if you can’t read.

Pearl Street Station

Edison’s Jumbo Dynamo

I’ll admit it.

I’m a server hugger. I have an inappropriate emotional relationship with hardware and software. I want my information and my most important stuff on my own machinery. I want to apply all my long fought geek mastery to maxing out performance, redundancy, and fault tolerance. I want to hear the white noise from an incomprehensible number of disk spindles and cooling fans shrieking at max RPM, to feel the racks shaking, see disk lights flashing in hypnotic patterns when I/O starts to peak.

Oh yeah.

I’ll hazard a guess that if you’ve hung with me this far, that you’re one, too.

And you and me, pal, our way of looking at the IT universe is completely obsolete. Our attachment to iron – to the visible manifestation of our power over information – has made us both as endangered as typewriters, roll-film cameras, carburetors and 100 watt incandescent lightbulbs.

Lightbulbs and their inventor, Thomas Edison, are, in fact, the best illustration of why the way we all look at Information Technology has to change.

In 1879, Thomas Edison successfully designed a practical electric incandescent light. Practical, in that it worked, was safe, and could be manufactured for a reasonable cost. There was just one little problem.

There was no electrical supply.

Accordingly, in 1882, Edison designed, constructed and opened the Pearl Street Power Station, which was strategically located in the most densely populated district of lower Manhattan. Edison’s technical accomplishments and investments were enormous. To meet the projected demand, Edison designed ‘The Jumbo Dynamo’ – 27 tons of steam boiler-powered cast iron, steel and brass that could generate 100 kilowatts of electric power. The Pearl Street Power Station had six of these dynamos, with their fireboxes and boilers extending two additional floors beneath the street. The cost and risk were unprecedented, with the construction of the station costing nearly a third of a million 1880s dollars, and income that didn’t recoup the investment until the station’s third year of operation.

I’ve stood in front of one of the Jumbos while it was operating, and it is a truly awesome experience, with massive parts spinning within arm’s reach that shake the floor and allow for a visceral appreciation of the power required to run the thing and the power that it makes. The reason I’ve been able to observe a Jumbo in operation is that, today, one of the two remaining examples is fully operational in its home in Greenfield Village at the Henry Ford Museum in Dearborn, Michigan. At Greenfield, the Jumbo sits in the relocated Detroit Powerhouse #1, where it is honored as a National Historic Mechanical Engineering Landmark by the American Society of Mechanical Engineers. The Jumbo sits in a museum because, despite the extraordinary breakthrough it represents, it was a dead end in the history of power engineering – it was, bluntly, a mistake.

Mistakes like these are an inescapable fact of technology and engineering. If Thomas Edison could make them, so can you.

Edison’s mistake was simple, but fundamental. Like many entrepreneurs, he was so close to the technology he created that it was difficult to see its flaws. The Jumbo generated Direct Current (DC) power, which is not really suited for transmission over any significant distance, and has other technical issues. As demonstrated by Nikola Tesla, Alternating Current (AC) was far better suited for general electrification and combined residential and industrial use. Pearl Street Station was decommissioned in 1895, and the entire electrical world has been all AC ever since.

So why should you, as a modern Information Technology practitioner, care about orphans from the dawn of the industrial age?

Simple. You do not want to be building powerplants – you just want to turn on the lights.

Think about some of the quantum leaps in the power of computing technology across the last few years. 64-bit processors. Virtualization. InfiniBand and 40Gb networks. Solid State Drives. Embedded flash cache memory. Non-relational large object databases and datastores. Software defined networking. Analytics software running on graphics processing units. If every time a new technological development came along you needed to take your precious infrastructure to the landfill and start over, you would be out of business. The cost of building these IT infrastructures continues to rise, and startups and agile companies of all stripes do not want to bear the capital investment costs of both the initial build-outs and the ongoing IT arms race that follows.

Utility computing, where processing power and storage are simply provided as a service, just like electricity, is the permanent solution to that intractable problem. There are only a handful of service providers globally that have the ability today to support global enterprises and workloads.

Why do these types of Utility Computing Service engagements make compelling technical and economic sense? First, they are wholly outcome and usage based. You are only paying for an outcome, not a specific technology or method. Customers only pay for what they actually use. As methods improve, Service Providers are motivated to deploy them, as doing so increases their value to customers. The underlying in-house IT principle of trading massive amounts of capital investment for capacity that likely will not be fully utilized does not make business sense.

Second, in-house IT architectures have to be over-architected and over-specified. If you have one day a year when your systems need to support maximum load, you need to specify everything to ensure that you can support that one day. And you had sure better not miss your calculations and come up short. Trusting your workload to a properly designed utility computing service means that if for the other 364 days you only need 5% of that peak capacity, you run at 5%, and on that peak day it takes all of 15 minutes to spin up the extra capacity required to support your maximum workload.
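A quick illustrative calculation shows why that matters. The unit costs and utilization figures below are assumptions made up for this sketch, not anyone’s actual rates.

```python
# Illustrative arithmetic only; all figures are assumptions.
peak_units    = 100          # capacity units needed on the one peak day
average_units = 5            # 5% of peak for the other 364 days
unit_day_cost = 10.0         # assumed cost per capacity-unit per day

owned   = peak_units * 365 * unit_day_cost                         # sized for peak, always on
utility = (average_units * 364 + peak_units * 1) * unit_day_cost   # pay only for what runs

print(f"Own it, sized for peak: ${owned:,.0f} per year")
print(f"Utility, scaled to use: ${utility:,.0f} per year")
```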

Finally, specialist service providers can provide the engineering innovation and the ongoing capital investment and re-investment in ways that your organization simply cannot. Today, when the most plain vanilla commodity datacenter server lists for over $10,000, and a high end storage array may list for over $1,000,000, a cloud service provider’s technology purchasing power and the resulting economic leverage they can provide are critical enablers of your business’s ability to invest in your business instead of in technology. Because of the sheer number of enterprises which a Cloud provider services, and because a well-designed Cloud’s architecture should be vendor and technology agnostic, a provider can negotiate much better prices for technology. Combining purchasing leverage with the fact that these resources are shared across the customer base means that prices for providing IT services, and for phasing in new technologies, are orders of magnitude less than in-house options.

So the next time you fancy some heavy data center gear – some faster, bigger, nuclear-powered black IT monolith that will have you on your knees begging your Finance team for another Capital Expenditure – think of Pearl Street Station. It’s not a question of if, but when, your new object of technolust will be sitting in some museum with a guy like me standing in front of it wondering, “Why did anyone ever think this was a good idea?”

You want to concentrate on solving problems for your business. You don’t want to be investing in and building powerplants, you just want to be flipping on the light switch and paying the electric bill each month. Utility computing offerings can help you realize that ambition.