The famous inventor Thomas Alva Edison was admired for his tenacity and drive. He didn’t easily accept failure, nor admit to having failed. For example, after countless failed experiments trying to create a light bulb, he stated: “I haven’t failed. I’ve just found 10,000 ways that won’t work.”
Inventors are a nimble bunch who know about unintended outcomes and the importance of improvising. Edison also once stated, “Just because something doesn’t do what you planned it to do doesn’t mean it’s useless.”
Many IT pros, especially those tasked with creating new applications, can relate to Edison: they know all about elusive success and about trying to salvage something useful when the outcome isn’t optimal.
Many an unsuccessful development project that ran out of time and money has been declared successful after political maneuvering trimmed the requirements back to match whatever could be delivered. How many of your new apps meet their originally defined requirements and were completed on time and within budget? That would be a success worth noting.
Information about project failures is also elusive, because an organization that has just squandered big bucks learning what doesn’t work stays tight-lipped for competitive reasons. The likelihood of finding people in IT who will talk about their failures diminishes as the magnitude of those failures increases. Avoiding embarrassment and bad publicity are also compelling reasons for silence. Unfortunately, keeping quiet allows others to make the same expensive mistakes.
This lack of discussion contributes to the high failure rate for IT projects, both large and small, because different organizations keep repeating the same mistakes. A common reason so many intelligent people make such bad choices on platforms and vendor toolsets is that they believe they’re equipped with capable tools when they start down the path, only to learn of critical shortcomings later, usually an inability to scale.

Whenever I’m trying to explain the concept of limited scalability to a non-technical manager, I use a simple analogy: lots of things can be used as hammers, but they don’t scale as well as real hammers. I ask, “Have you ever driven a small nail with something other than a hammer, and if so, what did you use?” Frequent answers include a rock, a screwdriver handle, the back of a wrench, the butt-end of a paper stapler, and so on. I then point out that as either the size of the nail grows or the frequency of use increases, the scalability limitations of each alternative to a hammer are quickly revealed.
When choosing platforms and software tools, lessons learned from some of the larger IT fiascoes I’ve studied reveal four areas, which I call “gaps,” that warrant attention early on.
First, identify critical functionality gaps. Properly comparing any two solutions requires looking at the actual levels of work each system can sustain. A common mistake IT planners make is to limit their consideration to raw power and speed while overlooking throughput capabilities. It’s when you weigh the impact of things such as security, virtualization, cryptography, serialization, data sharing, and clustering that the differences start to become clear.
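To make the throughput-versus-raw-speed point concrete, here is a minimal Python sketch. Everything in it is invented for illustration, including the record shapes and workload sizes; it simply shows how per-request costs such as serialization and cryptography erode a system’s headline operation rate:

```python
import hashlib
import json
import time

def measure(work, payloads):
    """Time a workload over a batch of payloads and return operations per second."""
    start = time.perf_counter()
    for p in payloads:
        work(p)
    elapsed = time.perf_counter() - start
    return len(payloads) / elapsed

# Hypothetical records standing in for real request payloads.
records = [{"id": i, "name": f"user{i}", "balance": i * 1.5} for i in range(5000)]

# "Raw power": a trivial in-memory computation per record.
def raw(r):
    return r["balance"] * 1.1

# The same computation plus the serialization and cryptographic costs
# a real request path might add.
def with_overhead(r):
    wire = json.dumps(r).encode()     # serialization cost
    hashlib.sha256(wire).hexdigest()  # cryptographic cost
    return json.loads(wire)["balance"] * 1.1

raw_ops = measure(raw, records)
real_ops = measure(with_overhead, records)
print(f"raw: {raw_ops:,.0f} ops/s   with overhead: {real_ops:,.0f} ops/s")
```

A benchmark comparing only the `raw` path would flatter both candidate systems equally; it’s the second measurement, with the overheads a production workload actually carries, that separates them.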
Second, be honest about the knowledge gap. It can be tough to find people who can properly compare technologies, so get qualified technical help during architectural planning, and be diligent when checking vendor references and track records. If a strong reference base doesn’t exist, at least you know you’re about to become a pioneer on the bleeding edge.
Third, assume one or more scalability gaps exist. How do you plan to find them? Many a mega-failure went live after only cursory testing and simply couldn’t scale up to handle increased workloads. Make sure a proof of concept identifies stress levels and breaking points for realistic workload projections. One $400 million fiasco resulted from a new customer service application built on a distributed platform the vendor promised could support hundreds of users per server; response time became unacceptable after 10 users logged on. Not long after, the distributed platform was quietly scrapped and the application moved to the mainframe.
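The ramp-up idea behind such a proof of concept can be illustrated with a toy capacity model. All the numbers here are invented: the 50 ms per-request service time, the 2-second response-time ceiling, and the worst-case assumption that concurrent requests serialize on a single shared resource. A real proof of concept would measure rather than model:

```python
# Toy capacity model: every number below is an assumption for illustration.
SERVICE_TIME = 0.05   # assumed seconds of work per request
SLA = 2.0             # assumed maximum acceptable response time, in seconds

def response_time(users):
    """Worst-case estimate: concurrent requests serialize on one shared
    resource, so each user waits behind every other user's request."""
    return users * SERVICE_TIME

def find_breaking_point(max_users=500):
    """Ramp up simulated users until the modeled response time blows the SLA."""
    for users in range(1, max_users + 1):
        if response_time(users) > SLA:
            return users
    return None  # never broke within the tested range

print(f"SLA violated at {find_breaking_point()} concurrent users")
```

The point of the exercise is to find the knee of this curve against realistic workload projections before go-live, rather than discovering it in production the way the $400 million project did.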
Last, be realistic about the credibility gap, and surround yourself with partners and advisors you can trust and who have proven track records. Be especially wary of independent consultants with close ties to vendors. Always remember: prescription without proper diagnosis is malpractice. Choose your platforms and software tools wisely.