The way we dealt with savings in IT in the past was often more tactical than strategic, with results we didn’t always anticipate. Can you honestly say we did a proper risk analysis when we were asked to cut another 10 percent? The effects of years of cost-cutting exercises have taken some time to surface, but they’re finally hitting us at exactly the worst possible moment.
Today, every transaction counts more than ever before. People are shopping around more than they used to because they simply have less to spend. And by “people,” I’m not just talking about customers using application services directly; I’m also including client-facing staff, whose applications must be snappy, reliable, and deliver impressive response times. But what we’re seeing is quite the opposite: the applications, databases, and equipment they run on have suffered from the many tactical savings exercises.
The nature of applications is that their behavior changes over time: the amount of data changes, the workload itself changes, and the workload shifts in time (from work days to evenings to weekends with the introduction of 24x7 help desks, mobile apps, etc.). We know this, and that is why, for many years, we spent a great deal of time managing and monitoring the trends of our applications, transactions, and systems. In many companies, the people in those roles were the first to go when IT was asked to cut costs.
Even six to twelve months after these changes went into effect, many CIOs looked around and didn’t notice a real difference; things were running as usual and there were no real complaints. But 24 months later, entropy began to set in, small cracks started to appear, and the unpredictable nature of applications began to take its toll. Instead of linear performance degradation, we saw progressive performance problems, resulting in reduced availability, missed Service Level Agreements (SLAs), and unexpected costs. Firefighting, new hardware, unplanned program changes, and database maintenance jobs all turned out to be very expensive.
Add to this the complexity of today’s cross-platform applications, which touch everything from mainframe databases to transaction servers running on Linux to services running in the cloud, and you just know things will only get worse unless we do something.
That something is investing in Application Performance Monitoring solutions. Not just to manage the performance of your complex applications, but also to:
• Identify which applications are among the top-10 worst performers and fix them, instead of randomly looking for performance problems
• Provide a “single source of truth” to reduce the finger-pointing once a performance problem is discovered, reducing the time to resolution and saving lots of money
• Monitor trends to head off the unexpected acquisition of new hardware, solve performance problems before they escalate, and avoid the walk to the CFO to ask for more money
But, most of all, we must invest in a solution that keeps application delays from turning into nightmare problems in an environment where every business transaction counts. Look around and it’s clear that companies that recognize the customer experience is paramount will have a demonstrable advantage over their competitors. This is IT’s chance to show what we do well: help our companies come out of this one better ... stronger.