Change is accelerating everywhere for IT departments, as business trends heighten the demand for enhanced performance and increased efficiency. In an information age, it’s the IT guys who have to make it all work. For that, a new approach dubbed Application Portfolio Management (APM) is arriving in the nick of time.
Making it all mesh is harder than outsiders can possibly understand, and that gap can lead to poor upper-management decisions. The risks of changing complex systems demand a true enterprise-wide view for fact-based decisions: the ability to see and understand all the links and data flows, usually across mixed systems that may include mainframe, Unix, Windows, System i, and more, running applications written in many languages.
Until recently, it either wasn’t possible or was prohibitively expensive to generate a map of all applications and data flows, regardless of their platforms. Today, the techniques to deliver comprehensive views are evolving fast to meet business needs. Transitions to new models such as Service-Oriented Architecture (SOA) require accurate knowledge of all affected systems.
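The kind of map described above is, at bottom, a directed graph of data flows between applications. As a minimal sketch (the application names and platform mix are hypothetical, not taken from any specific APM tool), the following shows how such a graph supports the impact analysis that makes change decisions fact-based: given one system slated for modification, find every downstream system it feeds.

```python
# Hypothetical portfolio: each edge means "source feeds data to target".
# Platform notes are illustrative of a typical mixed environment.
data_flows = {
    "orders_rpg":     ["billing_cobol", "crm_dotnet"],  # System i feeds others
    "billing_cobol":  ["reporting_java"],               # mainframe -> Unix
    "crm_dotnet":     ["reporting_java"],               # Windows -> Unix
    "reporting_java": [],
}

def downstream_impact(app, flows):
    """Return every application reachable from `app` via data flows."""
    seen, stack = set(), [app]
    while stack:
        current = stack.pop()
        for target in flows.get(current, []):
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return seen

# Changing the order system touches billing, CRM, and reporting:
print(sorted(downstream_impact("orders_rpg", data_flows)))
# → ['billing_cobol', 'crm_dotnet', 'reporting_java']
```

Real APM tools build this graph automatically by parsing source code and job schedules across platforms; the point here is only that once the graph exists, "what breaks if we change X?" becomes a mechanical traversal rather than guesswork.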
The tools of APM aren’t yet well-understood in many companies, but for mapping complex, mixed systems, APM’s arrival is opportune. We’re in a period some call “post-IPO,” meaning the huge public offerings of the ’90s have all but disappeared. In their place, we now see industry consolidations. U.S. mergers and acquisitions volume alone was $1.5 trillion last year, up 24 percent from 2006, according to provisional figures from data provider Thomson Financial, cited in a January McLean Group report.
In a parallel trend, outsourcing of vital services, often to remote locations, has dramatically changed job markets in advanced countries but doesn’t always deliver the promised efficiencies or cost reductions.
Globalization tends to accelerate these trends as companies adapt to an increasingly international business environment. Companies find that rationalizing existing systems can help improve the flexibility and performance of their business processes. But their systems often grew over a period of decades, in highly variable mixes of platforms and languages. Long periods of gradually accumulating code and process modifications created systems that are rarely well-documented and never simple to change.
Yet to remain competitive, companies must quickly respond.
Understanding the Problem
Since the ’80s, the problem for developers has been making needed changes fast enough to meet the next wave of evolving business demand. The first step is always to understand what needs changing, and on closer inspection that challenge is far from trivial.
In a recent report, Peter S. Kastner, vice president and research director for IT at the analyst firm Aberdeen Group, noted: “With an average of 15 million lines of code per enterprise, we estimate that the average Global 1000 enterprise has produced 1.5 billion lines of code over the past 30 years at a 2006 replacement cost of $75 billion.”
Clearly, protecting this huge asset base is a high priority. Snowballing upgrade costs to rationalize, modernize, or consolidate systems are a real risk. Kastner’s report calls for “long-term investments in technology and people skills in five areas, including Web browser information delivery, business process modeling, legacy application modernization, data migration and information as a service, and SOA middleware.”