Bill Ulrich is president of TSG, Inc. (www.tsgconsultinginc.com), a management consulting firm specializing in advising and mentoring businesses on business architecture and business/IT architecture alignment. He is also co-author of Information Systems Transformation: Architecture-Driven Modernization Case Studies (Morgan Kaufmann, 2010). We visited with Ulrich to discuss his recent book—along with the case for application modernization—and what sites are doing about it.
Mainframe Executive: Sites have had a decade of mainframe modernization. Over this period, what lessons have we learned about what does and doesn’t work?
Ulrich: We keep learning—and unlearning—the same things. I restructured my first COBOL program in 1980, and robust modernization methodologies date back to the early ’90s. However, true modernization, which is characterized by three interrelated disciplines—analysis, refactoring, and transformation—has stalled in recent years. In part, it has been replaced by a “lift and shift” mindset, which moves intact systems to non-mainframe platforms and is driven by the narrow goal of saving money on mainframe costs. From this, organizations have learned several key lessons. The first is that ignoring core software assets, or allowing them to degrade over long periods, squanders powerful assets and doesn’t position companies for business agility. Application and data structures are highly isolated, while businesses, driven by customer demands and the need to be far more agile with business and competitive intelligence, are trying to become much less so. Simply moving older architectures via lift and shift can be a real dead end because systems leaving the mainframe languish under emulation environments.
One company we worked with is now undergoing a major replacement effort for a set of lifted and shifted applications because they boxed themselves into older emulation tools and couldn’t take advantage of the advances in System z and related middleware over the past 10 or more years. Their systems and data degraded and it’s hurting them strategically. They’re looking at packages, major rewrites, and retooling. There are much more elegant ways of transforming applications to target architectures that involve platform changes. We present a handful of case studies on this topic in our book. What does work is systematic analysis, business/information technology architecture mapping, refactoring, and reuse where it advances a given set of project objectives. There’s no need or justification for moving most applications from the mainframe, but you can improve them dramatically and incrementally while achieving real business value. The mainframe isn’t going away for most organizations.
ME: What are the basic ground rules of modernization?
Ulrich: We’ve outlined 15 principles of modernization (see the accompanying sidebar) that are accepted truths about how we should act when our decisions involve an environment permeated by an installed base of existing software systems. Following these principles helps you avoid catastrophes and wasted IT spending. Don’t pretend that legacy systems aren’t there. Instead, embrace these systems and look for creative ways to work with them, using modernization as a foundation to turn application and data architectures into opportunities that avoid major rewrites and potential software package acquisitions. Businesses can’t keep making costly investments in projects that are delivered late, aren’t delivered at all, or, if delivered, provide marginal business value.
ME: Approaches to modernization have ranged from modest screen-scraping and leaving almost everything mainframe-resident to totally shifting applications off the mainframe. Is there a “best architectural” approach to modernization if you’re coming from a mainframe environment that minimizes time to market while maximizing Return on Investment (ROI)?
Ulrich: Screen-scraping was the poor man’s approach: it let the organization think it was achieving SOA, but it wasn’t business-driven to an appropriate degree. Principle 14 says that initial project stages should achieve early wins for frontline business users through the alignment of business processes, user interfaces, and shadow (business-deployed) systems. This gains the confidence of the business when early ROI is achieved, and it gives the IT department some room to explain how more invasive refactoring and transformational solutions can provide more long-term, robust value. The best architectural mix is a twofold approach. First, launch a series of rapid response projects based on achieving early wins for the business that create business value while laying a foundation for back-end changes to core application and data architectures. Second, the issue of moving or not moving a system to another platform may come up, but this should be driven by the degree of difficulty balanced against the business benefits of such a move.
ME: What should sites include in their application modernization toolsets and what questions should they be asking toolset vendors?
Ulrich: There are different types of tooling, but the basic toolset should have a solid analysis capability built on top of a robust tool repository that ideally has a standards-based ability to exchange systems metadata with other tools. The tool should cover your basic environment. For example, if you’re a System z customer, your analysis tool should cover COBOL, CICS, IMS (if you have it), DB2, and any other languages and databases as appropriate. There are tools for other mainframe languages, but not all tools cover all languages. So, there’s a need to have multiple tools cover multiple environments and to be able to exchange information via a repository standard such as the OMG’s [Object Management Group’s] Knowledge Discovery Metamodel [KDM].

Beyond this, you should have some basic refactoring tools. IBM still has a restructuring service, but there are also good tools for slicing logic out of applications and dramatically streamlining them. The basic tools can be used for a wide variety of refactoring approaches, including rationalizing data definitions down to a standardized subset. I caution people on the use of rule extraction tools. They’re not toys, and unless you have a clear method, usage scenario, and cost justification, you can spend a lot of time and money and gain limited value. They work well if you know what you’re doing and stay focused. Regardless of the tools a site selects, it’s paramount to go into the tool selection process with a well-defined methodology and a well-developed cost model.
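To make the “rationalizing data definitions” idea concrete, here is a minimal sketch of the kind of conflict detection an analysis tool performs: finding fields that are declared with different PICTURE clauses in different copybooks. The copybook text, field names, and the simplified regex are hypothetical, and real tools work from a full parse, cross-program tracing, and a metadata repository rather than line-level pattern matching:

```python
import re
from collections import defaultdict

# Simplified pattern for an elementary COBOL item with a PICTURE clause,
# e.g. "05 CUST-ID  PIC 9(8)." (real COBOL syntax is far richer than this).
PIC_RE = re.compile(
    r"^\s*\d{2}\s+([A-Z0-9-]+)\s+PIC(?:TURE)?\s+([A-Z0-9().]+)",
    re.IGNORECASE,
)

def find_conflicting_definitions(copybook_sources):
    """Map each field name to the distinct PICTURE clauses it is declared with,
    keeping only names declared more than one way (rationalization candidates)."""
    pics = defaultdict(set)
    for source in copybook_sources:
        for line in source.splitlines():
            m = PIC_RE.match(line)
            if m:
                name = m.group(1).upper()
                pic = m.group(2).upper().rstrip(".")
                pics[name].add(pic)
    return {name: sorted(p) for name, p in pics.items() if len(p) > 1}

# Hypothetical copybook fragments with an inconsistent CUST-ID definition.
book_a = """
       01 CUSTOMER-REC.
          05 CUST-ID      PIC 9(8).
          05 CUST-NAME    PIC X(30).
"""
book_b = """
       01 CUST-MASTER.
          05 CUST-ID      PIC 9(10).
          05 CUST-NAME    PIC X(30).
"""

# CUST-ID appears as both 9(8) and 9(10): a candidate for standardization.
print(find_conflicting_definitions([book_a, book_b]))
```

Rationalizing such fields down to one standardized definition is exactly the kind of incremental refactoring that improves systems in place without a platform move.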