It’s not unusual for data migration projects to quickly run into complex compliance considerations that add significantly to the completion time. With only one month to migrate a small European division’s disparate inventory management systems to a newly developed inventory management application, the global manufacturing company’s development manager faced two sizeable hurdles:

• Users throughout the world wouldn’t have access to data during the migration process
• Real user data had to be sent from the European division’s data center to the U.S., raising compliance issues that had to be resolved to avoid severe legal penalties…

Read Full Article →

CICS Explorer was first released as a SupportPac nearly three years ago. Since then, it has continued to evolve, giving users a modern, intuitive, consistent interface to configure, administer, and create CICS applications. The first product release, CICS TS V4.1, covered a subset of CICS system administration topologies, providing access to Business Application Services (BAS) and CICS System Definitions (CSDs) for both standalone and CICSPlex System Manager (SM)-managed regions. During development of CICS Transaction Server (TS) V4.2, CICS Explorer was included in the design of each new function, consolidating its role as “the new face of CICS.” This article discusses the latest innovations in the product, which ensure that CICS Explorer will be increasingly valuable to CICS users…

Read Full Article →

How can you provide reliable, consistent access to your CICS servers? The High Availability (HA) functionality provided in z/OS, CICS Transaction Gateway (CICS TG), and CICS Transaction Server (CICS TS) supports duplication of services. This duplication removes single points of failure in the system while simultaneously projecting a single system to users. The choice of topology for a highly available system depends on the technology available and where redundancy is required. Why create a highly available topology?…
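The duplication idea described above can be sketched in miniature: a pool of replicated servers behind one logical endpoint, with requests failing over to a surviving replica. This is an illustrative Python sketch, not actual CICS TG configuration; the class, method, and server names are hypothetical.

```python
# Illustrative sketch only: names here are hypothetical, not CICS TG APIs.
class HighlyAvailableService:
    """A pool of duplicated servers presented as one logical endpoint."""

    def __init__(self, servers):
        self.servers = list(servers)   # replicated service instances
        self.down = set()              # instances currently unavailable

    def mark_down(self, server):
        """Simulate a failure of one replica."""
        self.down.add(server)

    def call(self, request):
        # Single system image: route to the first surviving replica,
        # so no individual server is a single point of failure.
        for server in self.servers:
            if server not in self.down:
                return f"{server} handled {request}"
        raise RuntimeError("no replica available")

svc = HighlyAvailableService(["CICSA", "CICSB", "CICSC"])
print(svc.call("inquiry"))    # CICSA handled inquiry
svc.mark_down("CICSA")        # one region fails...
print(svc.call("inquiry"))    # ...CICSB handles it; users see no outage
```

Real topologies place this routing logic in components such as sysplex distributors or gateway daemons rather than in the client, but the principle is the same.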

Read Full Article →

An Interview With Milt Whitham

Mainframe Executive visited with Milt Whitham, vice president of Software Engineering for CA Technologies. Milt has spent much of his career in mainframe technology through its many ups and downs. He is intimately aware of the importance of continuously monitoring and reducing costs, as are most mainframe professionals. In this interview, Whitham gives us some insight into ways costs can be controlled.

Mainframe Executive: What do you find is the most significant initiative when talking to mainframe customers?

Milt Whitham: At an event in February 2011, CA Technologies surveyed its mainframe customers and found that 75 percent indicated that lowering costs and doing more with less was their number one priority for 2011. This is clearly the priority of the customers I speak to directly as well.

ME: What are some of the ways that mainframe customers can reduce costs?

Whitham: There are three main ways today, and I’ll talk about each of them.

1) Exploit Specialty Processors: If you have zIIPs, zAAPs, or IFLs, then exploit them. Identify the best-fit workloads for these processors and utilize them to their fullest capacity. Look for vendors that leverage the specialty processors in their products.

At CA Technologies, we've been developing the capabilities to exploit zIIPs broadly across our portfolio. Because the fully loaded cost of using a General Purpose Processor can be high, we started with technologies our clients use extensively, such as CA Datacom, CA IDMS, and CA NetMaster, implementing zIIP exploitation to reduce GPP utilization and help lower the total cost of ownership. And we didn't stop there: we have zIIP/zAAP-enabled more than 20 of our products to date and plan to continue exploiting this capability in more of our products.

We've also embraced Java in two of the products that are key to our strategy of helping customers reduce costs, simplify management and improve operational efficiency—CA Mainframe Software Manager (CA MSM) and CA Mainframe Chorus—enabling them to offload the bulk of their processing to specialty engines. By doing this, we are both delivering new functionality, which streamlines management, and executing those functions on lower cost capacity without incremental MIPS growth.

2) Release Latent Value from Existing Product Investments: Many customers have been running the same solutions and utilizing the same functionality for decades. Yet over the years, vendors have added new features and functionality to their solutions that you might not be aware of and may not be leveraging. For example, there are capabilities available that can increase productivity or help build on existing integration capabilities. Or, in many cases, you can eliminate duplicate functionality in other products or automate manually intensive tasks by utilizing these new features.

An important way CA Technologies helps its customers unlock the latent value of its solutions is by offering a free Mainframe Value Program (MVP) assessment. During this assessment, our subject matter experts help the customer identify potentially unused functionality, verify that their solution is optimally configured, and ensure it is delivering value to their organization. We have already provided this service to more than 520 of our customers.

In addition, CA products make extensive use of the Health Checker for z/OS. We provide more than 200 health checks across 36 products that examine each customer environment to ensure the software is configured properly and running in optimal condition. Like the MVP, health checks are provided to our customers at no additional cost to help maximize the investment they’ve already made in CA solutions.

3) Standardized Operational Stack: Over the years, many sites inherit or acquire duplicate functionality with similar product sets from different vendors. This can be very costly and difficult to maintain. A best practice is to standardize on a single set of solutions and stick to it, ensuring that the standardized operational stack is tightly integrated to further reduce costs.

ME: You mentioned a “standardized operational stack.” Can you elaborate on this?

Whitham: Having duplicate software solutions from multiple vendors can be very costly to maintain and manage in several ways. First, licensing can be costly, particularly without any real leverage for discounting. Second, the costs of training, maintenance, and administration of multiple solutions can be very significant. Third, many of these disparate solutions are not integrated, driving up costs and, more importantly, complexity across your infrastructure through years of extensive manual integration or limited automation.

Vendor consolidation and software rationalization into a standardized operational stack is a way to lower costs, as long as risk is identified and mitigated, and important management functionality is not lost. Many customers are choosing CA Technologies as their primary software vendor because of our expansive portfolio, integrated solutions and experience in the enterprise market, and because of the migration tools and management functionality we put in place to help mitigate risk and provide equal or greater functionality.

ME: Vendor consolidation and software rationalization can be risky. How does CA mitigate these without putting its customers’ businesses at risk?

Whitham: We offer customers our Mainframe Software Rationalization Program which provides a comprehensive nine-step methodology to analyze the customer’s current portfolio, identify potential business and technical risks, migrate from multiple ISV solutions to CA solutions, and guarantee project migration success.

We feature a proven methodology, an experienced staff, and a suite of mainframe software analysis and migration tools:  

Discovery & Analysis Tools: Our discovery and analysis tools provide transparency into our customers’ environments and help identify product usage, determine how widely the products are deployed, and reveal which features and functions are being used.

Once we have the results from the analysis tools, we use this data to make an objective, rather than subjective, assessment of the level of difficulty and risk. In fact, in some cases we will come back to the customer and recommend against migrating, based on the level of risk.

Migration Tools: We offer a suite of migration tools that provide an automated, repeatable process for migrating existing mainframe software products to CA Technologies’ solutions. By using software instead of manual efforts, the risk of keying errors is reduced and the time to value can be accelerated.

Audit Tools: At the end of the project, our software auditing tools give us visibility into the current status of our customers’ environments. We use the results of the auditing software jobs to confirm that the migration work is complete.

ME: Where can customers go to learn more about CA Technologies’ programs?

Whitham: They can visit our Website, where they can find additional information about CA Technologies’ programs and products…

Read Full Article →

Having helped major enterprises across the world for more than 40 years, CICS continues to innovate in the field of transaction processing, delivering new capabilities that address users’ needs. CICS Transaction Server for z/OS V4.2, announced April 5, 2011, with delivery slated for June, will help users compete in the marketplace, comply with standards and regulations, and control their business…

Read Full Article →

Reputation-damaging corporate scandals, such as the Enron case in 2001, have highlighted the need for stronger compliance and regulations for publicly listed companies. Compliance regulations—such as Sarbanes-Oxley, the Health Insurance Portability and Accountability Act (HIPAA), and the Gramm-Leach-Bliley Act—were created to ensure that financial, customer, and patient information is safeguarded and available for long periods of time…

Read Full Article →

IT executives need to make investments that enable their business to gain competitive advantages. However, after years of reducing budgets and streamlining operations, most IT organizations struggle to fund new investments.

Requesting new funding to enable these investments is often either dead on arrival or a process that consumes the window of time available to complete the project and gain the competitive advantage. So IT executives are left seeking ways to reduce costs and reallocate those funds to drive new business growth.

Finding even more IT cost reduction opportunities is not easy.

The hardships of the recent economic downturn forced many IT organizations to employ a number of traditional approaches to cutting costs, such as reducing head count, outsourcing, consolidating data centers, and replacing high-quality software products with cheaper alternatives. Anyone involved with P&Ls over the past three years understands the need for these “whatever it takes” methods of reducing costs, but the reality is that each of these old-style methods adds risk and hidden costs, especially for organizations with complex environments.

Many IT executives now find themselves at the tail-end of these brute-force approaches to cost cutting, spending more and more time managing situations caused by simplistic reduction methods. As a consequence, those managers find it nearly impossible to stop the death spiral and secure the budget necessary for IT to again become a growth driver for the business.

To overcome this “rock or hard place” scenario, savvy IT executives are demanding “win-win” methods to reduce cost and free funds for growth opportunities.

One example is finding opportunities to reduce MIPS usage. This low-risk approach to cutting costs frees up funds and improves responsiveness, requires minimal investment to locate the opportunities, and counteracts the steady rise in MIPS consumption.

According to a January 2011 report titled “IBM System z MIPS Shows Increased Centralization and a Return to Growth,” research firm ITCandor Limited estimates that from 2009 to 2010, IBM’s price per MIPS declined 16 percent while MIPS usage grew by 35 percent.
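A quick back-of-the-envelope calculation shows what those two figures imply together: even with the price per MIPS falling, growing consumption drives net spend up.

```python
# Net effect on MIPS spend, using the ITCandor 2009-2010 figures above.
price_change = -0.16   # price per MIPS declined 16 percent
usage_change = 0.35    # MIPS usage grew 35 percent

net = (1 + price_change) * (1 + usage_change) - 1
print(f"net change in MIPS spend: {net:+.1%}")   # roughly +13.4%
```

In other words, the 16 percent price decline was more than offset by the 35 percent usage growth, leaving spend about 13 percent higher.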

But how does this translate into gaining competitive advantages?

The answer starts with understanding this process:

• Mainframe costs are driven by MIPS consumption.
• MIPS consumption is driven by CPU usage.
• CPU usage is driven by applications.
• Poorly performing applications increase CPU usage.
• Simplistic cost-cutting measures let poorly performing applications slip into your systems unnoticed.
• Poorly performing applications increase MIPS and costs, and divert productive IT resources into defensive fire-fighting.
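The chain above can be made concrete with a toy cost model. Every number here is hypothetical, chosen only to show how a small application-level inefficiency flows through CPU usage and MIPS into the monthly bill.

```python
# Toy model of the cost chain; all figures are hypothetical.
MONTHLY_COST_PER_MIPS = 2500.0   # assumed fully loaded dollar cost per MIPS

def monthly_cost(app_mips):
    """Mainframe cost is driven by the MIPS the applications consume."""
    return sum(app_mips.values()) * MONTHLY_COST_PER_MIPS

apps = {"billing": 120.0, "inventory": 80.0}   # MIPS per application
baseline = monthly_cost(apps)

# A poorly performing change slips in, raising one application's CPU
# usage (and hence its MIPS consumption) by 10 percent.
apps["billing"] *= 1.10
added = monthly_cost(apps) - baseline
print(f"added monthly cost: ${added:,.0f}")    # roughly $30,000
```

A single 10 percent regression in one application is invisible in day-to-day operations yet, under these assumed rates, adds tens of thousands of dollars per month.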

Even if an organization’s legacy systems are classified as static, MIPS consumption slowly accrues over time due to routine changes to environments and databases and especially “fix to run” application changes. These incremental increases often go undetected, but the cumulative effects can drastically increase MIPS expenditures for both hardware and software.

Top-notch shops find MIPS reduction opportunities by measuring application performance and faults to discover where CPU is being used – the old axiom that you can’t improve what you can’t measure has never been truer.

An effective performance measurement process must be efficient. Organizations should be able to control measurement tools like a faucet – turning them on when savings are needed and off when applications are running well. Taking that concept a step further, the best tools will turn themselves on and off as needed.

Typical organizations find some “low-hanging fruit” for MIPS reduction through a quick assessment of their systems and stop there. However, the best-run shops are always on the lookout for resource waste and recognize that even a two-percent improvement adds up when compounded over repeated passes. In either case, a good portion of the cost-reduction opportunities found require only simple application changes, yielding significant results with minimal expenditure of resources.
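To see why even small improvements matter, consider a hypothetical shop that finds a two-percent MIPS reduction in each of twelve monthly measurement passes; the cuts compound:

```python
# Hypothetical: a two-percent MIPS reduction found in each of twelve
# monthly measurement passes.
reduction_per_pass = 0.02
passes = 12

remaining = (1 - reduction_per_pass) ** passes
print(f"cumulative MIPS reduction: {1 - remaining:.1%}")   # about 21.5%
```

A year of modest, repeated tuning cuts consumption by over a fifth, far more than a single one-off assessment typically delivers.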

In the end, the combination of little effort and great savings enables the savvy IT executive to focus more time, money, and resources on developing new services that provide competitive advantages for the business… and career advantages for the executive!

Read Full Article →

When IBM released CICS TS 2.2 in December 2002, which introduced Task-Related User Exits (TRUE) in the Open Transaction Environment (OTE) architecture, a primary selling point was potentially significant CPU savings for CICS/DB2 applications defined as threadsafe. To be threadsafe, a program must be Language Environment- (LE-) conforming, and knowledgeable CICS programmers must ensure the application logic adheres to threadsafe coding standards. (For more information, see “DB2 and CICS Are Moving On: Avoiding Potholes on the Yellow Brick Road to an LE Migration,” z/Journal, April/May 2007.) This may require Assembler knowledge to follow the many tentacles of application logic and verify that the application and its related programs are threadsafe. If you define a program as threadsafe but the application logic isn’t, unpredictable results could occur, compromising your data integrity. This article provides some background on what threadsafe means at the program level, how to identify and correct non-threadsafe coding, and how to ensure your programs are maximizing their potential CPU savings…
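The data-integrity hazard of non-threadsafe logic is general, not CICS-specific. A minimal Python sketch of the same failure mode, assuming nothing about CICS internals: a shared counter updated by concurrent tasks loses updates unless access is serialized, just as a CICS program keeping task data in shared storage can corrupt it once tasks run concurrently on open TCBs.

```python
import threading

# Illustrative Python, not CICS code: a shared counter stands in for data
# kept in shared storage by an application program.
shared_total = 0
lock = threading.Lock()

def unsafe_add(n):
    """Non-threadsafe: the read-modify-write on shared_total can interleave
    across concurrent tasks, silently losing updates."""
    global shared_total
    for _ in range(n):
        shared_total += 1

def safe_add(n):
    """Threadsafe: access to the shared resource is serialized."""
    global shared_total
    for _ in range(n):
        with lock:
            shared_total += 1

tasks = [threading.Thread(target=safe_add, args=(10_000,)) for _ in range(4)]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
print(shared_total)   # 40000 with the lock; unsafe_add could lose updates
```

In CICS terms the remedy is analogous: eliminate or serialize access to shared resources before defining the program with CONCURRENCY(THREADSAFE), otherwise the lost-update behavior above becomes a data-integrity exposure.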

Read Full Article →