Moving to a normalized database without a detailed analysis of the application layer leads to a highly inefficient API. Applying the application’s “read everything” philosophy against a fully normalized database proved cumbersome, so a decision was made to denormalize the database. That decision affected the conversion, future applications, and the wisdom of the migration in the first place. Conducting and acting on proper research up front is better, even though it may take longer.
Migrating DB2 CICS to DB2 WebSphere
One client came up with a creative way to convert a CICS-based legacy application, running against DB2 on an older Amdahl machine, to a new WebSphere-based application running on an IBM z800. The biggest challenge was determining how to convert the legacy application from CICS to WebSphere without a “big switch.” What about using DB2 z/OS data sharing in the Parallel Sysplex environment? That environment lets you run DB2 on a variety of different hardware platforms while still sharing data among the DB2 subsystems. It does so by using DB2 data sharing and linking the machines together via coupling facility technology. This way, the same database can be shared by applications running on two different machines.
The database was designed to receive orders from queues that corresponded to partitions key-limited in the same way as the queues. Queries, inserts, and updates would all be involved. The idea was to convert sets of related transactions together from COBOL/CICS on the original subsystem on the older Amdahl to Java/WebSphere on the new z800. Because both machines could join the same data-sharing group and share data via the coupling facility, only one copy of the database was needed. The transactions were first carefully studied, and a plan was developed for which types of transactions would be moved and converted first; a performance testing plan was also employed as part of this creative solution to bring the application forward. Transactions could then be moved from the legacy interface to the new interface in a controlled fashion while performance was monitored, allowing a smooth, controlled transition from old to new without an outage.
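A minimal sketch of the partitioning idea, with hypothetical table, column, and queue names: each partition’s limit key matches the key range of the queue that feeds it, so transactions for a given queue land in a known partition (the syntax shown is table-controlled partitioning; exact DDL depends on the DB2 version).

```sql
-- Hypothetical sketch: an ORDERS table whose partitions are key-limited
-- on the same ranges as the inbound queues (all names and ranges invented).
CREATE TABLE ORDERS
      (ORDER_ID   INTEGER      NOT NULL,
       REGION_CD  CHAR(2)      NOT NULL,
       ORDER_DATA VARCHAR(200) NOT NULL)
  PARTITION BY (REGION_CD)
      (PARTITION 1 ENDING AT ('09'),   -- fed by queue Q1
       PARTITION 2 ENDING AT ('19'),   -- fed by queue Q2
       PARTITION 3 ENDING AT ('29'));  -- fed by queue Q3
```

Aligning queues to partitions this way also limits lock and I/O contention between the legacy and new workloads while both are running.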
DB2 z/OS Migration From Other Databases
What may work well on other platforms may not work well in a z/OS environment. In many one-to-one conversions, performance wasn’t just lower than expected; work was simply not accomplished in the allotted timeframes. When re-centralizing data onto the z/OS platform, or simply migrating an application to it, recall the uniqueness of the DB2 z/OS environment, take advantage of all it has to offer, and account for the differences.
A common mistake people make involves their assumptions about how stored procedures work on other platforms. On database servers such as Sybase, Microsoft SQL Server, and Oracle, you often see applications designed with a stored procedure for every SQL statement. We saw one instance where there were 200 tables and four stored procedures for each table (one for SELECT, one for INSERT, one for DELETE, and one for UPDATE), for a total of 800 stored procedures. This is a black-box I/O module design, and it will perform horribly on the DB2 z/OS platform. This type of design needs to be re-evaluated, and the SQL placed into the application to take full advantage of the SQL language.
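To make the anti-pattern concrete, here is a hypothetical one-statement procedure of the kind described (table and parameter names invented), together with the embedded-SQL alternative that avoids the per-call overhead on z/OS:

```sql
-- Hypothetical anti-pattern: a stored procedure that wraps a single
-- SQL statement, one of hundreds generated mechanically per table.
CREATE PROCEDURE SEL_CUSTOMER
      (IN  P_ID   INTEGER,
       OUT P_NAME VARCHAR(40))
  LANGUAGE SQL
  BEGIN
    SELECT NAME INTO P_NAME
      FROM CUSTOMER
     WHERE ID = P_ID;
  END

-- On DB2 z/OS, each CALL of such a procedure crosses address spaces.
-- The same statement embedded directly in the application avoids that:
--   EXEC SQL SELECT NAME INTO :HV-NAME
--              FROM CUSTOMER WHERE ID = :HV-ID  END-EXEC.
```

Embedding the statement also lets it be statically bound, so the access path is determined once at bind time rather than paid for on every call.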
In the case of stored procedure execution on z/OS, DB2 has to go cross-memory to call the stored procedure in another address space. Subsequently, the stored procedure goes cross-memory to execute SQL statements from what is another allied address space. In addition to these cross-memory calls, the operating system has to manage the stored procedure address spaces, and there are several z/OS system settings (Workload Manager [WLM] policies, number of TCBs, etc.) that can impact this. If you’re using Java, then you also have the overhead of starting Java virtual machines within the address spaces. In addition, Java programs consume about twice the CPU of other stored procedure languages, so that can be a factor, too. This is all exacerbated by the techniques used to write the stored procedures.
Programmers who are used to coding in Sybase or Oracle tend to write highly inefficient DB2 z/OS stored procedures. That’s because Sybase and Oracle procedures run within the database server, so people tend to use procedures extensively as simple extensions to their applications. However, DB2 stored procedures work quite differently. If your DB2 stored procedures contain only single SQL statements, you can expect a significant performance penalty, and if you’re calling stored procedures from other stored procedures, that penalty is amplified. In DB2 for LUW Version 8.2, SQL stored procedures now run within the database server as run-time structures; hopefully the same will soon be true of a future release of DB2 for z/OS.
We’ve also seen situations in which the DBMS being migrated from offers functions that don’t exist on DB2 for z/OS. In these situations, all is not lost. DB2 for z/OS is extremely flexible with its User-Defined Functions (UDFs). We’ve been able to solve all these incompatibilities by coding our own SQL and external UDFs. It takes more programming effort, but that’s easier than an application re-design (assuming that’s even possible).
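As a minimal, hypothetical sketch of the UDF approach: suppose the source DBMS’s SQL calls a two-argument NVL function that the target DB2 level doesn’t provide. A small SQL scalar UDF can supply it so migrated statements run unchanged (support for SQL scalar functions varies by DB2 for z/OS version; an external UDF written in a host language is the alternative):

```sql
-- Hypothetical sketch: a UDF mimicking another DBMS's NVL(a, b),
-- implemented over DB2's own COALESCE. Name and types are illustrative.
CREATE FUNCTION NVL (A VARCHAR(100), B VARCHAR(100))
  RETURNS VARCHAR(100)
  LANGUAGE SQL
  RETURN COALESCE(A, B);
```

The same pattern scales to more involved incompatibilities: anything expressible in SQL becomes a SQL UDF, and anything else becomes an external UDF in COBOL, C, or Java.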
Remember, too, that SQL, data types (e.g., VARCHAR), and other items also work quite differently on DB2 z/OS. The best performance will be achieved by converting or migrating with an understanding of the uniqueness and capabilities of the z/OS platform.
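One classic example of such a platform difference, sketched here with invented column names: on DB2 for z/OS, placing variable-length columns after the fixed-length ones has traditionally reduced row-processing overhead, because the offsets of the fixed-length columns then don’t have to be computed at run time (the benefit depends on the DB2 version and row format).

```sql
-- Hypothetical sketch: fixed-length columns first, VARCHARs last,
-- a traditional DB2 z/OS design tip. All names are illustrative.
CREATE TABLE CUSTOMER
      (CUST_ID    INTEGER     NOT NULL,   -- fixed-length columns first
       REGION_CD  CHAR(2)     NOT NULL,
       CUST_NAME  VARCHAR(40) NOT NULL,   -- variable-length columns last
       CUST_NOTES VARCHAR(254));
```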
With any type of conversion or migration, extensive testing of various designs needs to be performed to ensure the data structures are as optimal as possible for the application, and that the integrity and future use of the database aren’t sacrificed. It’s also important that your conversion methodologies don’t limit your ability to make future database changes for performance reasons. There will be some performance degradation during most legacy conversions, but patching the problems without thought as to how the data will be used in the future can make the migration a waste of time and resources. There are clever solutions to many problems, and you should take the time to explore them.