DB2 & IMS

Over the past several years, autonomic statistics has been a much-discussed feature of DB2 10 and 11. The vast majority of customers we talk with are very excited about this feature because it answers the often-asked question: Why can’t DB2 be smart enough to automatically execute RUNSTATS when required? However, after following up with some of these customers, we were surprised to learn that once the excitement died down, none had implemented autonomic statistics. Most of the feedback pointed to the implementation being confusing and cumbersome…
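
Autonomic statistics essentially automates a check many DBAs still script by hand against the real-time statistics tables. A rough sketch of such a manual check, assuming real-time statistics are being externalized to SYSIBM.SYSTABLESPACESTATS (the 20 percent change threshold is only an illustrative rule of thumb):

    SELECT DBNAME, NAME, PARTITION, TOTALROWS, STATSLASTTIME,
           STATSINSERTS + STATSUPDATES + STATSDELETES AS CHANGED_ROWS
      FROM SYSIBM.SYSTABLESPACESTATS
     WHERE STATSLASTTIME IS NULL                  -- no RUNSTATS recorded
        OR (TOTALROWS > 0
            AND STATSINSERTS + STATSUPDATES + STATSDELETES
                > TOTALROWS * 0.2)                -- roughly 20% of rows changed
     ORDER BY CHANGED_ROWS DESC;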

Read Full Article →

DB2 is an extremely efficient and scalable relational database, capable of processing thousands of transactions per second through a single DB2 subsystem. By taking advantage of data sharing groups, DB2 can scale, almost linearly, across multiple logical partitions (LPARs), enabling applications to exploit the parallelism inherent in a z/OS sysplex. This allows DB2 applications, such as those designed to run in CICS MRO or a WebSphere Application Server horizontal cluster, to realize a significant increase in throughput due to their ability to load balance across multiple LPARs. But what about batch applications? Is it possible for them to exploit the parallelism in a z/OS sysplex and see an increase in throughput as well?…
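
One common way to get there, though not necessarily the one the article explores, is to clone a batch job across data sharing members and have each instance process a disjoint key range. A sketch of the cursor such a clone might open (table, columns and host variables are hypothetical):

    -- Each job instance runs on a different member/LPAR and is passed
    -- its own :LOW_KEY/:HIGH_KEY slice of the clustering key
    DECLARE ACCT_CSR CURSOR FOR
      SELECT ACCT_ID, BALANCE
        FROM PROD.ACCOUNT                    -- hypothetical table
       WHERE ACCT_ID BETWEEN :LOW_KEY AND :HIGH_KEY
       ORDER BY ACCT_ID;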

Read Full Article →

Concurrency is a critical performance aspect of any database application. While there are many options available to improve it, one of the more recent features that has quietly made its way into DB2 for LUW and z/OS is concurrent access resolution. This feature can dramatically improve the concurrency and performance of some applications, but it does come with a price. A careful understanding of how concurrent access resolution affects query result sets, how it may impact database size and performance, and when it does and doesn’t work is critical when using this feature. This is especially true since you may not realize you could already be using it! In addition, it’s important to understand the other options that can be used to improve the concurrency of your applications and database, and how concurrent access resolution fits into the picture. However, before we dive into the details of concurrent access resolution, let’s first get an understanding of some of the choices available for application concurrency in DB2…
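
To make the later discussion concrete: on DB2 10 for z/OS the behavior is typically requested at bind time through the CONCURRENTACCESSRESOLUTION option (on DB2 for LUW, the CUR_COMMIT database configuration parameter plays the corresponding role). A minimal sketch of a BIND PACKAGE subcommand, with hypothetical collection and package names:

    BIND PACKAGE(BATCHCOL) MEMBER(ORDRPT01)  -
         ISOLATION(CS)                       -
         CONCURRENTACCESSRESOLUTION(USECURRENTLYCOMMITTED)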

Read Full Article →

Today’s Fortune 500 companies are embracing distributed techniques to access mainframe data on DB2. Although online transaction processing (OLTP) workloads still dominate DB2’s local processing, more applications, such as data warehousing analytics, run remotely as distributed tasks. The interesting part of distributed DB2 is the terminology; it has its own set. If you spend time dealing only with the local side, some of these terms may be new, maybe even confusing in some rare cases. Here’s a distributed terminology refresher using a few subsystem parameters that affect distributed data facility (DDF) processing: MAXDBAT and CONDBAT on the DSN6SYSP macro and CMTSTAT on the DSN6FAC macro. …
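
For orientation, those parameters are assembled into the subsystem parameter module by the DSNTIJUZ installation job. A simplified sketch of the relevant keywords (values are illustrative, and the real macro invocations carry full assembler continuation formatting):

    DSN6SYSP CONDBAT=10000,       upper limit on inbound DDF connections
             MAXDBAT=200          upper limit on concurrently active DBATs
    DSN6FAC  CMTSTAT=INACTIVE     DBATs can go inactive and be pooled at commit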

Read Full Article →

Current use of the term Big Data is a result of the confluence of several industry trends over the last decade. These trends include the accumulation of large amounts of data in databases, the advent of large-scale enterprise data warehouses, multiple means of high-speed data transfer and an explosion in the design and use of sensors for gathering data…

Read Full Article →

Version 13 of the IBM Information Management System (IMS) makes sending asynchronous callout messages to WebSphere MQ simpler and more flexible than ever before. You no longer need to code IMS exit routines to define MQ headers or restart IMS after changing either the MQ header values or message routing information. You can now use Open Transaction Manager Access (OTMA) destination descriptors and IMS commands to dynamically define and route messages in IMS that are destined for WebSphere MQ…
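
As a flavor of what that looks like (the descriptor, XCF member and TPIPE names here are hypothetical, and the exact keywords should be verified against the IMS 13 documentation), an MQ-bound destination descriptor in the DFSYDTx PROCLIB member, and its dynamic equivalent via a type-2 command, might be sketched as:

    D MQDEST1 TYPE=MQSERIES TMEMBER=MQ7GROUP TPIPE=MQTPIPE1

    CREATE OTMADESC NAME(MQDEST1) TYPE(MQSERIES) TMEMBER(MQ7GROUP) TPIPE(MQTPIPE1)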

Read Full Article →

Many people in the computer field are surprised when they learn that tape was available before disk. Think about your favorite old TV shows or movies; when the director wanted to show data being processed, you would see flickering lights on part of the CPU or console area. Often, you would see the old-fashioned tape reel being written to or read. Tape was one of the earliest means of storing and conveying large amounts of data and information. IBM introduced the 726 tape unit in 1952, while the first IBM disk, the 350, was introduced in 1956…

Read Full Article →

Analytics is a hot topic. The amount of data stored in our operational systems is increasing daily, and management is realizing this information can and should be harnessed quickly for the business to make timely decisions on sales directions, talent acquisition, cost containment and more. A big challenge is to formulate answers to these business questions that use the most current information, are inexpensive and easy to create, and arrive quickly. Often, great expense is incurred in moving data, creating data warehouses and using specialized software to produce various reports…

Read Full Article →

Over the last 10 years, the velocity of change in applications and databases has increased while IT staff has become leaner. This challenges database administrators (DBAs) to keep databases optimized and in step with the applications they support. Fortunately, there are tools that provide greater visibility into end-to-end application performance and its impact on DB2, as well as new automation that can simplify some of the workloads DBAs face on a daily basis…

Read Full Article →

SQL was first developed in the ’70s as a method to communicate with relational databases. It has since evolved to support many more data retrieval options, data types and other complexities. As a result, improperly coded and untuned SQL can adversely impact DB2 performance and data center Service Level Agreements (SLAs). There are several reasons for this…
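
One classic example of the kind of coding choice involved (table and column names are hypothetical): wrapping an indexed column in a scalar function can make the predicate non-indexable, while the equivalent range predicate lets DB2 use the index.

    -- Harder to optimize: the function on ORDER_DATE hides the column,
    -- so DB2 cannot match the predicate to an index on ORDER_DATE
    SELECT ORDER_ID, AMOUNT
      FROM APP.ORDERS
     WHERE YEAR(ORDER_DATE) = 2014;

    -- Equivalent, index-friendly range predicate on the same column
    SELECT ORDER_ID, AMOUNT
      FROM APP.ORDERS
     WHERE ORDER_DATE BETWEEN '2014-01-01' AND '2014-12-31';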

Read Full Article →