Aug 16 ’11

Reduce Your Security Risk By Complying With Standards

by Barry Schrager in z/Journal

Your mainframe may be the most secure computing platform in your organization, but did you know it’s also at high risk for security breaches and regulatory non-compliance?

A striking finding of the 2011 Verizon Data Breach Report is that 89 percent of organizations suffering payment card breaches hadn’t been validated as compliant with the Payment Card Industry Data Security Standard (PCI DSS) at the time of the breach. These standards are meant to be used as a benchmark, with compliance leading to an improved security posture.

Even if an installation isn’t legally subject to the PCI DSS standards, is it subject to other standards such as the Health Insurance Portability and Accountability Act (HIPAA), National Institute of Standards and Technology (NIST) guidance, International Organization for Standardization (ISO) standards, or the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs)? All these standards require an installation to review its system and network configuration security and integrity controls to maximize the security of the environment and minimize vulnerabilities. Implementing the appropriate standards for your site—or better yet, implementing the extensive standards of the DISA STIGs—will make your site less vulnerable to a breach and to the financial and public image damages associated with one.

Your IT management team must perform due diligence and apply best efforts to protect your data and IT infrastructure from breaches. IT is also responsible for making other senior management aware of the threats facing them if left unaddressed. Yet, this issue is often misunderstood and the requirement is ignored. For instance, the Verizon report shows that 44 percent of respondents indicated they maintained a policy that addresses information security. But only 16 percent of those breached in 2010 had such a policy. Clearly, installations that take security standards seriously and do their best to implement them are much less likely to suffer a breach.

Minimize or Eliminate Vulnerabilities

The purpose of these standards is to minimize or eliminate the vulnerabilities in z/OS systems. A vulnerability can be introduced by poor hardware configuration, poor system configuration parameters, poor security system controls, or a lack of system integrity caused by poor design and coding standards in either z/OS itself, Independent Software Vendor (ISV) products, or homegrown code.

For example, an easily identifiable hardware vulnerability might be the sharing of DASD storage between the production environment(s) and the test/development environment(s). While this practice was the norm many years ago, when disk storage devices were connected with physical cables and the test/development environment(s) were on separate computer systems and used as a backup system, this is no longer necessary today. Devices are now mapped with the Input/Output Configuration Data Set (IOCDS) and the Input/Output Definition File (IODF), which can be altered at Initial Program Load (IPL) time or with a console operator command. Normally, test and development environments have significantly less stringent security standards than production environments. Why put the production data and the system and application libraries at risk in these less secure environments?

Failure to fully protect all the Authorized Program Facility (APF) libraries on the system is an example of a system configuration/security system controls vulnerability. These APF libraries are defined in the system parameter library, SYS1.PARMLIB. Authorized programs within them are marked with an “AC(1)” linkage editor attribute. For example, a program marked AC(1) residing in an APF-authorized library can issue the MODESET KEY=ZERO Assembly language macro and obtain control in PSW Key Zero.
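For illustration, APF libraries are commonly defined in a PROGxx member of SYS1.PARMLIB. The following is a minimal sketch; the data set and volume names are hypothetical, and actual member contents vary by installation:

```
/* PROGxx member of SYS1.PARMLIB -- names are illustrative      */
APF FORMAT(DYNAMIC)                            /* allow dynamic updates */
APF ADD DSNAME(SYS1.LINKLIB)    VOLUME(******) /* system library        */
APF ADD DSNAME(VENDOR.AUTHLIB)  SMS            /* ISV product           */
APF ADD DSNAME(OURSITE.AUTHLIB) VOLUME(PROD01) /* homegrown code        */
```

Every data set named here deserves scrutiny: each one can hold AC(1) programs that run authorized.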

PSW Key Zero lets a program modify any storage in the system. This might legitimately be used to modify control block parameters to adjust the way the system or a vendor-supplied subsystem operates. It might also be illegitimately used to modify the user’s security credentials to make the user appear as a different and more privileged user or entity. In this way, the program could access and modify any data without any indication or journaling. Even the most diligent installation security staff won’t notice an issue or be able to reconstruct what happened to cause a breach to occur under these circumstances.
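The escalation path just described can be sketched in Assembly language. This fragment is illustrative only; the sequence succeeds only for a program link-edited with AC(1) and loaded from an APF-authorized library:

```
* Illustrative fragment -- requires AC(1) and an APF library
         MODESET KEY=ZERO,MODE=SUP    Enter PSW key 0, supervisor state
* Any storage in the system, including the control blocks that
* hold the user's security credentials, can now be modified
* without any security-system logging.
         MODESET KEY=NZERO,MODE=PROB  Return to problem state, user key
```

This is exactly why every module in every APF library must be trusted: once key zero is obtained, the security products are no longer in control.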

There may be hundreds of APF authorized libraries on any system. What controls are there on each of them? Who can update them? When a new one is added, is it reviewed to ensure no additional AC(1) modules were placed in it? Who reviews the security system logs to ensure that no illegitimate updates were allowed to occur in the libraries? What security system controls are in place over the operator command that dynamically adds these libraries?
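As a concrete starting point for these questions, the dynamic APF list can be displayed and altered from the console, and the command itself can be placed under security system control. The commands below are a sketch with illustrative group and library names, shown here with RACF, though ACF2 and Top Secret offer equivalents:

```
D PROG,APF                                 Display the current APF list
SETPROG APF,ADD,DSNAME=NEW.AUTHLIB,SMS     Dynamically add a library

RDEFINE  OPERCMDS MVS.SETPROG UACC(NONE)
PERMIT   MVS.SETPROG CLASS(OPERCMDS) ID(SYSPROG) ACCESS(UPDATE)
SETROPTS RACLIST(OPERCMDS) REFRESH
```

Protecting MVS.SETPROG in the OPERCMDS class ensures that only authorized personnel can change the APF list on a running system, and attempts can be logged for review.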

A vulnerability introduced by a poorly designed or coded authorized interface in z/OS, a vendor product, or a homegrown program or service can be exploited to gain the same level of control as an authorized program. That means the exploitation can then be used to alter the identity of the current user and gain access to and possibly modify sensitive or critical data without event logging.

Statement of Integrity

The SHARE Security Project was formed in 1972 to develop security requirements for future IBM operating systems. Project participants quickly realized there was no possibility of data security if the formal interfaces of the operating system could be bypassed due to a lack of system integrity. IBM acknowledged this problem and responded to it in 1973 with its Statement of Integrity for OS/VS2 and the subsequent operating systems, MVS and z/OS.

When IBM introduced MVS, it knew some vendor and homegrown programs would require the privilege to obtain control in an authorized state and use special authorized interfaces to the operating system; it provided the APF library functionality to address that. However, IBM clearly places the burden of responsibility on the installation for these programs: “To ensure that system integrity is effective and to avoid compromising any of the integrity controls provided in the system, the installation must assume responsibility … that its own modifications and additions to the system do not introduce any integrity exposures.” (This statement appears in the z/OS V1R12 MVS Authorized Assembler Services Guide, page 423.) Note that this applies to both homegrown and ISV programs and products.

Unfortunately, poor design and coding standards in the operating system, vendor products, and homegrown authorized routines and interfaces can also introduce vulnerabilities. In the PC world, we’re used to virus checkers that look for routines, either executing or residing on disk, that leverage known vulnerabilities in the Windows operating system (what PC vulnerability specialists call “root causes”). z/OS installations are fortunate not to need virus checkers because IBM commits to remediate all reported issues, but vulnerabilities attributable to undiscovered design and coding errors in IBM, ISV, and homegrown programs and service interfaces still exist. The integrity vulnerability issue can’t be ignored: in the recent past, more than 100 system integrity vulnerabilities have been located in z/OS and ISV products, according to Ray Overby of Key Resources, Inc.

Installations must demand their ISVs adhere to IBM’s z/OS Statement of Integrity and commit to always take action to remediate a system integrity issue when it’s reported. More important, they must include issues of integrity design in their software development and code review process. Installations should also engage professional vulnerability experts to review their systems and accompanying software to ensure they aren’t inadvertently introducing vulnerabilities that can be exploited to breach their security.

The vulnerabilities created by poor configuration, weak security system controls, and lack of system integrity are most easily exploited by insiders. And before you assume insiders are a totally trusted group of individuals, it’s important to recognize that the percentage of breaches caused by insiders is rising. In the 2008 Strategic Counsel survey sponsored by CA Technologies, the percentage of internal breaches rose from 15 percent in 2003 to 44 percent in 2008. A 2010 PacketMotion survey of U.S. government agencies revealed that employees still pose the biggest security threat.

The White House, as reported in USA Today on January 6, 2011, asked agencies to review their data security as it relates to insiders: “In an attempt to tighten control of classified information, the Obama administration has issued governmentwide guidelines urging officials to be wary of ‘insider threats’ and suggesting how supervisors can evaluate employee ‘trustworthiness.’ ”

A January 3, 2011, Office of Management and Budget memorandum asks: “What steps has your agency taken to implement the latest version of the NIST SP-800 series guidance on Information Assurance, Risk Management, and Continuous Monitoring?” According to Gartner (see their Research Note G00172909), the IBM z/OS mainframe continues to be an important platform for many enterprises, hosting about 90 percent of their strategic applications. Enterprises may not take the same steps to address configuration errors and poor identity and entitlements administration on the mainframe as they do on other operating systems. So, the incidence of high-risk vulnerabilities is astonishingly high, and enterprises often lack formal programs to identify and remediate these.

The 2010 CyberSecurity Watch Survey sponsored by CSO Magazine indicates that “cybercrimes committed by insiders are more costly and damaging than attacks from the outside.” A more troubling comment in the same survey came from Deloitte & Touche LLP: “We believe that most cybercrimes go unreported, not because they are handled internally, but rather because they are never detected in the first place. This is a proverbial ‘tip-of-the-iceberg’ situation, and the implications are significant.”

z/OS mainframes have always been assumed to be secure because of IBM’s Statement of Integrity and the capabilities of three software security products—IBM’s Resource Access Control Facility (RACF) and CA Technologies’ ACF2 and Top Secret—that control access to the data sets and resources contained on those systems. However, the system configuration and security system controls and parameters must be validated to ensure maximum protection. The installations most seriously breached are those that fail to put information security high enough on their priority list to assure compliance with standards intended to avert disasters.

The U.S. Government has invested a great deal of money in developing the DISA STIGs for a variety of platforms, including mainframes, and these can be used in addition to vulnerability scans and penetration tests to ensure maximum protection for these systems. Vulnerability scans and penetration tests themselves are required under several of the standards discussed above.

It’s been easy to believe these requirements apply only to the non-mainframe environment because mainframe z/OS systems were assumed to be inherently secure. But this isn’t the case, and to be compliant, installations must also add these tests to their security assurance project list.


Installations can’t ignore their responsibility to information security on z/OS mainframes. It’s imperative for the security of their organization’s data and the protection of their customers’ private and confidential information. z/OS mainframes are the most securable systems available, but only with due diligence will the security on these systems be strengthened to the highest level possible and the risk of disaster minimized.