May 14 ’13

Warning: Mainframe Data Leakage Poses Significant Risk

by Editor in Enterprise Executive


An Interview With Rich Guski
 

Enterprise Executive sat down with Rich Guski, who recently retired from IBM, to get his insight into current and future security trends. Rich was a key participant in RACF security development and the architect of several CICS security functions that shipped with RACF for z/OS 1.10 during his 27-year tenure with IBM. He’s also a Certified Information Systems Security Professional (CISSP), as defined by the International Information Systems Security Certification Consortium, (ISC)². 
 
Enterprise Executive:
Rich, what are you doing now that you’ve retired from IBM?

Rich Guski: I’m currently doing mainframe computer security consulting work.

EE: Do you continue to stay apprised of current security trends that would benefit mainframe professionals?

Guski: Yes. I still attend and speak occasionally at RACF User Group (RUG) meetings and Vanguard conferences.

EE: What current or future trends do you see in the realm of data security that affect the z/OS environment?

Guski: If you’re the manager of an IT organization, one of your responsibilities, as the custodian of your organization’s data, is to comply with requirements for the security and handling of sensitive data. For many years, the simplest way to demonstrate compliance was to use a well-known mainframe Access Control Product (ACP), such as IBM’s RACF, CA’s ACF2 or CA’s Top Secret, and use the associated ACP tools to generate reports to prove to auditors that you’re protecting the sensitive data. But lately, newly emerging standards for security of sensitive data are complicating this picture.

EE: Can you give us an example of such an emerging standard and what it means for the IT executive? 

Guski: Yes. Certain sets of security requirements, the Payment Card Industry Data Security Standard (PCI DSS), for example, have evolved their own rules for the security and handling of the sensitive data, such as credit card numbers, used by their industry. The PCI Security Standards Council is responsible for managing PCI DSS. What makes PCI DSS unique is that, unlike many other regulations, it comes from private industry rather than the government.

EE: Are there other standards and regulations besides PCI DSS that IT executives should be concerned about?

Guski: Yes, there are other compliance frameworks that, while not exactly the same as PCI requirements, nevertheless result in managerial action items similar to those driven by PCI DSS. However, for the sake of brevity, allow me to focus on PCI DSS for now, but be mindful that my conclusions will apply to other sets of sensitive information that a typical IT executive might be responsible for. Look at it this way: PCI DSS could be viewed as a “Standard of Due Care” in case a data breach ever goes to litigation.

EE: Most mainframe shops use mainframe security products such as RACF, CA-ACF2 or CA Top Secret. Don’t these provide all the security required for mainframe data?

Guski: No. What I’m saying is that these new standards and regulations such as PCI DSS are effectively raising the bar of mainframe security beyond the current reach of these products as they’re used today.

EE: Can you explain this rather strong statement?

Guski: Sure. Most mainframe ACPs are configured to use Discretionary Access Control (DAC), an access architecture in which security administrators or data owners decide how the data should be protected. Users who must access the data as part of their job function are granted at least READ authority to it. Any user who can read the data, in effect, becomes a “custodian” of the data with direct control over its disposition. As an example of how a custodian of data can change its security and disposition, consider the following: A user who’s authorized to READ certain data can make a copy of it, giving the copy a different name with different access control rules. That user can then grant READ authority to other users without regard to the data content. Responsible managers may know where production confidential and sensitive information is located, but they don’t know when that information is copied to unknown data repositories.

EE: You mean to say that “unknown” sensitive data may have proliferated inside the mainframe environment in such a way that IT organizations don’t know exactly where it is and how it’s protected? Wow! Could you explain how this might happen and provide some examples?

Guski: Yes, of course. Consider the following common scenarios:

• Your production support team is under pressure to fix a program abnormal termination (abend) and get production back on schedule. To test a required fix, a team member copies production data to his or her own user-ID prefixed data sets. To expedite problem resolution, no time is spent sanitizing confidential and sensitive information that may exist within the data. After the problem is resolved, for various valid reasons, the copied data isn’t deleted from the system.
• Another scenario occurs when a system user uploads confidential and sensitive information from a distributed platform to the mainframe into a repository that may be protected differently and is unknown to the manager who’s the responsible custodian of the data.
• A user is assigned to produce a report for executives and must query a database containing sensitive information. He stores the query results in data sets under his user-ID prefix and then produces the report. He never deletes the data sets containing the query results, which include sensitive information.

In each example, the copied data is inappropriately protected and its logging attributes may be incorrectly configured. The scenario continues downhill when the data isn’t promptly deleted, which is often the case. Additionally, co-workers often routinely have access to this data since it’s stored in user-ID prefixed data sets, compounding the problem. These and similar scenarios are referred to as “data leakage,” a risk IT auditors are now beginning to recognize. While security experts focus on cyber attacks, is anyone paying attention to the threat of insiders downloading improperly secured leaked data?  

EE: How about companies that have outsourced the management of their mainframes and sensitive data to third-party service providers?

Guski: This is an interesting question. Managers who are responsible for the security and disposition of sensitive data sometimes assume that since the processing of the data has been moved outside their organization, they’re no longer responsible for its security and disposition. This isn’t so. Again, using PCI as an example, the organization is still responsible for ensuring the service provider performs certain control functions that maintain compliance with the PCI requirements for proper handling and security of data. The output of these control procedures must be presented to the requesting organization and added to its records for later review by its auditors.

A secondary problem that often shows up when an IT organization turns PCI cardholder data over to a service provider occurs when copies of the cardholder data, either complete or partial, are mistakenly left on the organization’s mainframe. PCI requirements state that the organization must be able to show that no such data exists outside of “known data repositories.” These scenarios show how sensitive data can leak outside the scope of a known data repository and become a security and audit risk to the organization.
 
EE: OK, I can see how data leakage can occur especially over a long period of time and with changes of personnel. But how does data leakage add risk to an IT organization’s bottom line?

Guski: Risk assessments are fundamental requirements found in almost all regulations and standards, and they have long been a tool for mainframe security auditors. They’re important in determining how data should be protected, whether it’s stored, transmitted or archived. Since data leakage has only recently begun to be recognized as a threat, it’s only now beginning to be included in mainframe risk assessments by auditors. Ignoring data leakage in a mainframe risk assessment leaves an obvious loophole. If a mainframe risk assessment hasn’t been conducted at all, it’s highly likely that little thought has been given to the mainframe data leakage problem.

Furthermore, if this risk wasn’t identified and included in a mainframe risk assessment, management isn’t positioned to make an intelligent decision regarding potential risk to the organization, such as “accept the risk and associated consequences if a breach does occur,” or “demonstrate due diligence by initiating a data discovery project to scan all data repositories for unknown cardholder data.” Identifying and documenting mainframe data leakage in a risk assessment also removes the “plausible deniability” factor.

To further expand on this point with PCI as the example, let’s consider an instance where all known cardholder data has been identified and is included in the scope of the Cardholder Data Environment (CDE), and any unknown cardholder data is considered to be outside the scope of the CDE.

The following excerpts are from the PCI DSS 2.0:

Scope of Assessment for Compliance with PCI DSS Requirements

The first step of a PCI DSS assessment is to accurately determine the scope of the review. At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope. To confirm the accuracy and appropriateness of PCI DSS scope, perform the following:

• The assessed entity identifies and documents the existence of all cardholder data in their environment to verify that no cardholder data exists outside of the currently defined cardholder data environment (CDE).
• Once all locations of cardholder data are identified and documented, the entity uses the results to verify that PCI DSS scope is appropriate (for example, the results may be a diagram or an inventory of cardholder data locations).
• The entity considers any cardholder data found to be in scope of the PCI DSS assessment and part of the CDE unless such data is deleted or migrated/consolidated into the currently defined CDE.
• The entity retains documentation that shows how PCI DSS scope was confirmed and the results retained, for assessor review and/or for reference during the next annual PCI DSS scope confirmation activity.

To be a bit more specific, there are several PCI requirements that will be identified as “Not in Place” when “undiscovered” cardholder data exists outside the defined CDE on a mainframe. I will cite two of these requirements as examples, along with the risk associated with not knowing if and where all such data exists:

PCI Requirement 3.1.1.d: Verify that policies and procedures include at least one of the following: A programmatic process (automatic or manual) to remove, at least quarterly, stored cardholder data that exceeds requirements defined in the data retention policy. The risk associated with data leakage is that unknown cardholder data that leaks out of the confines of the known environment will be non-compliant with the PCI organization’s data retention policy.
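The “programmatic process” this requirement calls for can be sketched in Python. This is a hedged illustration only: the repository path and the 90-day retention window are assumptions for the example, and a real mainframe process would operate on data sets and catalogs rather than a POSIX directory tree.

```python
import os
import time

RETENTION_DAYS = 90               # assumed retention policy (quarterly purge)
ARCHIVE_ROOT = "/data/extracts"   # hypothetical repository of extracted files

def find_expired(root, retention_days):
    """Return paths of files last modified before the retention cutoff."""
    cutoff = time.time() - retention_days * 86400
    expired = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                expired.append(path)
    return expired

# Flag, rather than silently delete, files that exceed retention,
# so the data owner can review them before removal.
for path in find_expired(ARCHIVE_ROOT, RETENTION_DAYS):
    print("exceeds retention policy:", path)
```

Flagging before deletion is a deliberate choice here: the requirement allows a manual process, and review by the data owner avoids destroying data that a retention exception legitimately covers.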

PCI Requirement 9.10.2: Verify that cardholder data on electronic media is rendered unrecoverable via a secure wipe program in accordance with industry-accepted standards for secure deletion, or otherwise physically destroying the media (for example, degaussing). The risk associated with data leakage is that on mainframes, electronic media includes data repositories residing on both DASD and tape. Unknown cardholder data won’t be identified and therefore may not be rendered unrecoverable via a secure wipe program.

Although PCI data has been used repeatedly as examples in this discussion, this same thought process should also be applied to any confidential and sensitive information stored on the mainframe.

EE: OK, I see now how data leakage can translate into a mainframe audit compliance risk. So, are there any commercially supported tools available that can help assess and mitigate the risks associated with data leakage on mainframes?

Guski: Your question uncovers another problem with conducting a risk assessment for data leakage on mainframes. Although data leakage discovery tools are presently in use for distributed platforms, they’re only just beginning to become available for the mainframe.

An example of a comprehensive and commercially supported data leakage discovery and prevention tool that runs on the mainframe is DataSniff from XBridge Systems. This product provides the capability to search for and discover confidential and sensitive data so that appropriate protection can be applied. This protection may include deletion, migration to removable media, encryption or validation of the access controls for this data. This action will significantly reduce the data leakage risk to any organization.
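The core discovery step such tools perform, finding candidate card numbers in text and validating them, can be sketched in Python. This is a simplified illustration of the general technique, not DataSniff’s actual implementation; the sample text and the 13-to-16-digit pattern are assumptions for the example. The Luhn checksum filters out most random digit strings that merely look like card numbers.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used to validate candidate card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Candidate PANs: 13-16 consecutive digits (deliberately simple pattern).
PAN_PATTERN = re.compile(r"\b\d{13,16}\b")

def scan_text(text: str):
    """Return substrings that match the pattern and pass the Luhn check."""
    return [m.group() for m in PAN_PATTERN.finditer(text)
            if luhn_valid(m.group())]

sample = "order 1234, card 4111111111111111, ref 1234567890123456"
print(scan_text(sample))   # only the Luhn-valid candidate is reported
```

A production discovery tool layers much more on top of this, such as reading record-oriented mainframe data sets, handling EBCDIC encodings and packed-decimal fields, and reporting where each hit was found, but the match-then-validate pattern is the same.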

DataSniff can also be used to support projects such as “encrypt all social security numbers.” The first step is to find all files that contain social security numbers, including those files associated with data leakage. 

And after the encryption project is complete, running regular data vulnerability scans is important because social security numbers can creep back into the mainframe environment from external sources.

EE: Rich, in closing, can you summarize and possibly leave us with any additional suggestions for improving security for sensitive data for which we may be responsible?

Guski: Certainly. IT managers are responsible for the security of confidential and sensitive information that’s entrusted to their organization. Mainframe interaction with distributed environments and other factors, such as mergers and acquisitions, have added the new threat of data leakage to the existing responsibilities that IT management must address. The PCI standard, which is typical of recently emerged data security standards, implies that Data Leakage Prevention (DLP) must be addressed to prove compliance. Commercially supported discovery tools, such as DataSniff from XBridge Systems, have only recently become available for mainframes. IT organizations with mainframes should consider such a tool in order to understand and significantly reduce the data leakage risk and to ensure audit compliance.