A recent Gartner Research Note (G00172909) had an interesting comment regarding this:

“The IBM z/OS mainframe continues to be an important platform for many enterprises, hosting about 90% of their mission critical applications. Enterprises may not take the same steps to address configuration errors and poor identity and entitlements administration on the mainframe as they do on other OS’s. Thus, the incidence of high-risk vulnerabilities is astonishingly high, and enterprises often lack formal programs to identify and remediate these.”

It was Alan Harrison of the Royal Bank of Scotland who coined the phrase “securable” years ago to describe the state of System z. Note that Alan’s view exactly matches the Gartner comment.

Phil Young, aka Soldier of Fortran (@mainframed767), has become obsessed with raising awareness of System z and how to accurately assess its security. One of Phil’s themes is that System z is vulnerable because unencrypted network connections allow the theft of User IDs and passwords. Those User IDs and passwords can then be leveraged to obtain the data the original user legitimately had access to, and possibly, by exploiting inadequate operating system and security controls, to gain access to more data. He says this happens because of the reluctance of support personnel to remove anything. For example, when new encrypted ports were defined, the old unencrypted ports were left around “just in case” some process used them “occasionally”. But the usage was never tracked, and the ports were never cleaned up. The most sensitive ports are those used for TSO and FTP access.
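As a rough sketch of how an assessor might check for this, the Python fragment below probes which ports on a host still accept connections. The host name is hypothetical; 23 and 21 are the traditional plaintext TN3270 and FTP ports, with 992 and 990 as their TLS counterparts. This only illustrates the concept; a real assessment would use proper scanning tools with authorization.

```python
import socket

# Hypothetical port map: plaintext TN3270/FTP alongside TLS equivalents.
PORTS = {23: "TN3270 (plaintext)", 21: "FTP (plaintext)",
         992: "TN3270 (TLS)", 990: "FTP (TLS)"}

def open_ports(host, ports, timeout=2.0):
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # refused, filtered, or timed out
    return found

# A host where 23 or 21 answers even though 992 or 990 is configured is
# the "left around just in case" case Phil describes.
# e.g. open_ports("mainframe.example.com", PORTS)
```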

Phil is right on this. I have seen the same pattern with System z datasets and access permissions being left around: support personnel are frightened to change anything that could result in a production outage. I had a discussion with John Busse on this concept. John used to be my Manager of Technical Support at SKK for our ACF2 and Examine/MVS (now CA-Auditor) products. John went with the company to CA Technologies, and then decided to start his own company. He created what is now called the CA-Cleanup product, which collects usage data on dataset access permissions and then removes permissions that have not been used in a specified period of time. John told me he was surprised that the waiting period commonly used by his clients was 15 months. This allowed once-a-year usage to go on unhindered. But in Phil’s examples, the unencrypted ports were left exposed for years, not months.
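The aging concept is simple to sketch. Assuming hypothetical usage records of (User ID, dataset, last-used date), a 15-month waiting period works like this in Python; this illustrates the idea only, not how CA-Cleanup is actually implemented:

```python
from datetime import datetime, timedelta

def stale_permissions(records, as_of, months=15):
    """Return (user, dataset) permissions with no recorded use in the
    last `months`. 15 months is the common client waiting period:
    long enough that a once-a-year job still registers as active."""
    cutoff = as_of - timedelta(days=months * 30)
    return [(user, ds) for user, ds, last_used in records
            if last_used is None or last_used < cutoff]

# Invented example records: (User ID, dataset, last-used timestamp),
# with None meaning the permission was never exercised.
now = datetime(2014, 6, 1)
records = [
    ("USR01", "PROD.PAYROLL.DATA", datetime(2014, 5, 20)),  # active
    ("USR02", "PROD.PAYROLL.DATA", datetime(2012, 1, 3)),   # stale
    ("USR03", "TEST.SANDBOX.DATA", None),                   # never used
]
print(stale_permissions(records, now))
# -> [('USR02', 'PROD.PAYROLL.DATA'), ('USR03', 'TEST.SANDBOX.DATA')]
```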

Now, the usage of access permissions is just one area that has to be addressed; it does not address permissions by category of data. Data must be categorized, and the access permissions must be compared to the people who should have access to that category of data: PCI, PII, PHI, IP, etc. Plug Alert: (Our DataSniff product provides part of that solution on System z platforms by scanning datasets and database tables, and then categorizing them when sensitive data is discovered. Security Administrators can then use their Access Control Product [ACP], CA-ACF2, RACF, or CA-Top Secret, to produce a list of User IDs authorized to access the data, which can then be compared to the list of users who should have access to it.)
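That final comparison step is just a set difference. A minimal Python illustration, with invented User IDs standing in for the ACP report and the business-owner list:

```python
# `authorized` would come from the ACP (CA-ACF2, RACF, or CA-Top Secret)
# report for a dataset categorized as sensitive; `approved` is the list
# of users who should have access. Both lists are hypothetical.
authorized = {"USR01", "USR02", "USR05", "SYSPROG1"}
approved = {"USR01", "USR02"}

excess = authorized - approved   # permissions to investigate or revoke
missing = approved - authorized  # approved users lacking access

print(sorted(excess))   # -> ['SYSPROG1', 'USR05']
print(sorted(missing))  # -> []
```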

This process should also be used for the sensitive system datasets. Examples are the parameter libraries, the system link list libraries, and the authorized libraries. If a rogue insider can modify any of these libraries then that insider will have the authority to bypass the ACP controls to access or modify any dataset or database table that resides on System z.

Another integrity exposure I have seen in this area is when DASD storage is shared between two different systems or LPARs that do not share the same ACP database. This provides the opportunity for differing access permissions which are especially vulnerable when one of the shared systems is a production system and the other is not. These “sandbox” systems usually have loose access controls so the data on the shared storage devices may be vulnerable.

Another vulnerability highlighted by Phil Young is the exposure of the RACF database or one of its backup copies. For example, it is easy to determine the names of the RACF primary and backup databases using the RVARY LIST command; it simply displays them. If these databases are not properly protected, then the stored password hashes are easily obtained. Although it is not easy to reverse a hash directly, a brute-force process can be used to find a password that produces the same hash. If the User ID selected has powerful privileges or access to sensitive data, it could then be used to read or download a copy of any dataset containing sensitive information.
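The brute-force idea can be sketched generically. RACF’s actual password algorithm is DES-based; SHA-256 stands in below purely for illustration, and the User IDs and passwords are invented:

```python
import hashlib

def hash_pw(user_id, password):
    """Stand-in one-way hash (NOT RACF's real DES-based algorithm)."""
    return hashlib.sha256((user_id + ":" + password).encode()).hexdigest()

def brute_force(user_id, stored_hash, candidates):
    """Try each candidate password until one hashes to the stored
    value; this is all an attacker with a copy of the database needs."""
    for pw in candidates:
        if hash_pw(user_id, pw) == stored_hash:
            return pw
    return None

# What an exposed database leaks is the hash, not the password itself:
stored = hash_pw("IBMUSER", "SYS1")
print(brute_force("IBMUSER", stored, ["PASSWORD", "TSO1", "SYS1"]))
# -> SYS1
```

With mainframe passwords historically limited to eight characters from a restricted character set, the candidate space is small enough that exhaustive search is practical on commodity hardware.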

This brute-force attack against an ACP database to recover a password creates the same exposure as the sharing of passwords, as in the Edward Snowden case. Why people would give their accounts and passwords to others is a mystery to me, but it is done often by co-workers, and even by managers to their employees. I raised the issue of multiple people using the same User ID in my 1974 SHARE presentation on Data Security Requirements (1) and said the locations of access could be analyzed to determine whether access was being performed from multiple locations simultaneously, e.g., New York and Chicago. That was in the day when all mainframe terminals were hardwired; in today’s world of multiple windows open on a single computer and the Internet, it is unfortunately more difficult to track down this activity. I have not spent a lot of time looking into this, but it should be possible to analyze the IP addresses being used for system access and highlight ones that may be at risk.
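A minimal sketch of that analysis, assuming hypothetical log records of (User ID, IP address, timestamp), flags any User ID seen from two different addresses within a short window, the modern analogue of being logged on in New York and Chicago at the same time:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def concurrent_ips(events, window=timedelta(minutes=30)):
    """Return {user: set_of_ips} for users whose consecutive logons
    came from different IP addresses within `window` of each other."""
    by_user = defaultdict(list)
    for user, ip, ts in events:
        by_user[user].append((ts, ip))
    flagged = {}
    for user, hits in by_user.items():
        hits.sort()
        for (t1, ip1), (t2, ip2) in zip(hits, hits[1:]):
            if ip1 != ip2 and t2 - t1 <= window:
                flagged.setdefault(user, set()).update({ip1, ip2})
    return flagged

# Invented logon events: (User ID, IP address, timestamp).
events = [
    ("USR01", "10.1.1.5",   datetime(2014, 6, 1, 9, 0)),
    ("USR01", "192.0.2.77", datetime(2014, 6, 1, 9, 10)),  # 10 min later
    ("USR02", "10.1.1.9",   datetime(2014, 6, 1, 9, 0)),
]
for user, ips in concurrent_ips(events).items():
    print(user, sorted(ips))
```

VPNs, NAT, and travel make this a lead generator rather than proof of sharing, but it narrows the list of User IDs worth investigating.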
