Apr 10 ’14
System z IS NOT Secure: But It Is Securable
A recent Gartner Research Note (G00172909) contains an interesting comment on this point:
“The IBM z/OS mainframe continues to be an important platform for many enterprises, hosting about 90% of their mission critical applications. Enterprises may not take the same steps to address configuration errors and poor identity and entitlements administration on the mainframe as they do on other OS’s. Thus, the incidence of high-risk vulnerabilities is astonishingly high, and enterprises often lack formal programs to identify and remediate these.”
It was Alan Harrison of the Royal Bank of Scotland who coined the phrase “securable” years ago to describe the state of System z. Note that Alan’s view exactly matches the Gartner comment.
Phil Young, aka Soldier of Fortran (@mainframed767, https://github.com/mainframed/), has become obsessed with raising awareness of System z and of how to accurately assess its security. One of Phil’s themes is that System z is vulnerable because unencrypted network connections allow User IDs and passwords to be stolen. Those credentials can then be used to obtain the data the original user legitimately had access to, and possibly, by exploiting inadequate operating system and security controls, to gain access to even more data. He attributes this to the reluctance of support personnel to remove anything: when new encrypted ports were defined, the old unencrypted ports were left around “just in case” some process used them “occasionally”. But that usage was never tracked and eventually cleaned up. The most sensitive ports are those used for TSO and FTP access.
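The cleanup Phil describes can start with a simple inventory audit. The sketch below assumes a hypothetical inventory format (service names, port numbers, and the `tls` flag are all illustrative, not drawn from any real z/OS configuration) and simply flags TSO and FTP entries that lack encryption:

```python
# Sketch: flag legacy plaintext ports in a (hypothetical) service inventory.
# Service names, port numbers, and the "tls" flag are illustrative only.

PLAINTEXT_RISK_SERVICES = {"tso-tn3270", "ftp"}

def flag_unencrypted(inventory):
    """Return entries offering TSO or FTP access without TLS."""
    return [e for e in inventory
            if e["service"] in PLAINTEXT_RISK_SERVICES and not e["tls"]]

inventory = [
    {"service": "tso-tn3270", "port": 23,  "tls": False},  # legacy, "just in case"
    {"service": "tso-tn3270", "port": 992, "tls": True},
    {"service": "ftp",        "port": 21,  "tls": False},  # legacy, "just in case"
    {"service": "ftp",        "port": 990, "tls": True},
]

for entry in flag_unencrypted(inventory):
    print(f"UNENCRYPTED: {entry['service']} on port {entry['port']}")
```

In practice the flagged entries would then be tracked for actual usage before removal, which is exactly the step Phil says never happens.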
Phil is right on this. I have seen the same pattern with System z datasets and access permissions being left around; support personnel are frightened to change anything that could result in a production outage. I discussed this concept with John Busse, who was my Manager of Technical Support at SKK for our ACF2 and Examine/MVS (now CA-Auditor) products. John went with the company to CA Technologies and then decided to start his own company. He created what is now the CA-Cleanup product, which collects usage data on dataset access permissions and then removes permissions that have not been used within a specified period. John told me he was surprised that the common waiting period among his clients was 15 months; that window allowed once-a-year usage to go on unhindered. But in Phil’s examples, the unencrypted ports were left exposed for years, not months.
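The usage-based cleanup idea is simple to express. This is a minimal sketch, not CA-Cleanup’s actual logic or record format: keep a last-used date per permission and flag those idle longer than a waiting period (the 15-month window mirrors the common client choice mentioned above):

```python
from datetime import date, timedelta

# Sketch of usage-based cleanup: permissions unused for longer than the
# waiting period become removal candidates.  The record layout is
# hypothetical, not CA-Cleanup's actual format.

WAITING_PERIOD = timedelta(days=15 * 30)  # ~15 months, long enough for annual jobs

def stale_permissions(permissions, today):
    """Return permissions whose last recorded use is older than the window."""
    return [p for p in permissions
            if p["last_used"] is None or today - p["last_used"] > WAITING_PERIOD]

perms = [
    {"user": "PAYROLL1", "dataset": "PROD.PAY.**", "last_used": date(2014, 1, 5)},
    {"user": "OLDJOB9",  "dataset": "PROD.GL.**",  "last_used": date(2012, 3, 1)},
    {"user": "TEMP7",    "dataset": "PROD.HR.**",  "last_used": None},  # never used
]

for p in stale_permissions(perms, today=date(2014, 4, 10)):
    print("candidate for removal:", p["user"], p["dataset"])
```

Note the deliberately long window: a permission used once a year survives, while the truly abandoned ones surface for review.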
Now, tracking the usage of access permissions is just one area that has to be addressed; it says nothing about permissions by category of data. Data must be categorized, and the access permissions must be compared against the list of people who should have access to each category: PCI, PII, PHI, IP, etc. Plug Alert: (Our DataSniff product provides part of that solution on System z platforms by scanning datasets and database tables and categorizing them when sensitive data is discovered. Security Administrators can then use their Access Control Product [ACP], whether CA-ACF2, RACF, or CA-Top Secret, to produce a list of User IDs authorized to access the data, which can then be compared to the list of users who should have access.)
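The comparison step itself is a set difference. In this sketch the User IDs and the approved list are invented for illustration; in practice the first list would come from ACP reports and the second from the data owner:

```python
# Sketch: once a dataset is categorized (say, PCI), diff the User IDs the
# ACP actually permits against the list of people who should have access.
# Both lists here are invented for illustration.

def excess_access(permitted, authorized):
    """User IDs with access that the data owner never approved."""
    return sorted(set(permitted) - set(authorized))

permitted_by_acp = ["ALICE", "BOB", "OLDCTR1", "SYSPROG2"]  # from the ACP report
approved_for_pci = ["ALICE", "BOB"]                         # from the data owner

print(excess_access(permitted_by_acp, approved_for_pci))  # → ['OLDCTR1', 'SYSPROG2']
```

The leftover IDs are candidates for investigation, not automatic deletion; some may turn out to be legitimate but undocumented.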
This process should also be used for the sensitive system datasets. Examples are the parameter libraries, the system link list libraries, and the authorized libraries. If a rogue insider can modify any of these libraries, that insider can bypass the ACP controls to access or modify any dataset or database table that resides on System z.
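A first-pass audit of those system libraries can be as simple as checking whether any are writable by the general user population. The sketch below is illustrative: the dataset names, the attribute layout, and the report format are all mine, though the NONE < READ < UPDATE < ALTER ordering follows the usual ACP access-level convention:

```python
# Sketch: flag sensitive system libraries whose universal access level
# would let any user update them.  Names and layout are illustrative.

LEVELS = {"NONE": 0, "READ": 1, "UPDATE": 2, "ALTER": 3}

def writable_by_anyone(libraries):
    """Sensitive libraries whose universal access allows UPDATE or higher."""
    return [lib for lib in libraries if LEVELS[lib["uacc"]] >= LEVELS["UPDATE"]]

libraries = [
    {"name": "SYS1.PARMLIB",  "uacc": "READ"},
    {"name": "SYS1.LINKLIB",  "uacc": "READ"},
    {"name": "USER.APF.LOAD", "uacc": "UPDATE"},  # a rogue insider's way in
]

for lib in writable_by_anyone(libraries):
    print("EXPOSED:", lib["name"])
```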
Another integrity exposure I have seen in this area arises when DASD storage is shared between two systems or LPARs that do not share the same ACP database. This opens the door to differing access permissions, which are especially dangerous when one of the sharing systems is a production system and the other is not. These “sandbox” systems usually have loose access controls, so the data on the shared storage devices may be exposed.
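One way to surface this exposure is to diff what each ACP database grants for the datasets on shared volumes. This is a sketch under invented data; the dataset names and access levels are illustrative:

```python
# Sketch: for datasets on shared DASD, compare the access each ACP database
# grants and flag datasets where the sandbox side is looser than production.

LEVELS = {"NONE": 0, "READ": 1, "UPDATE": 2, "ALTER": 3}

def looser_on_sandbox(prod, sandbox):
    """Shared datasets the sandbox ACP exposes more broadly than production."""
    return [ds for ds in prod
            if ds in sandbox and LEVELS[sandbox[ds]] > LEVELS[prod[ds]]]

prod_uacc    = {"PROD.CUST.DATA": "NONE", "PROD.GL.DATA": "READ"}
sandbox_uacc = {"PROD.CUST.DATA": "READ", "PROD.GL.DATA": "READ"}

print(looser_on_sandbox(prod_uacc, sandbox_uacc))  # → ['PROD.CUST.DATA']
```

Any dataset flagged this way is readable through the sandbox no matter how tightly production locks it down, because the DASD itself is shared.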
Another vulnerability highlighted by Phil Young is the exposure of the RACF database or one of its backup copies. For example, it is easy to determine the names of the RACF primary and backup databases using the RVARY LIST command; it simply displays them. If these databases are not properly protected, the stored passwords are easily obtained. Although it is not easy to reverse-engineer a password from its stored form, a brute-force process can be used to generate a password that results in the same hash. If the User ID selected has powerful privileges or access to sensitive data, it can then be used to read or download a copy of any dataset containing sensitive information.
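The brute-force idea itself is trivial, which is the point. In this sketch SHA-256 stands in for RACF’s actual password encoding (which it is not): once an attacker has an offline copy of stored hashes, there is no lockout or logging, so they can hash guesses at full speed until one matches:

```python
import hashlib

# Sketch of the brute-force idea only.  SHA-256 is a stand-in, not RACF's
# real password encoding; the passwords and wordlist are invented.

def stand_in_hash(password):
    return hashlib.sha256(password.encode()).hexdigest()

def crack(stored_hash, wordlist):
    """Return the first candidate whose hash matches, else None."""
    for guess in wordlist:
        if stand_in_hash(guess) == stored_hash:
            return guess
    return None

stolen = stand_in_hash("WINTER14")  # hash lifted from an unprotected backup copy
print(crack(stolen, ["PASSWORD", "SECRET1", "WINTER14"]))  # → WINTER14
```

This is why protecting the database and its backups matters as much as the strength of the hashing itself.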
This brute-force attack against an ACP database to retrieve a password creates the same exposure as the sharing of passwords, as in the Edward Snowden case. Why people would give their accounts and passwords to others is a mystery to me, but it is done often, by co-workers and even by managers to their employees. I raised the issue of multiple people using the same User ID in my 1974 SHARE Presentation on Data Security Requirements (1) and suggested that access locations could be analyzed to determine whether access was being performed from multiple locations simultaneously, e.g., New York and Chicago. That was in the day when all mainframe terminals were hardwired; in today’s world of multiple windows open on a single computer and the Internet, it is more difficult to track down this activity. I have not spent a lot of time looking into this, but it should be possible to analyze the IP addresses being used for system access and highlight ones that may be at risk.
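The detection idea sketched in that 1974 presentation translates directly to IP addresses. The session records, addresses, and times below are invented (a real analysis would start from the system’s access logs, e.g. SMF records); the logic simply flags User IDs with time-overlapping sessions from different sources:

```python
# Sketch: flag User IDs with overlapping sessions from different source IPs.
# Events, IPs, and the session layout are invented for illustration; times
# are minutes-since-midnight for simplicity.

def concurrent_from_two_places(sessions):
    """User IDs with time-overlapping sessions from different IP addresses."""
    flagged = set()
    for i, a in enumerate(sessions):
        for b in sessions[i + 1:]:
            if (a["user"] == b["user"] and a["ip"] != b["ip"]
                    and a["start"] < b["end"] and b["start"] < a["end"]):
                flagged.add(a["user"])
    return sorted(flagged)

sessions = [
    {"user": "JSMITH", "ip": "10.1.1.5", "start": 900, "end": 1030},  # "New York"
    {"user": "JSMITH", "ip": "10.9.7.2", "start": 945, "end": 1100},  # "Chicago"
    {"user": "ADAVIS", "ip": "10.1.1.8", "start": 900, "end": 1000},
]

print(concurrent_from_two_places(sessions))  # → ['JSMITH']
```

Overlap alone is not proof of sharing, since one person can legitimately hold multiple windows open, but it narrows the list of User IDs worth a closer look.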
System z’s are vulnerable to “insider attacks” when the system configuration or ACP controls are improperly implemented. It is important to realize that “insiders” do not just include employees; the term also covers contractors and hackers who have stolen the logon credentials of trusted insiders, which is how the Pirate Bay hacker gained his initial entry into the System z’s at Logica and Nordea Bank in Scandinavia (2). Once inside the System z perimeter, the system is vulnerable for the reasons listed above, among others.
It is crucial that organizations realize that while their System z’s are vulnerable to “insider attacks,” System z is a highly securable system if properly configured. I have heard from several organizations that their System z’s have been secure for more than 40 years and that they are now waiting for an Audit Report finding to justify further investment. Instead of being proactive, these organizations are focused on being reactive. Think about the harm this attitude exposes, both to the organization and to the individuals whose data it holds.
(1) 1974 SHARE Presentation on Data Security Requirements www.share-sec.com/history.html
(2) Pirate Bay co-founder charged with hacking IBM mainframes, stealing money http://www.pcworld.com/article/2034733/pirate-bay-cofounder-charged-with-hacking-ibm-mainframes-stealing-money.html
About the Author
Barry Schrager, creator of ACF2, started in data security in the early 1970s when he introduced TSO (IBM’s Time Sharing Option) to the faculty, staff, and students at the University of Illinois at Chicago, where he was Assistant Director of the Computer Center. TSO gave its users the full capabilities of the operating system, then MVT, which meant they could allocate and delete datasets, write programs to access them, and so on. The problem was that there was no usable security available on the system; the only protection was password protection, where the console operator would be prompted for the password for batch jobs and the TSO user for TSO sessions. There was also no way to identify and validate users gaining access to the system from batch jobs. Students began modifying and deleting datasets just for fun. If you think about it, they would be called “hackers” today.
So, Barry wrote a user-validation system called Resident Account to validate all users accessing the system, and had Eberhard Klemens, who was working for him at the time, develop intercepts to control access to datasets based upon the first character of the second-level index. For example, because of the $, a dataset named userid.$xyz.data would be accessible only to the user himself, while a dataset named userid.#xyz.data would be readable by everyone. This allowed faculty and students doing research to protect their data and gave professors the ability to share data in read-only mode with their classes.
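The naming convention above can be sketched as a small access-decision function. The function name and the owner-only default for names with neither marker are my assumptions, not details from the original Resident Account intercepts:

```python
# Sketch of the $-private / #-public-read naming convention described above.
# The owner-only fallback for unmarked names is an assumption.

def allowed(requester, dataset, mode):
    """Decide access from the first character of the second-level index."""
    owner, second, *_ = dataset.split(".")
    if second.startswith("$"):        # private: owner only
        return requester == owner
    if second.startswith("#"):        # public: anyone may READ, owner does anything
        return requester == owner or mode == "READ"
    return requester == owner         # default (assumption): owner only

print(allowed("STUDENT", "PROF1.$THESIS.DATA", "READ"))   # → False
print(allowed("STUDENT", "PROF1.#NOTES.DATA", "READ"))    # → True
print(allowed("STUDENT", "PROF1.#NOTES.DATA", "UPDATE"))  # → False
```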
Because of this work, in 1972 Barry was asked to form the Security Project within the SHARE organization and, in 1974, the Project submitted its requirements to IBM which included “protection by default” and “algorithmic grouping of datasets and users” (think ACF2 pattern masking and RACF generic profiles). IBM responded in 1976 with RACF, which did not meet the requirements, and told Barry they were not achievable, so Barry worked on developing a system that would meet the requirements. This was done as a prototype at the University of Illinois and then the London Life Insurance Company of London Ontario Canada supported the development of the commercial product, ACF2.
ACF2 was the first commercially successful security system and was, and in many cases still is, used by the Executive Office of the President of the United States, the Senate, the CIA, NSA, MI5, the Federal Reserve System, the FDIC, General Motors, Chrysler, Procter & Gamble, the entire Australian Government, and many other significant organizations. When SKK was acquired by UCCEL at the end of 1986, ACF2 held a 60% market share against both IBM’s RACF and CA’s Top Secret.
Barry continues to be involved in mainframe data security as President of Xbridge Systems, Inc. (www.xbridgesystems.com), the developer of the mainframe data discovery product, DataSniff, and is honored to be a member of Enterprise System Media's Mainframe Hall of Fame, which includes such luminaries as Dr. Gene Amdahl and Admiral Grace Hopper. For more on the history of data security, see the SHARE Security Project’s History at www.share-sec.com/history.html.