IT Management

In the past, data center security was simpler to implement; in fact, there was a time when data center managers could see all the inputs and outputs to the mainframe in one or two rooms. That was when data was input via punched cards and output was recorded on tape or on impact printers using green bar paper. A terminal had to be defined to the system as a logical unit, so data center managers knew exactly who had access to the mainframe, and even when someone was logged in, they knew exactly what that user was doing. Because computing resources were so precious, any abnormal behavior had a noticeable effect on the environment. The SNA network was secure because every device on the network was defined. A physical survey of the data center was all that was needed to confirm it was secure.

New technology and constant market pressure have made simple data center security a thing of the past. Today’s market requires constant improvements in worker productivity. The advent of personal computing, Local Area Networks (LANs), and the Internet has led to an era of “pervasive connectedness,” even for the mainframe. Moreover, information access is no longer limited to a select few; “information for all” is now being embraced, even though the mainframe’s original design and scope never contemplated it.

In some ways, the modern mainframe is no different from any UNIX or Windows server. It’s TCP/IP-connected and serves the data needs of almost every endpoint on the network. Just because the mainframe still resides in the glass house doesn’t mean it’s as safe as it was 25 years ago; the mainframe sitting on the raised floor of the data center is no longer the “air gapped” icon of data protection.

Equally, the speed with which sensitive data on the mainframe can be converted to cash by cyber crooks has increased. The effects of pervasive connectedness aren’t limited to encroachments on legacy mainframe identity and access management schemes: a credit card number stolen from the data center can be sold over the Internet in seconds. According to various industry and analyst estimates, 70 percent of all strategic data still resides on the mainframe, and much of that data relates to customer and consumer information. Yet contemporary market conditions demand always-on access to that data to support online shopping and the administration of retail banking, brokerage, and other financial accounts. Significant actions with potentially grave repercussions can now be performed in a completely faceless manner. The drive to satisfy customer demands for convenience in a global market has created new risks that must now be mitigated.

The consequence of increasing customer demands is that the risk to data becomes an issue not just for the technologists who manage it, but for legislators and industry regulators. Protection of data privacy is now one of the principal areas of focus in every technology audit or review; it has become mandatory to demonstrate that appropriate best-practice controls are in place to avoid disruptions to technology and business plans.

From Terminal Server to Data Server

The original workload profile of the mainframe was much different from today’s. Dumb terminals with green screens illuminated the office space with CICS panels, green bar reports littered desks, and fewer users interacted directly with the mainframe. Today, the number of users who interact directly with the mainframe through Interactive System Productivity Facility (ISPF) or CICS may be no larger, but the number of users and programs that access information on the mainframe has grown exponentially.

Systems programmers can now only guess how many external systems they’re interfacing with across CICS, WebSphere, MQ, DB2, and the other mainframe systems that provide the back-end and transaction management backbone of Web-based applications. To a useful degree, the security servers have kept pace with the growth in the number of TCP/IP connections to the mainframe, but their utility in protecting sensitive information remains limited.

Security servers such as IBM’s RACF, CA Top Secret, and CA ACF2 provide almost no protection beyond the perimeter of the mainframe. When data is transferred from z/OS to another operating system, it’s no longer under the umbrella of protection these systems provide. How is that data protected once it passes from z/OS to UNIX, Linux, or Windows? Can you really rely on those operating systems’ access controls to protect the data in the same way, using the same resources and controls as on z/OS?
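One way to mitigate this gap is to make protection travel with the data itself: encrypt sensitive extracts before they leave z/OS, so the receiving platform’s access controls are no longer the only safeguard. The minimal sketch below illustrates the idea in Python using the third-party cryptography package; it is an illustration of the general technique, not a feature of RACF, Top Secret, or ACF2, and the protect_extract function and file names are hypothetical.

# Illustrative sketch: once an extract leaves z/OS, RACF, ACF2, or Top Secret
# no longer mediates access to it. Encrypting before transfer binds
# protection to the data instead of to the perimeter.
from cryptography.fernet import Fernet

def protect_extract(plaintext_path: str, ciphertext_path: str) -> bytes:
    """Encrypt a mainframe extract before it is shipped off-platform.

    Returns the key, which must be managed separately (for example, in a
    key-management system) and never shipped alongside the ciphertext.
    """
    key = Fernet.generate_key()            # symmetric key; store it in a KMS
    cipher = Fernet(key)
    with open(plaintext_path, "rb") as f:
        token = cipher.encrypt(f.read())   # authenticated encryption (AES-CBC + HMAC)
    with open(ciphertext_path, "wb") as f:
        f.write(token)
    return key

# On the receiving UNIX, Linux, or Windows host, the file is inert without
# the key: Fernet(key).decrypt(open("extract.enc", "rb").read())

The design point is simple: whoever holds the key controls access to the data, independent of whichever operating system the ciphertext lands on.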

The mainframe data center initially found itself ill-prepared to address the combined risks of increased connectedness and the elevated value of sensitive information to the online crook. The industry had been focused on operational excellence: accomplishing more work through automation while containing the costs of infrastructure and application development. Adding data protection to existing applications and workflows isn’t trivial, particularly since some applications have been developed over a generation and represent highly critical, complex sets of business rules (which, unfortunately, are sometimes not well-documented outside the code itself). While auditors and regulators demand stronger protection of sensitive data to mitigate new risks, stakeholders and stockholders still require that operations remain lean to contain costs.
