Mar 31 ’13

Establishing Data Access Management Consistent With Guiding Security Principles for DB2 10 for z/OS

by Tatiana Lavrentieva in Enterprise Tech Journal

In our increasingly virtualized, fast-paced world, data has become both the biggest enterprise asset and a liability. Information security policies and guiding security principles have become an essential part of the corporate risk management framework to mitigate the exposure risk of corporate data. Guiding security principles—a set of concise, high-level engineering best practices—can provide an effective foundation for systems security and should be considered as appropriate guidance when designing and maintaining IT systems.

This article outlines a practical, cohesive approach to translate the key guiding security principles into role-based access control mechanisms; it considers DB2 z/OS as a practical example and explores its security features available in DB2 10:

• New authorization model to support fine-grained DB2 system authorities
• Roles and trusted context
• Support for z/OS identity propagation using distributed identity filters.

A practical approach to control access to data in the DBMS from all entities ranging from system authorities to applications may be useful to many IT practitioners in corporate environments, including application, security and data architects, and Database Administrators (DBAs).

Business Drivers

Commonly recognized security principles can provide an efficient framework and direction for making architectural, design and operational decisions when implementing security policies that pursue risk-driven security objectives and simultaneously facilitate legal and regulatory compliance. Implementing systematic, risk-based security policies based on a comprehensive set of security principles directly contributes to the stability of business applications and positively influences consumer confidence and customer loyalty.

The fundamental, commonly recognized security principles IT organizations use as architectural statements or directives are:

• Least privilege
• Separation of responsibilities
• Maintaining accountability and traceability of a user or process.

There’s another key guiding security principle that has recently emerged: Compliance doesn’t equal security.

Efforts to achieve compliance with standards and regulations aren’t a substitute for implementing risk-based security policies defined based on guiding security principles. Security regulations and standards imposed by various governing bodies are really “minimum” standards that focus on specific subsets of data and security threats. If IT organizations had been implementing risk-based security policies consistent with guiding security principles from the start, there’s a good chance many of these standards and regulations would never have been required. Defining risk-based security policies consistent with guiding security principles simplifies and shortens the path to compliance with existing or future regulations and standards, making compliance a sustainable, repeatable process.

This approach facilitates satisfying most regulatory and standards requirements without extensive changes. Focusing on implementing risk-based security policies consistent with guiding security principles better prepares enterprises to protect their assets against various security threats and is arguably a more efficient, cost-effective way to achieve regulatory compliance.

Role-Based Data Access Control

In corporate IT environments, guiding security principles are frequently included in security models and architectures. However, the problem isn’t merely what story needs to be told; it’s how to tell this story. When it comes to practical implementation of these principles for DBMSes, the picture isn’t encouraging. Data is the lifeblood of today’s enterprises and DBMSes are the heart of IT ecosystems. Yet overprivileged user accounts and system authorities, loss of user identities when connecting to database systems, and loss of control over data access are more the norm than the exception. Reasons for this include:

• Historical. Many corporate security efforts focus on establishing perimeter protection rather than protection of data at the source; application and database security often exist as separate silos.
• Functional. Database systems may lack support for the necessary security functionality.
• Operational. There’s a skills gap between security analysts, database administrators and application development teams.
• Cultural. Compliance management and security solutions are treated as separate IT initiatives.

Steps taken to meet compliance requirements may not necessarily result in implementing widespread database security policies. For example, data subject to compliance requirements can be moved into dedicated, secured databases on isolated networks designed to meet the requirements of a specific standard. The cost of this solution is significant, but security breaches may still occur in the remaining, inadequately protected databases and the monetary damage from these breaches can still be substantial (see Figure 1).

In many corporate IT environments, the implemented data access management approach has these shortcomings:

• Data access management is enforced primarily in middle-tier applications. Only coarse-grained access rules are implemented directly in DBMSes. Users who bypass applications to run ad hoc queries have overprivileged access to data. System authorities such as SYSADM have unrestricted access to data.
• Accounts with overassigned privileges are used when accessing enterprise data from client applications and for administrative and operational purposes. Most client applications use pooled connections with associated overprivileged accounts to access databases on behalf of users. When a database connection pool is used, the database connection is never established on behalf of the user who performs the transaction and the database never receives the user identity. Database audit records collect the identities of overprivileged users associated with database connection pools instead of the users on whose behalf transactions were executed.
• Access privileges are assigned to specific users or groups, which are collections of users, rather than to roles—collections of entitlements. Groups are static in the sense that group membership is determined based on user identities. Conversely, security roles are granted to users dynamically based on conditions such as user name, origin of request or the time of day and are aligned with job functions.
• In environments where client applications hosted on distributed platforms and DB2 are on z/OS, there’s an inherent gap when controlling access to data in DB2 for z/OS due to the mismatch between distributed and mainframe identities. This mismatch essentially prevents the use of fine-grained access control models such as Row and Column-based Access Control (RCAC). After all, how can you assign fine-grained permissions for data access to a specific row or column in a DBMS if all you have is an arbitrary user identity associated with pooled connections rather than the actual user identity (see Figure 2)?

Access Control Mechanisms

To establish an effective access control mechanism where any type of access to the database is automatically subject to access control, we must start by discussing the access control mechanisms available:

Standard SQL access control mechanism: SQL language supports the Discretionary Access Control (DAC) mechanism where SELECT, INSERT, UPDATE, and DELETE privileges are granted or revoked with the SQL GRANT and REVOKE statements:

GRANT privileges
ON object
TO users
[WITH GRANT OPTION]

DAC defines object permissions at the discretion of the originator or owner at the table level. Access privileges may be passed on to other users by the object’s owner. Usually, DAC policies are implemented through views. However, complex, fine-grained DAC access policies are difficult to change and tedious to maintain. Lack of proper security assurance, violation of the separation of responsibilities principle, and management overhead are reasons why SQL DAC is inefficient when used on its own.
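To illustrate the view-based approach, a minimal sketch along these lines (table, view and user names are hypothetical) restricts a grantee to a subset of rows and columns without exposing the base table:

```sql
-- Hypothetical example: expose one department's rows, without salary data
CREATE VIEW DEPT_D11_STAFF AS
  SELECT EMPNO, LASTNAME, WORKDEPT
  FROM   EMP
  WHERE  WORKDEPT = 'D11';

-- Grant read access on the view only; the base table remains protected
GRANT SELECT ON DEPT_D11_STAFF TO USER1;
```

Every new policy variation requires another view plus its grants, which is exactly the maintenance overhead described above.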

Mandatory Access Control (MAC) mechanisms: MAC is an access control model where the system enforces security policies on objects independent of user operations. DBMSes offer a variation of MAC mechanisms called Label-Based Access Control (LBAC). LBAC lets DBAs set security labels at the row or column level and mediate access to data based on the identity and label of the user and the label of the row or column. In addition to LBAC, DB2 10 for z/OS offers RCAC, which lets you set up access rules to a table at the row and column level.

The problem with LBAC is that this approach requires data classification; it may quickly become too granular and is likely to be burdensome for most IT environments. Label-based models originated and have been widely used in military and government environments and likely are most appropriate for these settings. RCAC addresses the shortcomings of LBAC, but still lacks alignment with the least privilege security principle and dynamic separation of responsibilities.
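As a sketch of what RCAC looks like in DB2 10 for z/OS (table, permission and role names are hypothetical), a row permission attaches a search condition to the table and is then activated:

```sql
-- Hypothetical example: only users holding the TELLER role see customer rows
CREATE PERMISSION TELLER_ROW_ACCESS ON CUSTOMER
  FOR ROWS WHERE VERIFY_ROLE_FOR_USER(SESSION_USER, 'TELLER') = 1
  ENFORCED FOR ALL ACCESS
  ENABLE;

-- Permissions take effect only after row access control is activated
ALTER TABLE CUSTOMER ACTIVATE ROW ACCESS CONTROL;
```

Unlike a security view, the permission is enforced on every access path to the table, including ad hoc queries.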

Role-Based Access Control (RBAC): RBAC establishes security policies by granting access privileges to roles rather than individuals or groups. In the RBAC model, users can’t transfer privileges to other users. This addresses the ownership rights problem in the SQL DAC approach. RBAC differs from MAC, as it grants permissions to individual transactions rather than specific objects. Flexibility is supported through roles, which, unlike groups, can be assigned dynamically based on a variety of attributes ranging from time-based to security certificates and host names. RBAC simplifies reallocating users from one role to another and altering privileges for an existing role. RBAC can be combined with DAC and RCAC capabilities to enforce separation of responsibilities, follow the least privileges access principle, keep management overhead in check, ensure fine-grained data access and facilitate regulatory compliance. 
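A minimal RBAC sketch in DB2 SQL (role and table names are hypothetical) grants privileges to a role rather than to users or groups:

```sql
-- Hypothetical example: entitlements are attached to a job function
CREATE ROLE CLAIMS_PROCESSOR;

GRANT SELECT, UPDATE ON CLAIMS TO ROLE CLAIMS_PROCESSOR;
```

Note that in DB2 for z/OS a role only takes effect within a trusted connection, as discussed later in this article.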

RBAC, complemented with DAC and MAC capabilities, is the most effective option from a total-cost-of-ownership perspective. That’s attributable to the significant labor and maintenance burden associated with using the DAC or MAC access control models alone, as well as the need to meet regulatory requirements.

An efficient data access mechanism that enforces centrally managed, role-based security policies directly in the DBMS should include these capabilities (see Figure 3):

• Access roles, which are a collection of privileges defined in accordance with the separation of responsibilities and least privileges principles
• Trust context as a collection of roles dynamically associated with a user based on user identity and a multi-dimensional attribute set (i.e., security certificates, host names)
• End-to-end auditing, as directed by the accountability and traceability principle, so users in various roles don’t abuse legitimate database privileges for unauthorized purposes.

Role-Based Data Access Control for System Authorities

Executing the separation of responsibilities principle is an essential part of the access control security model and key objective of many compliance initiatives such as the Sarbanes-Oxley Act, Payment Card Industry Data Security Standard (PCI DSS), etc. Essentially, it focuses on restricting the authority of individuals to minimize opportunities for fraud and human error so no individual can control all information system administrative and support functions. 

The principle of the least privilege complements the principle of separation of responsibilities and states that any entity (system or human) should have access to only the resources (platforms, data) required to perform their job functions. The principle of the least privilege can be implemented as a “need-to-know” approach where entities have access to only the functions and information relevant to their roles and duties.

To uphold these security principles, DB2 10 for z/OS introduced a new authorization model to support fine-grained system authorities and separate the people responsible for the security of the corporate data from the people responsible for the integrity and maintenance of data systems. When the system parameter SEPARATE_SECURITY is set to “YES,” the previously unrestricted power of SYSADM authority is divided among several new database roles to ensure no one has full control over both the data and configuration of the system. These new database roles separate the data security and the account management from the traditional SYSADM role. Specifically:

• DBADM can now be a system authority for managing all databases in a DB2 10 for z/OS subsystem without the ability to access data or control access to data.
• SECADM is a new authority for performing security tasks without the ability to change or access data. An individual in this role is responsible for managing data access privileges, which entails creating, modifying, and dropping trusted context, roles and audit policies.
• DATAACCESS is a new authority for accessing data without the ability to manage data or control access to data.
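With SEPARATE_SECURITY set to “YES,” granting the new fine-grained authorities to roles might be sketched as follows (role names are hypothetical; note that SECADM itself is designated through the SECADM1/SECADM2 system parameters, which can name a role, rather than through GRANT):

```sql
-- Hypothetical example: a system DBADM that can maintain all databases
-- but can neither read data nor control access to it
GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL
  ON SYSTEM TO ROLE DB_MAINT_ROLE;

-- DATAACCESS: access data without managing objects or granting privileges
GRANT DATAACCESS ON SYSTEM TO ROLE DATA_READER_ROLE;
```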

To ensure consistent implementation of the RBAC access model, each authority can be associated with a role definition, a standalone database object that isn’t dependent on any specific accounts (authorization IDs). DB2 roles were introduced in DB2 9 for z/OS as an object defining a collection of privileges aligned with a job function. With the new system authorities in DB2 10 for z/OS, it’s now possible to define a comprehensive, role-based access control model for both application and administrative access.

Role-Based Data Control Mechanism for Application Access

As discussed previously, most applications use the same approach to access data in DB2. In this approach, coarse-grained data access privileges are granted to a group or individual users. This is because middleware servers normally maintain a pool of connections to the DBMS and each connection is associated with the same authentication ID. It isn’t practical to maintain a connection for each individual user since it negatively impacts performance. So most systems use overprivileged authentication IDs, which grant access to data spanning many actual job responsibilities. To complicate the matter further, applications hosted on distributed platforms are associated with distributed user identities and DB2 on z/OS resources are protected using RACF user identities. This mismatch prevents the use of fine-grained access control models such as RCAC.

Let’s consider the details of this traditional approach for applications running on WebSphere Application Server (WAS) as shown in Figure 4. The overprivileged RACF user identity (username/password) associated with pooled database connections is hard-coded as a Java Authentication and Authorization Service (JAAS) alias in WAS and stored directly in the file system of application servers in a security.xml file as an encoded value.

Custom password encryption can be enabled but it doesn’t address the key issues with the current approach to access control:

• Violation of the least privilege and separation of responsibilities principles by associating a database connection pool with a single overprivileged DB2 authorization ID. Permissions are granted to either an overprivileged individual ID or a group, which is a collection of users established exclusively based on user identity, and not associated with a specific activity.
• Loss of user identity and the ability to trace the activity to a specific entity.

To facilitate establishing effective role-based access control mechanisms, DB2 9 and 10 for z/OS introduced new security concepts: roles and trusted context. A role is a collection of privileges aligned with job functions. In DB2 9 and 10 for z/OS, access to DB2 objects can be restricted based on role membership when DB2 trusted contexts are defined; in other words, in DB2 for z/OS, roles are only available as part of trusted connections.

Trusted context is a DB2 object that controls an application’s access to data based on trust attributes: connection AuthID, domain name, encryption type and list of host names (or IP addresses, though host names are preferable from an operational standpoint) from where connections can be made. With trusted context:

• Users eligible to use trusted context get access privileges through roles associated with trusted context
• The current connection user identity can be switched with or without requiring further authentication.
   
To establish a role-based application access to data for an n-tier application, the following approach can be used:

• Define roles in DB2 and middle-tier applications so they’re aligned with the actual business roles.
• Implement declarative or programmatic role checks in the application to establish role membership; map roles to trusted connections.
• Associate DB2 roles and trusted contexts so DB2 acts as the actual policy enforcement point. Distributed users will be granted data access privileges associated with a specific role once they meet the requirements set by application logic and trusted context (see Figure 5).
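The steps above might be sketched in DDL as follows (the server address, AuthID, user and role names are all hypothetical):

```sql
-- Role holding the actual data privileges, aligned with a business role
CREATE ROLE TELLER_ROLE;
GRANT SELECT, UPDATE ON ACCOUNTS TO ROLE TELLER_ROLE;

-- Trusted context: only connections made with the connect-only AuthID
-- APPCONN from the named application server are trusted; eligible users
-- acquire TELLER_ROLE when the connection is reused on their behalf
CREATE TRUSTED CONTEXT WAS_TELLER_CTX
  BASED UPON CONNECTION USING SYSTEM AUTHID APPCONN
  ATTRIBUTES (ADDRESS 'appserver1.example.com')
  DEFAULT ROLE TELLER_ROLE
  ENABLE
  WITH USE FOR USER1, USER2 WITHOUT AUTHENTICATION;
```

Connections that don’t match all trust attributes are treated as ordinary connections and never acquire the role’s privileges.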

The caveat is that, in environments where applications are hosted on distributed platforms and DB2 on z/OS, DB2 can’t verify role membership for distributed users. While DB2 10 for z/OS supports further restriction of data access by activating row and column access control aligned with DB2 roles, this functionality, along with new built-in functions such as VERIFY_TRUSTED_CONTEXT_ROLE_FOR_USER, isn’t much help in environments where identity propagation isn’t supported.

Consider our example for an application deployed on WAS, which requires access to data in DB2 for z/OS (see Figure 6). When trusted context is configured on the DB2 side, applications running on WAS will use the same pool of database connections, but the authentication method changes to “trusted connection” and the AuthID hard-coded in the JAAS alias is now an account with connect-only privileges.

Implementing a role-based control mechanism for application access consistent with the least privileges principle using DB2 roles and trusted context provides these advantages:

• Increased data protection from human errors and malicious actions. It decreases the impact of defects in SQL queries and SQL injection attacks (error or attack containment).
• Simplified auditing. Using DB2 roles to grant privileges and align DB2 roles with business roles helps document and streamline database access.
• Decreased risk from compromised middle-tier servers. Even if the DB2 authentication ID/password associated with pooled connections falls into the wrong hands, it doesn’t result in a security breach because this account has only connect privileges and no data access privileges. Trusted context will also prevent access from unauthorized servers even if a legitimate connection credential is used.
• Preserved performance advantages of pooled database connections.

Supporting Accountability

Establishing a role-based access control model using DB2 trusted context and DB2 roles lets you manage an application’s access rights to DB2 resources in a way that’s consistent with the least privileges and separation of responsibilities principles. However, in corporate environments, employees sometimes play several roles. They also may be granted temporary access rights beyond their normal duties. In other words, there may be situations where enterprise data is accessed using legitimate role privileges but inappropriately modified and then changed back so it still satisfies the integrity rules.

The principles of accountability and traceability establish that all actions must be traceable to the entity on whose behalf the action is being taken; the history of data manipulations is maintained and available for examination by authorized parties. To implement this principle, several mechanisms must be supported:

• Association of a unique identity with a sequence of actions so that action or sequence of actions can be unquestionably traced back to this identity
• Creation of a record describing the sequence of actions carried out, along with the unique identity associated with this sequence, a timestamp and other attributes that may be used to verify the occurrence of the specific action.
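In DB2 10 for z/OS, the record-creation side of this can be handled with audit policies, which a SECADM defines by inserting a row into the SYSIBM.SYSAUDITPOLICIES catalog table and activates with an audit trace. A hedged sketch follows (policy, schema and table names are hypothetical, and the category column value shown is illustrative; the catalog documentation defines the exact codes):

```sql
-- Hypothetical example: audit all SQL access to one sensitive table
INSERT INTO SYSIBM.SYSAUDITPOLICIES
  (AUDITPOLICYNAME, OBJECTSCHEMA, OBJECTNAME, OBJECTTYPE, EXECUTE)
VALUES ('AUDIT_ACCOUNTS', 'BANK', 'ACCOUNTS', 'T', 'A');
```

The policy is then started with a command along the lines of -START TRACE (AUDIT) DEST (SMF) AUDTPLCY(AUDIT_ACCOUNTS).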

Implementing the security principle of accountability and traceability for DB2 for z/OS requires establishing a mechanism to propagate user identities to DB2 to uphold end-to-end traceability. This can be challenging in many corporate environments where middle-tier applications run on distributed platforms. In these environments, transactions are associated with distributed identities while DB2 for z/OS operations run under RACF identities. Additionally, using pooled database connection mechanisms further complicates the problem.

Before DB2 10 for z/OS and z/OS Version 1, Release 11, the primary tool to propagate and map distributed identities to z/OS was the Enterprise Identity Mapping (EIM) mechanism. It provides the mapping of user identities from various user registries using z/OS Lightweight Directory Access Protocol (LDAP) server. The EIM mechanism was replaced by DB2 support for z/OS identity propagation using distributed identity filters and will be retired in the next DB2 release. The distributed identity filter rules define an association between a RACF user ID and one or more distributed identities. The rules don’t revalidate distributed identities; they map them to RACF user IDs. When identity filters are used and a user is audited on the z/OS operating system using SMF, the audit record contains the distributed identity (distinguished name), domain and the mapped RACF user ID. This establishes end-to-end accountability based on associating the transaction with a user on whose behalf this transaction was executed.
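As a sketch of defining a distributed identity filter (the RACF user ID, distinguished name and registry name are hypothetical), the RACF RACMAP command associates a distributed identity with a RACF user ID:

```
RACMAP ID(DBUSER1) MAP
  USERDIDFILTER(NAME('CN=Jane Doe,OU=Sales,O=Example'))
  REGISTRY(NAME('ldap.example.com'))
  WITHLABEL('Jane - web application')

SETROPTS RACLIST(IDIDMAP) REFRESH
```

The filter can name a single user or, through a partial distinguished name, map a whole group of distributed identities to one RACF user ID; the IDIDMAP class must be refreshed for new mappings to take effect.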

DB2 roles and trusted context apply the least privileges and separation of responsibilities principles and also support propagating distributed user identities from middle-tier applications. Assuming a distributed user identity is established, it can be propagated via DB2 trusted context. Specifically, for Java Enterprise Edition (JEE) applications running on WAS, if a distributed user identity is established in JEE security context, it will be automatically propagated to DB2 via DB2 trusted context (see Figure 7).

Advantages of using this approach include:

• Establishes end-to-end individual accountability across platforms using native mechanisms available in DB2 10 for z/OS. The distributed user identity received from middle-tier servers is maintained and logged along with the RACF identity.
• Bridges security silos between distributed applications and DB2 for z/OS, enabling the use of the fine-grained access control RCAC for environments where n-tiered applications use distributed identities
• Creates a more granular accountability structure by facilitating the process of tracing the transactions to a particular user in a specific role
• Simplifies problem analysis and reconstruction of event activities.

Conclusion

The value of establishing a centralized, role-based data access control mechanism goes beyond meeting regulatory and audit requirements. It offers a lower cost of ownership for organizations, reduces security risks and establishes a foundation for comprehensive data protection.

References

• John Viega and Gary McGraw, Building Secure Software: How to Avoid Security Problems the Right Way. Addison-Wesley, 2002
• Jerome H. Saltzer and Michael D. Schroeder, “The Protection of Information in Computer Systems,” Proceedings of the IEEE 63, 9 (1975): 1278-1308, www.cs.virginia.edu/~evans/cs551/saltzer/
• David F. Ferraiolo and D. Richard Kuhn, “Role-Based Access Controls,” National Institute of Standards and Technology, http://csrc.nist.gov/rbac/ferraiolo-kuhn-92.pdf
• New DB2 z/OS V10 authorization model for administrative authorities, http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.db2z10.doc.seca%2Fsrc%2Ftpc%2Fdb2z_adminauthorities.htm
• “z/OS Identity Propagation,” www.redbooks.ibm.com/redbooks/pdfs/sg247850.pdf.