Security

Compliance is no longer an optional facet of doing business. Government agencies, service providers and even customers may require a declaration and proof of compliance. Where ad hoc reporting once filled the requirement for compliance documentation, businesses today must have a well-documented and defensible set of standards that proves their compliance with the relevant requirements. This means standardized tests, documented results and quantifiable reports. It’s also no longer enough to meet the requirements of your home government; you must meet the privacy and compliance requirements of any country in which your data resides.

When measuring compliance, each standard must be capable of standing alone. Each standard is a piece in the mosaic, but it must be a discrete, self-contained piece of the overall compliance picture. Within each standard, we will find the following (a rough sketch of this structure follows the list):

• A description of the standard specifying what’s being tested. This might include the name of the test, such as the Security Technical Implementation Guide (STIG) reference number, and other important documentation. The report produced by the standard must be as self-sufficient as possible. The report should point out what’s non-compliant (so you can take corrective action) and may also point out what’s compliant (if something is compliant, you don’t need to take any action, so reporting it is optional). 
• An indication of any versioning. While building a standard, or working toward compliance, you may take several steps in the process, and versioning helps document those steps. Also, standards aren’t static; STIGs, for example, are updated on a quarterly basis.
• A definition of the environment(s) to be tested by this standard. There may be more than one environment in a standard. Specifying the test items (environment) separately from the tests ensures the environments can be dynamic. The environment is simply the collection of data points used by the tests for the standard in determining compliance.
• A description of any exceptions to the standard. An exception is a short-term variance, something to be overlooked for now, but that must be addressed before full compliance can be achieved. These exceptions must be documented, reviewed and accepted as a temporary situation.
• A description of any exemptions to the standard. An exemption is a long-term or permanent variance to the standard—something that’s architecturally impossible to correct and is, therefore, exempt from the standard when compliance is being measured. These exemptions must be documented and accepted by the party requiring the compliance as permanent conditions that can’t be brought into compliance.
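
To make this structure concrete, here’s a minimal sketch in Python of how a standard and its pieces might be represented. The class and field names (Standard, Test, exceptions, exemptions and so on) are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass, field

@dataclass
class Test:
    """One discrete check within a standard (e.g., one STIG item)."""
    test_id: str       # e.g., the STIG reference number
    description: str   # what is being tested
    expected: str      # the required outcome that proves compliance

@dataclass
class Standard:
    """A self-contained, self-describing compliance standard."""
    name: str
    version: str                                           # standards change, e.g., quarterly
    tests: list[Test] = field(default_factory=list)
    environment: list[str] = field(default_factory=list)   # the data points the tests examine
    exceptions: set[str] = field(default_factory=set)      # short-term, documented variances
    exemptions: set[str] = field(default_factory=set)      # long-term or permanent variances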

It’s important that the description of the standard be clear and well-formed. The description must stand alone, making it clear what the tests are, what environment they run against and what outcome will prove compliance. You must have a list of the tests and their required outcomes, and the expectations must be clear and quantifiable. Each test has one of four possible outcomes (a sketch of these outcomes follows the list):

• Compliant. The environment matches the test.
• Non-compliant. The environment fails to match the test (or may match the non-compliant test if the test is so constructed).
• Exception. The environment has a documented and accepted failure of a test.
• Not applicable. The environment doesn’t match that described in the test; for example, a Windows machine doesn’t have RACF, so any such test isn’t applicable.
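
Continuing the sketch above, these four outcomes might be encoded as a simple enumeration; the names are illustrative.

from enum import Enum

class Outcome(Enum):
    COMPLIANT = "compliant"            # the environment matches the test
    NON_COMPLIANT = "non-compliant"    # the environment fails to match the test
    EXCEPTION = "exception"            # a documented, accepted short-term failure
    NOT_APPLICABLE = "not applicable"  # the test doesn't apply to this environment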

Exceptions and exemptions count as neither compliant nor non-compliant. Standards may also vary in specificity. Some are broad in their approach and leave room for interpretation both in what may be tested and how it’s to be tested; for example, “There can be no personally identifiable information in the file.” At the other end of the spectrum is this: “SYS1.PARMLIB member IEAAPP00 must have an entry for xyzpdq.” That’s specific and leaves little room for interpretation. These differences in specificity must be planned for and addressed in your implementation of the compliance testing.
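
To illustrate the difference, a narrow requirement can often be automated directly, while a broad one must first be interpreted into concrete, testable rules. A rough sketch, assuming the member contents have already been retrieved as text (the function name is hypothetical):

def parmlib_member_has_entry(member_text: str, entry: str) -> bool:
    """Specific, directly automatable check: does the member contain the required entry?"""
    return any(line.strip().startswith(entry) for line in member_text.splitlines())

# A broad requirement such as "there can be no personally identifiable
# information in the file" must first be interpreted into concrete rules
# (patterns, field lists and so on) before it can be automated at all.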

As the collection of tests within the standard builds and grows, the need for versioning becomes apparent. For example, the January version of the tests within a standard allows for a specific set of exceptions, whereas the February version should have fewer exceptions, and any exceptions that can’t be corrected should be moved to exempt status. In this way the environment, and possibly the set of tests within the standard, grows month by month or iteration by iteration.
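
As an illustration of how two versions might differ, using the Standard structure sketched earlier (the data set names below are placeholders):

# January version: two data sets are accepted as short-term exceptions.
standard_jan = Standard(
    name="System data sets are protected against casual reading",
    version="January",
    exceptions={"SYS1.EXAMPLE.A", "SYS1.EXAMPLE.B"},   # placeholder names
)

# February version: one exception has been corrected; the other can't be
# and is moved to exempt status, so the exception list shrinks.
standard_feb = Standard(
    name=standard_jan.name,
    version="February",
    exceptions=set(),
    exemptions={"SYS1.EXAMPLE.B"},                     # placeholder name
)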

Before any testing can take place, the environment(s) must be built; that is, the context for the standard must be established. For example, if the standard is “System Data Sets are Protected Against Casual Reading,” the environments might include a list of system data sets or a listing of data set protections from the security product (RACF, ACF2 or Top Secret).
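
One way to picture the built environment is as a snapshot of those data points, for example a mapping from data set name to its universal access setting. A minimal sketch, assuming the protections have already been extracted from the security product (the names and values are illustrative):

# Each data point pairs a system data set with its universal access setting (UACC).
environment = {
    "SYS1.PARMLIB": "NONE",
    "SYS1.LINKLIB": "NONE",
    "SYS1.SOMELIB": "READ",          # illustrative name and value
    "SYS1.USER.PROCLIB": "UPDATE",
}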

Once the environment(s) are built, the tests can be run against the list of resources. In this case, perhaps all SYS1 data sets are expected to have a UACC less than READ (UACC=NONE). However, the SYS1.MQxxx data sets aren’t yet ready to be included (an exception in this version), and SYS1.USER.PROCLIB is never to be included because it’s designed to be updated by the user community; the protected environment is architected so that this data set must be exempted. It therefore counts as neither compliant nor non-compliant, since it’s deliberately exempt from the rules.
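
A minimal sketch of how each resource might be classified against that test, using the Standard and Outcome sketches above; the comparison against UACC=NONE is a simplified stand-in for a real “less than READ” check:

def classify(dataset: str, uacc: str, standard: Standard):
    """Classify one resource against the 'UACC less than READ' test."""
    if dataset in standard.exemptions:
        return None                      # exempt: neither compliant nor non-compliant
    if dataset in standard.exceptions:
        return Outcome.EXCEPTION
    if not dataset.startswith("SYS1."):
        return Outcome.NOT_APPLICABLE    # this test only covers SYS1 data sets
    return Outcome.COMPLIANT if uacc == "NONE" else Outcome.NON_COMPLIANT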

The tests that comprise the standard (and prove the claim of compliance) are then applied against each of the resources in the environment and tallied. The resulting tally is a specific, quantifiable figure that can be expressed either as a ratio (45 of 50 data sets are compliant) or as a percentage (we’re 90 percent compliant). This process is repeated for each criterion addressed by the compliance verification and for each change made to the compliance requirements, which, as we all know, are ever-changing.
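
Carrying the sketch one step further, the tally itself is simple arithmetic over the per-resource outcomes; 45 compliant resources out of 50 applicable ones is 90 percent.

def tally(outcomes):
    """Summarize per-resource outcomes as a ratio and a percentage.

    Exceptions, exemptions (None) and not-applicable results stay out of the
    denominator, mirroring the rule that they count as neither compliant nor
    non-compliant.
    """
    applicable = [o for o in outcomes if o in (Outcome.COMPLIANT, Outcome.NON_COMPLIANT)]
    compliant = sum(1 for o in applicable if o is Outcome.COMPLIANT)
    percent = 100.0 * compliant / len(applicable) if applicable else 100.0
    return compliant, len(applicable), percent   # e.g., (45, 50, 90.0)

# Usage, tying the sketches together:
# tally(classify(ds, uacc, standard_feb) for ds, uacc in environment.items())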

Conclusion

To determine whether or not an installation is compliant, you need to: 

• Clarify for which standard(s) compliance is a goal.
• Determine your interpretation of the standard and whether it applies to your environment; there will likely be tests that don’t apply.
• Determine how to collect all the data points that will be tested by tests within that standard (build the environment[s]).
• Determine which, if any, exceptions and exemptions need to be defined within the environment(s).
• Apply the tests against the environments, keeping a tally so you can report percentage of compliance currently achieved.
• Keep tests within your standards current (update the tests as the standards may be updated).
• Use versioning so you can keep current test sets separate from future test sets and keep past tests locked away.

Hopefully, this article has made it clear that while compliance is necessary, you can approach it in a structured and orderly fashion.