IT decision-makers expect application development and QA teams, already contending with mounting workloads and limited resources, to quickly deliver complex applications that provide rapid time to value.

High-quality, rigorously tested applications that provide a competitive edge should be the norm. But in reality, the process is often cumbersome and fraught with complications. Ironically, the very process of testing often puts your customers’ private information and your company’s reputation at risk, thus jeopardizing the fundamental goal of providing best-in-class customer experiences that allow an organization to flourish. How is this so?

The only way to ensure applications will perform well under real-world conditions is to mimic the data that will flow through them once they go live. IT must use information from its production systems (information containing customer names, Social Security numbers and credit card numbers, for example) to test and validate the logic, functionality and performance of an application. Yet as critical as this step is, many organizations test with too much data, the wrong data and unprotected data.

The Good News

The good news is that capturing production data and disguising it for testing can be done in a way that protects sensitive customer information, increases developer productivity and gets applications to market faster. The key lies in how privacy rules are assigned and enforced, and in how the test data is created.

Test data optimization (TDO) is a proven way to create efficient, effective and secure test data. Before we explore TDO in more detail, let’s examine some less efficient methods of creating test data.

Common Methods for Creating Test Data

Companies use any number of homegrown and vendor-supported methods for creating test data, with varied results. But these methods are flawed on several levels:

The fake method. One common approach companies take is to generate realistic, yet fake, data. While this method ensures personal information is protected, it’s an extremely time-consuming process that can’t anticipate all the potential variables. This process often results in test data creation and maintenance overhead and incomplete application tests—both of which contribute to increased costs and delayed application delivery.
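
To make that overhead concrete, here is a minimal sketch of the fake method, assuming hypothetical field names and hand-maintained value pools. Every pool and format has to be curated by hand, which is why this approach struggles to anticipate all the variables found in real production data.

```python
# Illustrative sketch of the "fake" method: generate realistic-looking but
# entirely synthetic customer records. Field names and value pools are
# hypothetical; a real shop would need far more variety to cover edge cases.
import random

FIRST_NAMES = ["Alice", "Bob", "Carla", "Deshawn"]
LAST_NAMES = ["Nguyen", "Okafor", "Smith", "Rivera"]

def fake_customer(customer_id: int) -> dict:
    """Build one synthetic customer row for a test database."""
    return {
        "customer_id": customer_id,
        "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
        # Synthetic SSN-shaped value; not a real identifier.
        "ssn": f"{random.randint(100, 899):03d}-{random.randint(10, 99):02d}-{random.randint(1000, 9999):04d}",
        # Synthetic 16-digit card-shaped value; will not pass issuer checks.
        "card_number": "".join(str(random.randint(0, 9)) for _ in range(16)),
    }

if __name__ == "__main__":
    for row in (fake_customer(i) for i in range(1, 6)):
        print(row)
```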

The manual method. Another approach is to use real production data and then manually write computer programs to disguise personal information. In this scenario, each application test would run a full cycle of all the system data, which is expensive both in terms of CPU utilization and test data storage.

This particular practice does ensure the application is thoroughly tested, but because teams are manually writing programs to protect the data, they may unknowingly create and apply the wrong privacy rules. This tends to happen when personnel are forced to analyze hundreds of objects, such as files and data tables, and then manually apply disguise rules to each and every object. If a privacy rule changes, every one of those objects has to be maintained to ensure the changed or new rule is applied. This can become an unmanageable problem in large shops.
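
For comparison, here is a minimal sketch of the manual method under the same hypothetical schema: each table gets its own hand-written masking program, so the disguise logic is duplicated, and a rule change means hunting down every copy.

```python
# Illustrative sketch of the manual method: a hand-written masking routine
# per table/file, with the disguise logic repeated in each one. Table and
# column names are hypothetical.
import hashlib

def _mask(value: str) -> str:
    """One shop-specific disguise rule: replace the value with a short hash."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_customer_table(rows):
    """Masking program written specifically for the CUSTOMER table."""
    for row in rows:
        row["name"] = _mask(row["name"])
        row["ssn"] = _mask(row["ssn"])
    return rows

def mask_orders_table(rows):
    """A separate, near-duplicate program for the ORDERS table.
    If the SSN rule changes, this copy must be found and updated too."""
    for row in rows:
        row["ssn"] = _mask(row["ssn"])
        row["card_number"] = _mask(row["card_number"])
    return rows
```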

Using TDO

Ideally, staff should be able to identify sensitive data elements within their shop’s metadata, and build disguise rules in the development environment for these data elements. When a process is run, personnel can look at what files or tables are being acted upon, and determine if any of the identified data elements exist. If they do exist, then rules can be applied at that time to disguise the data at all levels and across all platforms. In the event a disguise rule changes, it’s one simple change that will be picked up the next time the object is acted upon.
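
The following is a minimal sketch of what such a metadata-driven approach might look like, assuming a hypothetical rule registry keyed on logical data-element names: each rule is defined once, and any file or table whose metadata names that element picks up the current rule at run time.

```python
# Sketch of a metadata-driven approach in the spirit of TDO: disguise rules
# are defined once per sensitive data element and applied to whatever files
# or tables contain that element when a process runs. Element names, rules
# and tables here are hypothetical.
import hashlib
from typing import Callable, Dict, List

# Central rule registry: one entry per sensitive data element.
# Changing a rule here changes it for every object that carries the element.
DISGUISE_RULES: Dict[str, Callable[[str], str]] = {
    "CUSTOMER_NAME": lambda v: "CUSTOMER-" + hashlib.sha256(v.encode()).hexdigest()[:8],
    "SSN": lambda v: "XXX-XX-" + v[-4:],
    "CARD_NUMBER": lambda v: "*" * 12 + v[-4:],
}

def disguise(rows: List[dict], column_to_element: Dict[str, str]) -> List[dict]:
    """Apply whichever rules match the data elements present in this object.

    column_to_element maps a physical column name to the logical data-element
    name recorded in the shop's metadata.
    """
    for row in rows:
        for column, element in column_to_element.items():
            rule = DISGUISE_RULES.get(element)
            if rule and column in row:
                row[column] = rule(str(row[column]))
    return rows

# Usage: the same registry serves any table whose metadata names these elements.
orders = [{"order_id": 1, "cust_nm": "Jane Doe", "card_no": "4111111111111111"}]
print(disguise(orders, {"cust_nm": "CUSTOMER_NAME", "card_no": "CARD_NUMBER"}))
```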
