Primerica, Inc., headquartered in Duluth, GA, is a leading distributor of financial products to middle-income households in North America. Primerica representatives educate their Main Street clients about how to better prepare for a more secure financial future by assessing their needs and providing appropriate solutions through term life insurance, which they underwrite, and mutual funds, annuities and other financial products, which they distribute primarily on behalf of third parties. In addition, Primerica provides an entrepreneurial full- or part-time business opportunity for individuals seeking to earn income by distributing the company’s financial products. As of Dec. 31, 2012, Primerica insured more than 4.3 million lives, and approximately 1.9 million clients maintained investment accounts with the company. Serving the financial needs of such a sizeable client base brings with it the need to maintain confidentiality and security.
Enterprise Executive recently spoke with Primerica CIO David Wade to discuss the company’s methods of managing and securing data.
Enterprise Executive: David, I understand that, for a period of time, Primerica was part of Citigroup. Can you explain how your IT operations and decisions worked within this environment?
David Wade: Yes, when we became part of Citigroup back in 1998, the information systems department was organized to banking standards even though we were an insurance company. We were also audited to banking standards by both our own internal auditors and Citigroup’s audit departments.
As a member of the Citigroup organization, we had to implement and operate according to the Citigroup Information Technology Management Policy, otherwise known as the CITMP. The CITMP provided us with a framework of policies, standards, procedures and guidelines for the 10 areas of an information systems department, defined as Architecture, Change Management, Project Management, Information Security, Continuity of Business (COB), Problem Management, Resource Management, Internet Management, Software Management, and Contracting and Outsourcing. Since being spun off from Citigroup in April 2010, we’ve continued to operate under the CITMP framework.
The beauty of operating within a framework of policies, standards and procedures is the discipline it brings to managing the areas of an information systems department. Policies for each area of the department are enforced by standards, and each standard is enforced by procedures. If a change is made to a policy, then it’s necessary to determine if any changes to the standards and procedures must be made. We were audited to all three of these items in the framework when we were part of Citigroup, and we continue to be audited to these items today.
EE: What made you decide you needed to look into data discovery on your mainframe?
Wade: For us, data loss is a critical area of risk. Like any good organization, we take those risks very seriously.
In the past, an emerging risk was data on laptops and desktops that wasn’t encrypted. So, we locked everything down by encrypting all data on laptops and certain desktops. We also locked down all the USB ports in the company, so anywhere a DVD or CD can be written, the data is encrypted; we wanted to make sure unauthorized data couldn’t be taken outside the company.
The newest emerging risk is the improper storing of any credit card data. We have 85 GB of data stored on disk and significantly more on tape. It’s imperative for us to identify the locations of all credit card data, confirm the data is in its proper location, and reconcile on a periodic basis that such data remains properly managed. To accomplish this task, we needed a solution that would enable us to scan all our data, identify any credit card numbers and permit us to perform certain other functions.
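The core of the kind of scan Wade describes can be sketched in a few lines. The following is an illustration only, not Primerica’s actual tooling: candidate card numbers are found by pattern-matching digit runs (optionally separated by spaces or dashes) and then validated with the standard Luhn checksum to weed out ordinary numeric data.

```python
import re

# Candidate card numbers: 13-19 digits, optionally separated by spaces or dashes.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    checksum = 0
    # Walk the digits right to left, doubling every second digit;
    # if doubling yields a two-digit value, subtract 9.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Scan text and return digit strings that look like valid card numbers."""
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A production tool would go further, e.g. restricting matches to known issuer prefixes and handling EBCDIC and packed-decimal encodings on the mainframe side, but the match-then-checksum pattern above is the usual first filter.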
Another reason we were looking for a data discovery tool was to confirm that all our data was correctly classified. All our data is classified as public, internal, confidential or restricted. We wanted to be able to confirm that we didn’t have data that was incorrectly classified. Therefore, we needed a way to reconcile all our data classifications, and resolve whatever mistakes had been made through manual procedures. Once that’s completed, we wanted to implement a reconciliation program that would run on a regular basis to continually confirm that no data is incorrectly classified.
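A reconciliation check of the kind described here can be sketched as a comparison between two views of each dataset: the label that is recorded for it, and the level its scanned content actually requires. The four levels match those Wade names; the data structures and function are hypothetical illustrations, not Primerica’s implementation.

```python
# Classification levels, ordered from least to most sensitive.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def reconcile(recorded: dict[str, str], discovered: dict[str, str]) -> list[str]:
    """Return the names of datasets whose recorded classification is weaker
    than the level the content scan says they require.

    recorded   -- dataset name -> label currently on file
    discovered -- dataset name -> level required by the scanned content
    """
    mismatches = []
    for name, required in discovered.items():
        # A dataset with no recorded label is treated as public (the weakest).
        current = recorded.get(name, "public")
        if LEVELS[current] < LEVELS[required]:
            mismatches.append(name)
    return sorted(mismatches)
```

Run against the full inventory on a schedule, the returned list is exactly the exception report such a reconciliation program would produce: an empty list means no data is classified below what its content requires.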