
In the past, when data needed to be securely transported through a network, the common solution was to protect the transport layer through Transport Layer Security (TLS) or Secure Sockets Layer (SSL). This enabled data to be transported from point to point in a secure manner. While this worked, it quickly became apparent it wasn't flexible enough to protect complex services and application programming interfaces (APIs), because these services and APIs are often composite in nature, consisting of many smaller aggregated components. Providing security from the client to the first point of entry, instead of to the point of actual consumption, makes little sense.

The next step in the evolution toward a safe, flexible method of transporting data was to provide protection at the field or message level. The thought was that if the specific sensitive fields, or the entire message, were protected, then the data was secure. The problem with this type of data protection, however, is that while there may be a definition of what's considered "sensitive data," static devices and policies have no way to define the context in which certain data should be considered sensitive. Therefore, anyone who is authenticated to access sensitive data has access to all such data, regardless of its context. The result is a largely static solution.

Remember when we used data loss prevention (DLP) technologies to secure data?

The problem with using DLP to protect data is that it's passive in most cases. DLP technology identifies sensitive data based on some context/policy combination and then blocks the transaction. While this can work for rigid sets of enterprise security policies, it may not work well for cloud environments, where security policies need to be much more flexible than a simple context/policy combination. Otherwise, an authorized person who genuinely needs access to the secured data could become very annoyed when transactions are stopped at each attempt.

Delivering security through both content- and context-aware data protection provides the flexibility today's environments demand. What if there was a way to provide data protection that's identity-aware, location-aware and invocation-aware while being policy-based, compliance-based and, more important, very dynamic? In other words, what if you could provide data protection based on content and context awareness?

Essentially “content/context-aware” data protection is data protection on steroids. Gone are the days in which you can get your systems compliant and be done with having to protect data. Today, data is no longer staying within the compliant and controlled confines of the enterprise data center; it’s constantly moving through other systems. Since data is constantly on the move, getting your systems compliant, risk averse and secure simply isn’t enough anymore.
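
To make "content/context-aware" concrete, here's a minimal sketch of the kind of decision such a system makes. The request fields and the policy rules are hypothetical, invented purely for illustration; a real product would evaluate administrator-defined policies:

```python
# A minimal, hypothetical sketch of a content/context-aware decision.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str      # identity awareness
    device: str         # "laptop", "mobile", ...
    network: str        # "corporate", "public", ...
    contains_pii: bool  # result of content classification

def decide_protection(req: Request) -> str:
    """Combine content (what the data is) with context (who, where, how)."""
    if not req.contains_pii:
        return "allow"               # non-sensitive content passes through
    if req.user_role != "authorized":
        return "block"               # sensitive content, unauthorized requester
    if req.device == "laptop" and req.network == "corporate":
        return "allow-with-tls"      # trusted context: transport security suffices
    return "tokenize"                # untrusted context: circulate tokens only

print(decide_protection(Request("authorized", "mobile", "public", True)))  # tokenize
```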

The Complexity of Securing Data

A good example of data on the move and its inherent risks is when a cloud computing solution is used. When data moves through cloud providers, especially the public cloud, and mobile devices such as tablet computers and cell phones are added to the mix, the issue of data security becomes much more challenging. Sprinkle data residency issues on top, such as European and Singaporean laws, and the risk of data security issues is quite evident. Timely concerns about NSA snooping are the cherry on top.

If cloud services are being used, do you have a firm grasp of where your data resides? Take a close look at your cloud provider contract. Are there any guarantees on where the data is stored (data residency and/or compliance)? Are there any guarantees on where the data will be processed (meaning location of data processing)? Is the cloud provider willing to share the liability with you if they lose your or your customer’s data? While some cloud providers are better than others, the terms of such agreements can be very scary when it comes to protecting data. No wonder companies are scared to death about protecting their data when moving to cloud solutions.

Data residency issues loom especially large for some European customers. When you're providing multicountry services, these customers specify not only where data at rest must reside, but also mandate that data processing take place only in locations where they're permitted to process it.

Here's how complex the security becomes: Imagine you're dealing with financial, healthcare and other sensitive data for a specific country, and the customer asks not only that you store that data within the legal boundaries of that country, but also that you process it within data centers located in that country. This means you need to sanitize the data, route the messages to services located in a specific place, desensitize the data for processing and sanitize it again for storage.
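
As a rough illustration of that routing step, the sketch below picks an in-country processing endpoint based on a residency tag. The endpoint map, URLs and message shape are invented purely for illustration:

```python
# A hypothetical sketch of residency-aware routing.
IN_COUNTRY_ENDPOINTS = {
    "DE": "https://de.processing.example.com",
    "SG": "https://sg.processing.example.com",
}

def route(message: dict) -> str:
    """Pick a processing endpoint that satisfies the residency mandate."""
    country = message["residency"]
    endpoint = IN_COUNTRY_ENDPOINTS.get(country)
    if endpoint is None:
        raise ValueError(f"no compliant data center for {country}")
    return endpoint

# Sensitive fields are assumed to be sanitized (tokenized) before routing.
msg = {"residency": "DE", "payload": "tok_9f2c"}
print(route(msg))  # https://de.processing.example.com
```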

Hackers are attacking your systems for one reason only—data. You can spin that any way you want, but at the end of the day, they aren’t attacking your systems to see how you’ve configured your workflow or how efficiently you processed your orders. They’re looking for the gold nuggets of information they can either resell or use to their own advantage to gain monetary benefits. This means your files, data in transit, storage data, databases, archived data, etc. are all vulnerable and will mean something to the hacker.

Gone are the days when someone sat in mom's basement, hacking into U.S. military systems to boast about their abilities to a small group of friends. Modern-day hackers are sophisticated, well-funded, for-profit operations, backed either by big organized cyber gangs or by some arm of an organization or country.

Protecting the data is an absolute must. You need to protect your data at rest (regardless of how old the data might be), data in motion (going from somewhere to somewhere—whether it’s between processes, services, enterprises or into/from the cloud or to storage) and data being processed or used.

Sanitize and Desanitize Data on the Fly

The Intel Expressway Tokenization Broker (ETB) was developed to address these types of problems. ETB is a secure hardware or software appliance designed to sanitize and desanitize your data on the fly. As such, it functions as a tokenization broker for any enterprise application tasked with handling sensitive data. ETB works by tokenizing sensitive data in documents or API calls, storing the encrypted data in a protected, secure vault where it can be accessed only by authenticated applications and users, and circulating only the tokens.
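
The following sketch illustrates the general tokenization-vault pattern just described; it is not Intel ETB code. It uses the third-party `cryptography` package's Fernet recipe as a stand-in cipher, and the vault, token format and authentication check are hypothetical:

```python
# A minimal sketch of the tokenization-vault pattern, not Intel ETB code.
import secrets
from cryptography.fernet import Fernet

_key = Fernet.generate_key()   # in production, keys belong in an HSM/key manager
_cipher = Fernet(_key)
_vault = {}                    # token -> encrypted value

def tokenize(value: str) -> str:
    """Encrypt the value into the vault and return an opaque token."""
    token = "tok_" + secrets.token_urlsafe(12)
    _vault[token] = _cipher.encrypt(value.encode())
    return token

def detokenize(token: str, caller_authenticated: bool) -> str:
    """Release the original value only to authenticated callers."""
    if not caller_authenticated:
        raise PermissionError("only authenticated applications may detokenize")
    return _cipher.decrypt(_vault[token]).decode()

card_token = tokenize("4111 1111 1111 1111")
print(card_token)                    # safe to circulate through downstream systems
print(detokenize(card_token, True))  # recovered only inside the trust boundary
```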

The following elements are embedded in the Intel ETB and should be included in any solution you consider:

• Security of your sensitive message processing device. What's the point of having a security device inspecting your crucial traffic if it can't be trusted? You need a solution that has the certifications to back up the vendor's claims of being secure. This means a third-party validation agency should have tested the solution and certified it as strong enough for an enterprise, data center or cloud location. Relevant certifications include FIPS 140-2 Level 3, Common Criteria EAL 4+, DoD PKI and STIG vulnerability testing, and NIST SP 800-21, along with support for a hardware security module (HSM). The validation must come from recognized authorities, not just from the vendor.
• Support for multiple protocols. When protecting your data, it's imperative to choose a solution that supports more than the HTTP/HTTPS, Simple Object Access Protocol (SOAP), JavaScript Object Notation (JSON), Asynchronous JavaScript and XML (AJAX) and Representational State Transfer (REST) protocols. You need to consider whether the solution supports all standard protocols known to the enterprise/cloud, such as Java Message Service (JMS), MQ, Enterprise Message Service (EMS), FTP, TCP/IP (and secure versions of all the aforementioned) and Java Database Connectivity (JDBC). More important, you also need to determine whether the solution can natively interface with industry-standard protocols such as SWIFT, ACORD, FIX, HL7, MLLP, etc. Finally, you need to determine whether the solution can extend its options to support any custom protocols you might have. In other words, the solution you're considering should give you the flexibility to inspect your ingress and egress traffic, regardless of how the data traffic flows.
• Ability to read a very wide variety of message formats. You need to be able to look into any format of data flowing into, or out of, your system when the necessity arises. This means you should be able to inspect not only XML, SOAP, JSON and other modern message formats; the solution should also retrofit to your existing legacy systems to provide the same level of support. Message formats such as COBOL, ASCII, binary, EBCDIC and other unstructured data streams are equally important. Sprinkle in industry message formats such as SWIFT, NACHA, HIPAA, HL7, EDI, ACORD, EDIFACT, FIX and FpML. You also need a solution that can look into MS Word, MS Excel, PDF, PostScript and HTML to help protect data in these formats.
• Ability to sense not only the sensitive nature of the message, but who is requesting it, in what context and from where. Essentially, you should be able to identify sensitive data based not only on content but also on context. Intent, inferred through heuristics, matters much more than simply sensing that something is going out or coming in. This means you should be able to sense who is accessing what, when, from where and, more important, from what device. Once you identify that information, you should be able to determine how you want to protect the data. For example, if a person is accessing specific data from a laptop within the corporate network, you can let the data go with transport security alone, assuming the requestor has sufficient rights to access that data. But if the same person tries to access the same data using a mobile device, you can tokenize the data and send only the token to the mobile device. This also solves the problem of unknown location: all other conditions being the same, tokenization will occur based on a policy that senses the request came from a mobile device.
• Ability to dynamically tokenize, encrypt or apply format-preserving encryption based on the need. This gives you the flexibility to encrypt certain messages/fields, tokenize certain messages/fields or perform format-preserving encryption (FPE) on certain messages.
• Support for the strongest possible encryption algorithms and the most random possible number generation for tokenization. You should verify your solution not only has strong encryption algorithm options available out of the box (such as AES-256 and SHA-256), but also delivers cutting-edge security options as they're introduced, such as support for the latest algorithm versions and security updates.
• Maximum key protection. This is the most important point. When you're looking at solutions, make sure the solution not only meets all the aforementioned points, but also provides maximum protection for the security keys. This means the key storage should be encrypted and offer separation of duties (SOD) capabilities, key-encrypting keys, strong key management options, key rotation, re-key options when keys need to be rotated, expire or are lost, key protection, key lifetime management, key expiration notifications, etc. (A minimal key-rotation sketch follows this list.) In addition, you need to explore whether there's an option to integrate with your existing in-house key manager, such as RSA Data Protection Manager (DPM). (The last thing you need is to disrupt the existing infrastructure by introducing newer technology.)
• Encryption of messages while preserving the format to facilitate normal processing. This is really important if you want to do tokenization or encryption on the fly without interrupting back-end or connected client application processing. When data is encrypted and its format is preserved, the data not only looks and feels the same as the original, but the receiving party can't tell the difference (see the format-preserving sketch after this list).
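
On the key-management point, here's a minimal, illustrative key-rotation sketch using the `cryptography` package's MultiFernet recipe. It stands in for what a real key manager or HSM would do; every name here is illustrative:

```python
# A minimal key-rotation sketch; real keys belong in an HSM or key manager.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
ciphertext = old_key.encrypt(b"sensitive record")

# Rotation: introduce a new primary key while old data stays readable.
new_key = Fernet(Fernet.generate_key())
ring = MultiFernet([new_key, old_key])  # encrypts with new, decrypts with either
ciphertext = ring.rotate(ciphertext)    # re-key existing data under the new key

print(ring.decrypt(ciphertext))         # b'sensitive record'
```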
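
And on format preservation, the sketch below shows the idea using random format-preserving tokenization. Note this is not a cryptographic FPE mode such as NIST FF1; it merely illustrates keeping the length, separators and last four digits so downstream systems continue to work:

```python
# A sketch of format preservation via random tokenization (NOT real FPE).
import secrets

def format_preserving_token(card_number: str, keep_last: int = 4) -> str:
    total_digits = sum(c.isdigit() for c in card_number)
    out, seen = [], 0
    for c in card_number:
        if not c.isdigit():
            out.append(c)  # preserve separators and layout
            continue
        seen += 1
        if seen > total_digits - keep_last:
            out.append(c)  # keep the trailing digits for recognizability
        else:
            out.append(str(secrets.randbelow(10)))  # randomize the rest
    return "".join(out)

print(format_preserving_token("4111 1111 1111 1234"))  # e.g. '7302 9964 0417 1234'
```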

Conclusion

Data protection has evolved greatly over the last few years. It's no longer simply a matter of achieving system compliance; solutions such as the Intel ETB increase security and reduce risk for large enterprises with minimal effort and without disrupting existing systems.