Sep 23 ’13
Big Data: Big Security Risk?
Like many other open source projects, Hadoop has followed a path that hasn't focused much on security. To use Big Data effectively, it needs to be properly secured. However, force-fitting it into an older security model can compromise more than you think, while locking it down under a legacy model can cripple performance.
To effectively secure Big Data, follow these security tips, which aren’t addressed by prior security models:
Hold on to the keys to the kingdom. In a hosted environment, the provider holds the keys to your secure data. If a government agency legally demands access, the provider is obligated to hand over your data. Even where such access is necessary, the onus should be on you to control when, what and how much access you give to others, and to keep track of the information released so you can audit it internally. Keep the keys to the kingdom with you; an encryption proxy can provide tighter control.
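As a rough illustration of why holding your own keys matters, the sketch below encrypts a record client-side before it ever reaches a hosted provider. The cipher here is a toy HMAC-based keystream (not any particular proxy product, and not production-grade; a real deployment would use AES through a vetted library). The point is only that the provider stores ciphertext while the key never leaves you.

```python
import hmac
import hashlib
import secrets

def keystream_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR data with an HMAC-SHA256 keystream.
    Illustration only -- use AES-GCM via a vetted library in practice."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# You hold the key; the hosted provider only ever sees ciphertext.
key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
record = b"ssn=123-45-6789"
stored_at_provider = keystream_cipher(key, nonce, record)
recovered = keystream_cipher(key, nonce, stored_at_provider)  # same op decrypts
```

Because the provider never holds the key, a demand served on the provider yields only ciphertext; the decision to decrypt stays with you.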
Encrypt data selectively. Encrypting all of your data can slow performance significantly. To avoid that, some Big Data, Business Intelligence and analytics programs encrypt only the portions of data deemed sensitive. It's imperative to use a Big Data ecosystem that's intelligent enough to encrypt data selectively.
A separate and more desirable option is to run faster encryption/decryption. Solutions such as the Intel Hadoop Security Gateway use Intel chip-based encryption acceleration (the Intel AES-NI and SSE 4.2 instruction sets), which is several orders of magnitude faster than software-based encryption and more secure, as the data never leaves the processor for an on- or off-board crypto processor.
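A minimal sketch of selective encryption, assuming a hypothetical classification of which fields are sensitive. The `toy_encrypt` stand-in is just base64 obfuscation in place of a real (ideally hardware-accelerated) cipher; the structure is what matters: only classified fields pay the encryption cost.

```python
import base64

# Hypothetical classification of which fields count as sensitive.
SENSITIVE_FIELDS = {"ssn", "diagnosis"}

def toy_encrypt(value: str) -> str:
    """Placeholder for a real cipher (e.g., AES-NI-accelerated AES)."""
    return "enc:" + base64.b64encode(value.encode()).decode()

def protect_record(record: dict) -> dict:
    """Encrypt only the fields classified as sensitive; leave the rest
    in the clear so queries on them stay fast."""
    return {k: toy_encrypt(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

row = {"patient_id": "p-1001", "diagnosis": "flu", "visit_date": "2013-09-23"}
protected = protect_record(row)
```

Non-sensitive fields such as `visit_date` remain queryable without any decryption step.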
Safeguard identifiable, sensitive data. Sensitive data can be classified into two groups: risk or compliance. Safeguarding it might include one of the following:
• Completely redact the information so the original can never be recovered. While this is the most effective method, you lose access to the original data if it's ever needed.
• Tokenize the sensitive data using a proxy tokenization solution. You can create a completely random token made to look like the original data, so it fits the format and won't break back-end systems. The sensitive data is stored in a secure vault, and only the associated tokens are distributed.
• Encrypt the sensitive data using mechanisms such as Format Preserving Encryption (FPE), so the encrypted output fits the format of the original data. Exercise care when selecting a solution to make sure it has strong key management and strong encryption capabilities.
Don’t let applications/services access raw data. This could be disastrous. Instead, enforce data access controls as close to the data as possible. Distribute the data along with its associated properties and classification levels, and enforce them where the data lives. One way to do this is to expose the data through an Application Programming Interface (API) that controls exposure locally, based on data attributes.
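One way to picture such an API is the sketch below, where `FIELD_LEVELS` and the numeric clearances are made-up classification attributes. The accessor sits with the data and releases each field only if the caller's clearance meets its level; callers never see the raw record.

```python
# Hypothetical per-field classification levels (higher = more sensitive).
FIELD_LEVELS = {"name": 1, "email": 2, "ssn": 3}

def read_record(record: dict, caller_clearance: int) -> dict:
    """API-style accessor enforced next to the data: unknown fields
    default to the highest level, and under-cleared callers get
    the field withheld rather than the raw value."""
    return {k: v for k, v in record.items()
            if FIELD_LEVELS.get(k, 3) <= caller_clearance}

row = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
visible = read_record(row, caller_clearance=2)
# {'name': 'Ada', 'email': 'ada@example.com'} -- ssn withheld
```

Because the filter runs where the data is, adding a new consumer application cannot accidentally bypass the classification.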
Don’t allow APIs to be exposed unprotected. Many Big Data components communicate via APIs (e.g., HDFS, HBase and HCatalog). Allowing such powerful APIs to be exposed with little or no protection can lead to disastrous results. The most effective way to protect your Big Data goldmine is to introduce a touchless API security gateway in front of the Hadoop clusters; the clusters can be made to trust calls only from the secure gateway. A hardened Big Data security gateway lets you mitigate all the aforementioned security threats using rich authentication and authorization schemes.
Protect that NameNode. This is important enough to be a separate issue. Architecturally, if no proper resource protection is enforced, the NameNode becomes a single point of failure, rendering the entire Hadoop cluster useless. It can be as easy as someone launching a Denial of Service (DoS) attack against webHDFS: excessive activity alone can bring it down.
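A gateway shielding the NameNode might apply per-client rate limiting along these lines. This is a toy sliding-window limiter, not part of any named product; the client IDs and limits are illustrative.

```python
import time
from collections import deque

class RateLimiter:
    """Toy sliding-window rate limiter a gateway could place in front
    of webHDFS, so one noisy client can't flood the NameNode."""
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.calls = {}  # client_id -> deque of request timestamps

    def allow(self, client_id: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.calls.setdefault(client_id, deque())
        while q and now - q[0] > self.window:
            q.popleft()  # drop timestamps outside the window
        if len(q) >= self.max_requests:
            return False  # reject before the request reaches the NameNode
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
results = [limiter.allow("client-a", now=0.0) for _ in range(5)]
# [True, True, True, False, False]
```

Requests beyond the quota are rejected at the gateway, so a flood never consumes NameNode resources.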
Identify, authenticate, authorize and control the data access. You need an effective Identity Management and Access Control system in place to make this happen. You also need to identify the user base and consistently control access to the data based on access control policies, without relying on an additional identity silo. Ideally, authentication and authorization for Hadoop should leverage existing identity management investments. The enforcement should also take time-based restrictions into account (for example, certain users may access certain data only during specific periods).
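The time-based restriction mentioned above could look roughly like this, with a hypothetical policy table and default-deny behavior; in practice these policies would come from your existing identity management system rather than a hard-coded dict.

```python
from datetime import time as dtime, datetime

# Hypothetical policy table: (role, dataset) -> allowed time window.
POLICIES = {
    ("analyst", "sales_data"): (dtime(9, 0), dtime(17, 0)),
}

def is_access_allowed(role: str, dataset: str, when: datetime) -> bool:
    """Authorize only if a policy exists for this role/dataset pair
    AND the request falls inside the permitted time window."""
    window = POLICIES.get((role, dataset))
    if window is None:
        return False  # default deny: no policy means no access
    start, end = window
    return start <= when.time() <= end

ok = is_access_allowed("analyst", "sales_data", datetime(2013, 9, 23, 10, 30))
# True: business hours
late = is_access_allowed("analyst", "sales_data", datetime(2013, 9, 23, 22, 0))
# False: outside the window
```

Default deny matters here: an unlisted role or dataset is refused rather than silently granted.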
Monitor, log and analyze usage patterns. Once you’ve implemented effective, classification-based data access control, you need to monitor and log usage patterns and constantly analyze them to ensure there’s no unusual activity. It’s crucial to catch unusual activity and access patterns early, so dumps of data never make it out of your repository to an attacker.
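As a crude sketch of such monitoring, the toy below counts rows read per user and flags anyone exceeding a fixed threshold. The threshold and user names are invented; a real system would baseline normal behavior statistically instead of hard-coding a limit.

```python
from collections import Counter

class UsageMonitor:
    """Toy usage monitor: log per-user read volume and flag users
    whose cumulative reads exceed an alert threshold."""
    def __init__(self, alert_threshold: int):
        self.threshold = alert_threshold
        self.counts = Counter()  # user -> total rows read

    def log_access(self, user: str, rows_read: int) -> bool:
        """Record an access; return True if it trips the alert."""
        self.counts[user] += rows_read
        return self.counts[user] > self.threshold

monitor = UsageMonitor(alert_threshold=10_000)
normal = monitor.log_access("alice", 200)       # routine query volume
suspect = monitor.log_access("mallory", 50_000)  # looks like a bulk dump
```

Flagging the dump as it happens, rather than in a quarterly audit, is what gives you a chance to stop the exfiltration.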
As more and more organizations rush to implement and harness the power of Big Data, care should be exercised to secure it. Extending existing security models to fit Big Data may not solve the problem; in fact, it might introduce additional performance issues. A solid security framework needs to be thought out before organizations can adopt enterprise-grade Big Data.