IT Management

More and more analysts are reaching the conclusion that exploitable software flaws are the number-one security vulnerability confronting companies today. 

Most contemporary software is full of exploitable flaws that can be used to defeat access controls and inject code that corrupts or hijacks data. The bad guys know it. The software companies know it. Surprisingly, however, few users know it. 

Those consumers who do understand the problem typically fail to act on it. For example, they neglect to factor any security or vulnerability criteria into their selection of software to support mission-critical business processes. 

As a result, we continue to place data at risk. Regardless of the amount of money we spend on the coolest new biometric authentication wares, the latest encryption tools, nifty Host-based and Network-based Intrusion Prevention Systems (HIPS and NIPS), or the latest combo firewall/spam blocker/anti-virus appliances, the value of these investments is almost nil as long as software itself opens doors to hackers. 

We aren’t talking pie-in-the-sky philosophy here. Security risks in software are being built right into the code. Preventing this exposure from manifesting itself as a data disaster will require a mix of better tools, better coding practices, and better auditing and decision-making by buyers. 

Software Vulnerability Has Deep Roots  

For any given function an application needs, there are 70 or more ways to code it. Of these, only a handful are secure. 
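
To make that concrete, here is a minimal sketch in C, using a hypothetical routine rather than code from any particular product. Both versions copy a user-supplied string into a fixed-size buffer; the first uses the classic strcpy() call, which overruns the buffer on long input, while the second bounds the copy.

```c
#include <stdio.h>
#include <string.h>

#define BUF_LEN 32

/* Insecure: strcpy() copies until it finds a NUL byte, so any input longer
 * than BUF_LEN - 1 characters writes past the end of the stack buffer. */
void save_username_insecure(const char *input)
{
    char buf[BUF_LEN];
    strcpy(buf, input);                 /* classic buffer overflow */
    printf("saved: %s\n", buf);
}

/* More secure: snprintf() writes at most BUF_LEN bytes and always
 * NUL-terminates, so oversized input is truncated instead of overflowing. */
void save_username_secure(const char *input)
{
    char buf[BUF_LEN];
    snprintf(buf, sizeof buf, "%s", input);
    printf("saved: %s\n", buf);
}

int main(void)
{
    const char *long_name = "a-name-far-longer-than-thirty-two-characters-will-fit";
    save_username_secure(long_name);    /* truncates safely */
    /* save_username_insecure(long_name) would corrupt the stack. */
    return 0;
}
```

Both functions do the same job; only one of them hands an attacker a way in.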

Now, think bigger. When you have a cadre of software developers, whether in-house or outsourced, tasked with implementing your software design, chances are many of them are unaware of the risks posed by certain coding practices. Still others may be too set in their ways (or too lazy) to trade the unsafe methods they use to code specific functions for more secure ones. 

Standards for writing secure code are evolving. Microsoft has a few gurus writing tomes on the subject, which is not to say this knowledge has translated into better security in the code coming from Redmond. Still, Mr. Gates has been trying to raise awareness of the problem and has set the bar pretty high, not only for his own folks but for the entire commercial software industry. 

Secure coding standards are emerging, which again isn't to say that every manager of a software development project has established standards identifying unsafe coding practices and their preferred alternatives. Time is short in most software development initiatives, and little of it is left over for training. Just getting the work done in the time allotted too often takes precedence over getting the code written securely. 

Even where secure coding standards have been established, enforcement is often woefully lacking. In any case, few tools are available to help quality assurance folks spot insecure code. 

And once the insecure code is compiled, the vulnerabilities are shrink-wrapped with the product and shipped to the consumer. The consumer, in turn, has no way to see them, because of a dearth of static analysis tools that can evaluate software for backdoors and security gaps after it has been compiled. 

Current Testing Is Insufficient  

We could go on forever about the various methods software vendors use to address the problem. They use testing methodologies such as "fuzzing" to spot egregious flaws in their software. For the most part, these efforts focus on errors that might cause applications to fail during use, though they sometimes catch security risks as well.
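
For readers unfamiliar with the technique, here is a rough sketch of the idea in C (a hypothetical POSIX test harness, not any vendor's actual tooling). The fuzzer throws random byte strings at a toy routine under test inside a child process and flags any input that makes it crash.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Toy routine under test, with a deliberate flaw: when the first byte is
 * 0xFF it copies the whole input into a 16-byte stack buffer. */
static void parse_record(const unsigned char *data, size_t len)
{
    char field[16] = {0};
    if (len > 0 && data[0] == 0xFF)
        memcpy(field, data, len);        /* overflows when len > 16 */
    if (field[0] == '!')                 /* keep the buffer from being optimized away */
        putchar('.');
}

int main(void)
{
    srand((unsigned)time(NULL));

    for (int i = 0; i < 10000; i++) {
        /* Generate a random input of random length. */
        unsigned char input[256];
        size_t len = (size_t)(rand() % 256);
        for (size_t j = 0; j < len; j++)
            input[j] = (unsigned char)(rand() % 256);

        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                  /* child: run the target on the random input */
            parse_record(input, len);
            _exit(0);
        }

        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))         /* a crash (e.g. SIGSEGV) marks a potential vulnerability */
            fprintf(stderr, "iteration %d: input of %zu bytes crashed the target (signal %d)\n",
                    i, len, WTERMSIG(status));
    }
    return 0;
}
```

Commercial fuzzers generate their inputs far more cleverly than this, but the principle is the same.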

However, these techniques test only a small subset of the universe of possible code problems and vulnerabilities. Code usually ships with known flaws, which are then addressed by a ceaseless litany of patches. That's why "patch management" utilities have become such a cottage industry. 

What needs to happen is a ranking or rating system that evaluates application software on the coding practices built into it and the vulnerabilities they represent. Maybe then consumers could go into a software purchase with a better understanding of the insecurities they are introducing into their otherwise protected environments.