“Any darn fool can make something complex; it takes a genius to make something simple.” This quote has been attributed to different people over the years, from Albert Einstein to Pete Seeger. Whatever its origin, it certainly applies to code. Given a task to accomplish, almost any developer can produce something that works. But how well will it be understood and maintained in the future? And how can you objectively measure something as complex as complexity?
You can try to count software lines of code, but that won’t tell you much about a program’s complexity. For that you need something that provides some insight into the guts of the program. The Halstead Metrics, for example, help developers glean more insight into the complexity of a program by looking at the operators and operands actually used (in COBOL terms, the verbs and variables) instead of just the lines of code. They give you a feel for the size of the program. However, while the Halstead Metrics help with comparing programs at a high level, they really don’t get inside the program—where the complexity lies. In other words, you can’t see that one part of the program is more complex than another. For that we have to go down a level into the program—to the Paragraphs, Procedures or Perform Groups.
Two programs could have the exact same number of lines, operators and operands, yet be totally different to maintain. Why? It has to do with how they’re coded, what they’re doing and how the developer chose to implement the logic. The core of complexity is made up of decisions. Think about our own lives. We all know that decisions can be difficult, but if they’re made one at a time, then they aren’t so bad. But the reality is, decisions are often linked. A decision will invariably lead us down one path, resulting in other decisions, which lead to other decisions, and so on. The overall complexity of the situation grows, and so does the importance of making the right decision at each point. This is the problem we face in code, and, therefore, we need ways to manage the resulting complexity.
One way to manage the resulting complexity would be to just code all these decisions into one area, be it paragraph, procedure or module. Grouping them together is easy and it’s the way things are often done. But if you group decisions together, understanding the logic becomes very difficult because you will have to comb through a large section of code to get to the area in question. It would be clearer to isolate important pieces of logic into separate areas. So, for example, after a main decision, you would branch to different areas and continue this process. Each section should be similar in size and provide balance. Think of it like a Calder Mobile. At each point, there’s a split with the other elements balancing below it. But if any part gets too large, the balance is gone. This process should make it easier—assuming you have good naming and comments—to quickly locate areas you need to examine. You will have far fewer decisions to understand within those areas.
So, it’s better to break down the decisions into the smallest blocks possible and isolate them. In the end, you still have the same number of decisions, but maintenance is far easier. But how do we judge the size of these blocks? How can we quickly know the number of decisions in each block? If we think in terms of the delicate balance in a Calder Mobile, how do we ensure the blocks are similar in size and we get the “balance” we need?
This is where the McCabe Complexity Metric can be helpful. The McCabe Metric is based on the number of decision points (points where the logic path splits) in a section of code. For example, a section of four statements that simply move data from one field to another, with no decisions among them, gets a score of 1. Add one decision and there are two paths, so it scores 2. If one of those branches contains another decision, there are three paths and it scores 3, and so on. This continues until, at times, you end up with some pretty big numbers. Generally, it’s best to keep each section of code at 10 or less. The theory is that ten is about the number of paths you can hold in your head; any more and you will get lost.
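The scoring described above can be sketched in a few lines of Python. This counter, the node types it counts and the sample snippets are illustrative assumptions, not a full McCabe implementation, but it shows how each decision point adds a path:

```python
import ast

def mccabe_score(source: str) -> int:
    """Approximate McCabe score: 1 plus one point per decision point."""
    score = 1  # a straight-line block of code has a single path
    for node in ast.walk(ast.parse(source)):
        # each of these constructs splits the logic path
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp)):
            score += 1
        elif isinstance(node, ast.BoolOp):
            # `and`/`or` add short-circuit paths, one per extra operand
            score += len(node.values) - 1
    return score

straight = "a = b\nc = d\ne = f\ng = h"             # four moves, no decisions
one_branch = "if a > b:\n    c = d"                 # one decision, two paths
nested = "if a > b:\n    if c > d:\n        e = f"  # a decision inside a branch

print(mccabe_score(straight))    # 1
print(mccabe_score(one_branch))  # 2
print(mccabe_score(nested))      # 3
```

The three snippets mirror the progression in the text: no decisions scores 1, one decision scores 2, a decision nested inside a branch scores 3.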
The McCabe Metric will help you find hidden knots of logic in your programs. We’ve all experienced them: the areas of code that seem needlessly complex. They tend to be error-prone, so developers end up spending most of their time in these areas, analyzing and debugging. To quickly spot the areas of greatest complexity, you may want to establish a threshold of 10, or a slightly higher number such as 15, and critically examine anything that rises above it. Often you will find that most of a program is under 10, but a few areas are higher, and sometimes much higher. The higher the number—and it can be in the thousands—the harder it will be to understand and implement a change for that section of code.
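As a quick sketch of that thresholding practice (the section names and scores below are invented for illustration), flagging everything above the cutoff takes only a few lines of Python:

```python
# Hypothetical McCabe scores per section (paragraph/procedure) of a program
section_scores = {
    "VALIDATE-INPUT": 6,
    "CALC-PREMIUM": 42,
    "FORMAT-REPORT": 8,
    "RATE-LOOKUP": 17,
}

THRESHOLD = 10  # or a slightly looser 15 for a first pass

# Anything above the threshold deserves a critical look
hotspots = {name: s for name, s in section_scores.items() if s > THRESHOLD}
print(hotspots)  # {'CALC-PREMIUM': 42, 'RATE-LOOKUP': 17}
```

Here two of the four sections stand out, which matches the pattern described above: most of a program is usually under the threshold, with a few knots well above it.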
Putting the McCabe Metric to Use
Making a practice of assigning a McCabe score, along with noting the time it takes to understand each program’s code, helps developers predict how long it will take to implement changes on projects with similar McCabe scores. For example, a developer could say to an end user, “This change is in a block of code that has more than 500 paths, but the last one we did for you had only 20. It will take X amount of time longer to assess, change and test.” It can also be useful for assigning resources to the project. Managers may want to use more experienced staff on areas with the highest complexity and task them with not only implementing the change, but also simplifying the code, if possible.
By extending the McCabe Metric further in combination with the Halstead Metrics, you can also compare applications in your portfolio. For example, the metrics can help you choose whether to acquire a new application or consolidate duplicate applications. To make this determination, you would want to look at the functionality and the complexity because you will need to support it in the future. There are many things you can do with this metric to help in this regard. One is to use a threshold and record how many sections are above the threshold. Another is to calculate an average for each program to aid in comparisons.
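Both portfolio techniques, counting sections above a threshold and averaging per program, can be sketched together. The program names and per-section scores here are hypothetical:

```python
# Hypothetical per-section McCabe scores for two programs in a portfolio
portfolio = {
    "BILLING": [4, 7, 22, 9, 15, 6],
    "CLAIMS":  [3, 5, 8, 6, 4, 7],
}

THRESHOLD = 10

summary = {}
for program, scores in portfolio.items():
    over = sum(1 for s in scores if s > THRESHOLD)  # sections above threshold
    avg = sum(scores) / len(scores)                 # per-program average
    summary[program] = (over, avg)
    print(f"{program}: {over} section(s) over {THRESHOLD}, average {avg:.1f}")
```

In this sketch, BILLING has two sections over the threshold and a higher average than CLAIMS, so it would likely cost more to support going forward.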
The McCabe Metric can also be used to increase the overall quality of code. One organization did a before/after comparison of a program before moving the changes to production. The Halstead numbers had gone up slightly, but reviewing the McCabe numbers revealed a big change that could have future consequences. Originally, most of the program had low scores, but two areas had numbers over 300. In the new version, one area scored more than 800, with the rest below 100. So the complexity was now clustered into a single area that would be extremely difficult to understand, change and test in the future. They sent it back for rework to make sure the complexity scores were reduced. Had they not had this practice in place, the code would have been moved to production, and at some point in the future, it would have caused difficulties in maintenance. Resolving it now, with the changes fresh in mind, should be easier than dealing with it later.
The McCabe Complexity Metric is also helpful in determining the minimum number of unique test cases needed to test the code. If a section has a score of 57, we know that at the very least 57 test cases are necessary to fully test it. Note the word unique: pushing 57 arbitrary records through the code may not exercise every path, but you know for certain that 20 records never will. Using the McCabe Metric along with a Control Flow diagram, you can quickly come up with the necessary test cases. The number then establishes a baseline for complete testing. This, combined with code coverage results, can provide valuable proof of your testing efforts.
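To make that testing baseline concrete, here is an invented Python function with two decision points, so a McCabe score of 3. Three test cases, one per basis path, are the minimum; any fewer could not cover every path:

```python
def discount(amount: int, is_member: bool) -> int:
    """Two decision points -> McCabe score of 3 -> at least 3 test cases."""
    pct = 0
    if amount > 100:   # decision 1: large orders get 5% off
        pct += 5
    if is_member:      # decision 2: members get another 10% off
        pct += 10
    return amount * (100 - pct) // 100

# One unique test case per basis path:
assert discount(50, False) == 50    # both branches false
assert discount(200, False) == 190  # first branch true only
assert discount(50, True) == 45     # second branch true only
```

A single record, say `discount(200, True)`, would leave whole branches untested, which is exactly why the raw record count alone is not enough.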
As code gets more and more complex, it becomes increasingly difficult to maintain, even for the most experienced developers. The McCabe Metric is among the most valuable metrics in a developer’s toolbox. Used alongside the Halstead Metrics, it can objectively assess and compare the complexity of new programs and applications, improve testing efforts and even increase the overall quality of code. Developers adept at using these metrics will gain valuable, objective measures of complexity, so they can tame—and perhaps reverse—the trend toward ever more complex programs.