Sometimes a little reverse psychology is necessary to effect a positive outcome. In this case, we'll teach you the best ways to tune your applications by demonstrating how not to tune them. Consider it a worst practices list. If you don’t care about wasting money on poorly tuned applications, or you’re running your z/OS system for charity, you needn’t bother reading any further.
Worst Practice #1: Fix Problems in Production Only
The first (but not the only) way to waste money on CPU is to fix a performance problem only when the application goes into production. Who cares about pre-production performance when you have a deadline to meet and a user to satisfy? It will sort itself out eventually. The end users, many of whom will be paying customers (think mobile apps), will endure slow response times and even lengthy outages. Your public relations staff will be extremely busy in the days that follow, explaining the situation to the press and responding to unpleasant comments on social media.
Accenture did a study some years ago in which it found that it can cost up to eight times more to fix a performance defect in production than in design. This is because it’s much easier and cheaper for a programmer to change code or infrastructure in the design, unit test or even system test phases. The associated costs increase with each development phase up to and including the production rollout. Production changes are more costly because they require change controls, code or infrastructure changes and more testing, all while your paying customers are waiting for the application to be available again.
Worst Practice #2: Don’t Use Tools
The Pareto Principle, also known as the 80-20 rule, is alive and well in performance. For example, 80 percent of your application CPU will be consumed by only 20 percent of the application code. Chances are, your DB2 performance issue is an SQL statement buried deep within the application. It probably isn’t hard to fix; it’s only hard to find. You might want to get some help, but then again, tools are for amateurs, right? You read dumps and love hex output. It may take you two or three days to find that needle in the haystack (once you’ve found the correct haystack), but it was productive time well spent. Just keep on truckin’ by the seat of your pants and let your competition fool around with those newfangled performance tools.
Worst Practice #3: Don’t Worry About Buffers
In the old days (circa 1964), buffers were a huge part of performance tuning. There wasn’t a lot of memory to go around, so programmers had to allocate buffers sparingly. Neil Armstrong got to the moon using a guidance computer with only about 4KB of writable memory! But today, we have so much memory and everything runs so fast, we no longer need to worry about such things as buffers. Except when we don’t use enough of them.
If you haven’t allocated enough buffers for a busy data set, for example, your I/O subsystem will gladly thrash, issuing many more EXCP calls than are needed to satisfy one logical request from the application. This is particularly true for VSAM indexing, where the default buffering is rarely sufficient for excellent performance. Each EXCP makes the response time longer, and eats up a little more CPU along the way. There are tools available that will indicate when buffering is inadequate, but your team can’t afford to spend time learning how to use them because they’re so busy fighting fires!
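For those who do care about buffers, a minimal sketch of one common fix: the AMP parameter on a JCL DD statement can override the default VSAM buffer counts without touching the application. The data set name and buffer counts below are hypothetical; appropriate values depend on the access pattern and index structure of your own data sets.

```jcl
//* Hypothetical DD statement raising VSAM buffer counts via AMP
//* rather than accepting the defaults.
//MASTER   DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR,
//            AMP=('BUFNI=10,BUFND=20')
//* BUFNI - index buffers; keeping index levels in storage
//*         cuts EXCPs for random (keyed) access
//* BUFND - data buffers; extra data buffers help
//*         sequential browsing
```

As a rough rule of thumb, random access benefits most from additional index buffers (BUFNI), while sequential processing benefits from additional data buffers (BUFND); buffer-pool tuning products and SMF data can tell you which applies to a given data set.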
Worst Practice #4: Fight Fires All the Time