In a past job, I worked on simulating a call center operation with the goal of optimizing staff coverage. We would run various scenarios with different numbers of representatives under different system loads. From that data, we would determine how many representatives should be staffing the call center at various times of day.
The goal was to keep the workers as close as possible to 100 percent engaged while still leaving a small buffer for errors in the forecast. Too large a staff and the reps would sit idle waiting for calls. Too few and customers would spend too long on hold waiting to speak to a customer service representative (CSR), get frustrated, and hang up. Part of the simulation involved determining how long each customer spent speaking to an actual employee. We analyzed those times and fit distributions to them so the simulation could reproduce them accurately. Built into that distribution was the time spent processing the different business transactions. For example, the CSR would have to bring up a customer's record, find the item the customer wanted to order, process the method of payment, and then place the order.
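To illustrate the kind of distribution we fit, here is a minimal sketch in Python. The lognormal shape and its parameters are assumptions for this example, not the actual model we used; handle times are typically right-skewed, which makes a lognormal a common starting point.

```python
import random
import statistics

random.seed(42)

# Illustrative log-scale parameters (minutes); a real model would be
# fit to the observed call records rather than chosen by hand.
MU, SIGMA = 1.6, 0.5

def sample_handle_times(n_calls):
    """Draw simulated talk times (in minutes) for n_calls customers."""
    return [random.lognormvariate(MU, SIGMA) for _ in range(n_calls)]

times = sample_handle_times(10_000)
print(f"mean handle time: {statistics.mean(times):.1f} min")
print(f"95th percentile:  {sorted(times)[int(0.95 * len(times))]:.1f} min")
```

Sampling from a fitted distribution like this, rather than replaying the raw records, lets the simulation generate arbitrarily many scenario runs while preserving the shape of the observed data.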
Looking back, what we were doing was optimizing call center coverage based on the performance of those executed business transactions. One transaction would search for and return the customer data or open a new account; another would find the item; another would check stock; and another would execute payment. Each of those transactions had to traverse the systems of engagement to reach the systems of record. The system of record was an IBM mainframe running CICS with a DB2 back end. The distribution of call times we created carried no information about whether the transactions were efficient or inefficient, and we didn't know how long each transaction took. We only had raw data that said Customer X picked up the phone at time Y and stayed on the phone for Z amount of time.
Based on the simulation, we knew that a certain number of call center employees were needed at different times of the day, and the call center could then staff to those levels. Today, I think back on those exercises and realize we stopped the project too soon. Had we continued, we could have gone a few steps further and optimized the IT applications used to access the data, or at least examined them to determine whether they were efficient. If we could have followed those transactions as they spanned multiple systems and seen what they were doing on the mainframe, maybe they could have been optimized to execute faster. Think about what effect that would have had on the simulation and the forecasting of call center staff. If the transactions execute faster, the customer stays on the phone for less time, which means fewer CSRs can service the same load of customers. Fewer CSRs means less overhead and a potentially significant cost savings for the company.
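The handle-time-to-staffing relationship can be quantified with the standard Erlang C queueing model, a common industry tool for this kind of calculation. Our simulation was more detailed than this, and the call volumes and service-level target below are purely illustrative, but the sketch shows the effect: shave a minute off the average handle time and the required head count drops.

```python
from math import factorial

def erlang_c(agents, load):
    """Probability an arriving call must wait (Erlang C formula)."""
    inv_b = sum((load ** k) / factorial(k) for k in range(agents))
    top = (load ** agents) / factorial(agents) * agents / (agents - load)
    return top / (inv_b + top)

def agents_needed(calls_per_hour, aht_minutes, asa_target_s=30):
    """Smallest agent count whose average speed of answer meets the target."""
    load = calls_per_hour * aht_minutes / 60.0   # offered load in erlangs
    c = int(load) + 1                            # need c > load for stability
    while True:
        # Average speed of answer = P(wait) * AHT / spare capacity.
        asa_s = erlang_c(c, load) * aht_minutes * 60 / (c - load)
        if asa_s <= asa_target_s:
            return c
        c += 1

# Illustrative comparison: same call volume, one minute faster handling.
print(agents_needed(200, 6))   # 6-minute average handle time
print(agents_needed(200, 5))   # 5-minute average handle time
```

The comparison at the bottom is the point of the story: the second figure is smaller, so every second trimmed from the back-end transactions flows directly into the staffing forecast.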
The moral of this story is that application performance is often treated as an IT problem, but it is really a business problem. Poor performance costs the business revenue. Studies show, and my model demonstrated, that optimizing performance increases revenue.