Operations

Batch Automation as Infrastructure Optimization


The moment a business process is initiated, the underlying IT process takes over. From Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) systems to data warehouses and reporting tools, the demands of global business have added layers of complexity to business and IT processes alike. The accuracy and speed of IT are critical to bottom-line profitability. To compete effectively, an organization will commonly spend millions of dollars on software applications to handle its business processes, and hundreds of thousands more to deploy those applications into the corporate IT environment. This initial, planned investment, however, is often followed by a long line of unforeseen expenses. Once a process is initiated, organizations lose visibility into its execution as it enters the “black box” of the IT environment. The results are errant data, missed service levels, latency, and unplanned downtime.

Processes often span multiple applications and operating systems to provide required business functionality. For example, an online order submitted through a corporate Website might kick off a process flow that enters customer data into a CRM application, retrieves a credit rating via a Web Service call, checks warehouse inventory, executes a credit transaction, delivers shipping information, and then e-mails the customer to confirm the order. Throughout this process, exceptions, errors, and data events such as low inventory might trigger additional processes or put the entire process on hold.
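To make that flow concrete, here is a minimal sketch of the order process in Python. Every function name and threshold is hypothetical, a stand-in for the real CRM, credit, warehouse, billing, and e-mail systems; the point is the shape of the flow, in which any step can divert the order onto another path.

# A hypothetical order flow; every function below is a stub standing in
# for a real system (CRM, credit bureau, warehouse, billing, e-mail).
MIN_RATING = 600  # hypothetical credit threshold

def process_order(order):
    enter_customer_data(order)               # CRM application
    rating = fetch_credit_rating(order)      # Web Service call
    if rating < MIN_RATING:
        return hold_order(order, "pending credit review")
    if not check_inventory(order):
        trigger_replenishment(order)         # low inventory spawns another process
        return hold_order(order, "awaiting stock")
    execute_credit_transaction(order)
    deliver_shipping_information(order)
    email_confirmation(order)

# Stubs so the sketch runs end to end.
def enter_customer_data(order): print("CRM record created")
def fetch_credit_rating(order): return 720
def check_inventory(order): return True
def execute_credit_transaction(order): print("card charged")
def deliver_shipping_information(order): print("shipping info sent")
def email_confirmation(order): print("confirmation e-mailed")
def hold_order(order, reason): print(f"order held: {reason}")
def trigger_replenishment(order): print("replenishment process started")

process_order({"id": 1001})

Even in this toy form, the branching shows why a single business transaction can fan out across half a dozen systems, and why an unhandled exception in any one of them stalls the whole flow.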

Within a workflow tool, an experienced analyst might represent this entire business process flow as a clean and logical diagram, perhaps a series of boxes representing process components connected by simple black lines. Since the process has been thoroughly planned, it’s tossed over the wall for IT to execute. It seems straightforward enough; IT just needs to create the black lines that link it all together. But inventory might be stored in a Seattle warehouse running SAP Supply Chain Management while the billing center sits in Denver running Oracle Financials. Suddenly, those nondescript black lines in the business workflow represent critical IT integration touchpoints, and the way IT handles its black lines can directly affect whether the organization makes the Fortune 500 or files for Chapter 11.

Is “Good Enough” Really Good Enough?

Typically, a large IT organization uses tried-and-true methods of integration for its processes—custom scripting and human labor. These solutions work well enough to keep a company in business, but effectively, they cement processes in place. Emerging trends such as service-oriented architecture (SOA) and Business Process Management (BPM), corporate mergers and acquisitions, regulatory mandates such as Sarbanes-Oxley and the Health Insurance Portability and Accountability Act (HIPAA), and the increasing drive toward business-on-demand all require organizations to reassess their current integration methodology. Dated forms of integration are error-prone, time-intensive, and inflexible. Maintenance costs, downtime, lack of visibility, and poor audit functionality may have been the norm, but solutions now exist that optimize the application processing infrastructure for more efficient, accurate, and agile business process execution.

Over an application lifecycle, the piecemeal implementation and integration of enterprise solutions can equate to thousands of hours of systems integrators’ time just to get the software functional in its new environment. When the application goes live, not only are processes locked down by custom scripting, but they’re also heavily dependent on “swivel chair integration” and human handoffs to continue the process flow. This intervention introduces human error, latency, and the opportunity for malfeasance. While manual checkpoints might be required for some processes, the amount of labor that traditional integration techniques demand can quickly exceed expectations. When efficiency is the goal, any latency is too much. For example, users’ jobs can become more onerous when routine reports and functions require complex manual submissions, while the IT staff and consultants who worked so hard to script the application into production can be left maintaining monstrous script libraries for years to come.

The manual element impedes efficiency and leads to another problem with the mature application: limited flexibility. When even minor changes occur, such as moving a machine or changing a database login, dozens, even hundreds, of scripts must be updated. Adding new applications to the enterprise suite or upgrading existing ones may become as complex and expensive as the original ERP installation because of the mountains of scripts that must be created or updated. Extensive application customizations can effectively paralyze an enterprise when a vendor announces dropped support, since IT can’t upgrade the core software in a timely manner, and headcount turnover can leave an organization without the critical brain trust needed to adapt the existing scripted solutions.
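The fragility is easy to picture. In the hedged sketch below, connection details live in one shared configuration source instead of being hardcoded into each script; the section and key names are all hypothetical, but the pattern is what matters.

import configparser

# Hypothetical shared configuration. In practice this would live in one
# central file or a scheduler's variable store, not inline in each script.
SHARED_CONFIG = """
[billing_db]
host = dbserver01.denver.example.com
user = batch
"""

cfg = configparser.ConfigParser()
cfg.read_string(SHARED_CONFIG)

db = cfg["billing_db"]
print(f"connecting to {db['host']} as {db['user']}")

# Hardcoding the host and login into each of hundreds of scripts means a
# moved machine or changed password forces hundreds of edits; reading them
# from one shared source reduces that change to a single edit.

A commercial job scheduler or batch automation tool takes the same idea further, holding machine names, logins, and calendars in one definition store so the jobs themselves stay generic.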

A rigid IT application environment equates to a rigid business environment. A business process is only as flexible as the IT processes supporting it. When IT can’t adapt, business can’t adapt. And when that happens, the business becomes extinct.

The frantic pace of global business means products and entire industries are being commoditized. Today’s successful business must be able to rapidly change direction, but too often, the underlying IT organization can’t provide such responsiveness.

Imagine a hypothetical ERP rollout. We’ve scripted our solutions into production and have learned to live with disgruntled users and operations staff. Management isn’t too concerned if operations’ job is a little harder or if some people end up staying late a few nights. The scripted integration is expensive to maintain, but in general, it works.
