Oct 1 ’06
Job Scheduling: More Important Than Ever
For more than 20 years, mission-critical data processing has relied on the automated initiation, execution, management, integration, and recovery of batch-mode IT processing—scripts, jobs, tasks, and other non-interactive IT processes that manipulate a specific data set. These processes may be application-oriented, such as accounting, payments, supplier management, purchasing, ordering, fulfillment or data mining; or system-oriented, such as data backup, defragmentation, migration, export, transfer, or load operations. Typically, they process multiple data items (accounts, orders, transactions, database records, etc.) and return results as a summary or detail report. This is in contrast to real-time or online operations, which will typically process one item at a time before returning a single result to a specific online user.
Such processing has many different names, including production management, process automation, workload management, process workflow, and batch execution. However, most people with mainframe experience will know it as job scheduling, and 20 years on, it’s more important than ever. While it has a pedigree longer than most technologies, this isn’t the same old job scheduling of 1980. It’s packed with new features such as Web services Application Programming Interfaces (APIs), service-based scheduling, cross-platform and multi-tier compatibility, complex event triggering, and messaging interfaces. It’s being used as a foundation technology for e-commerce, supply chain management, grid and cluster support, and virtualization implementations.
So what’s driving the continued importance of job scheduling? Of course, enterprises are always looking to lower the cost of doing business, and scheduled processing for batch jobs lets a single user process a large volume of like transactions with minimal interaction, accomplishing more work in a day than with interactive, online processing. Job scheduling lets multiple processes execute unattended, reducing the cost of managing data center operations such as submitting these processes, monitoring them, checking that they complete normally, etc.
Business processes, especially long-running but not time-critical processes, are more efficient when executed offline and when executed on high-throughput enterprise servers (such as System z). Users can submit long-running jobs and move immediately on to other work while they execute in the background. It also can reduce load on smaller systems during peak online usage, resulting in better online performance. So cost reduction and business efficiency continue to be strong drivers, as they’ve always been. However, there are a variety of relatively new drivers, too.
For example, packaged business applications, whether on mainframe, Unix, Linux or Windows, invariably need a sophisticated job scheduling infrastructure. Enterprises highlight the inability of built-in schedulers to handle the complexity of a real-life workload, such as end-of-month accounting. Problems include resolving contention between jobs and departments, sequencing jobs properly, adequately detecting and correcting scheduling and process errors, integrating with other application instances or external systems, and managing resource allocation.
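Sequencing jobs properly is, at heart, a dependency-resolution problem. As an illustration only (the job names and dependencies are hypothetical, not any product’s actual schedule), a minimal Python sketch of ordering an end-of-month batch by its dependencies:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical end-of-month accounting batch: each job maps to the set of
# jobs that must complete before it can run.
deps = {
    "close_ledger": set(),
    "run_accruals": {"close_ledger"},
    "dept_reports": {"run_accruals"},
    "consolidate": {"run_accruals"},
    "month_end_report": {"dept_reports", "consolidate"},
}

# static_order() yields a valid execution sequence: every job appears
# only after all of its prerequisites.
order = list(TopologicalSorter(deps).static_order())
```

A real job scheduler layers much more on top of this (contention management, error recovery, resource allocation), but the sequencing core is the same idea.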
For custom applications, job scheduling also plays an important role. Organizations that invest heavily in custom application development view their proprietary IT capability as a competitive differentiator. Most of this value is in unique online components, but some event-driven and timer-based processing is invariably required to manage application infrastructure and perform routine maintenance (e.g., database management, configuration tuning, data archiving, file transfer, etc.), and provide offline business processing. Integrating off-the-shelf job scheduling capabilities lets proprietary development focus on competitive features, and deliver them faster. This is especially true for the newest breed of homegrown applications, developed with newer technologies such as Enterprise JavaBeans (EJBs), Java 2
Enterprise Edition (J2EE), Microsoft .NET, Web services, etc. With these complex, multi-tiered applications, job scheduling can make good applications better, by extending their functionality without complex programming; for example, using a batch-driven Extract, Transform, and Load (ETL) process to feed financial data into a custom reporting facility, or even into more sophisticated Business Intelligence (BI) engines.
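As an illustrative sketch only (the file layout, table name, and function are invented, not any particular product’s API), such a batch-driven ETL step might look like:

```python
import csv
import sqlite3

# Hypothetical nightly ETL job: extract raw transactions from a CSV export,
# transform them into per-account totals, and load them into a reporting table.
def run_etl(csv_path: str, db: sqlite3.Connection) -> None:
    totals: dict[str, float] = {}
    with open(csv_path, newline="") as f:            # Extract
        for row in csv.DictReader(f):
            acct = row["account"]
            totals[acct] = totals.get(acct, 0.0) + float(row["amount"])  # Transform
    db.execute("CREATE TABLE IF NOT EXISTS daily_totals (account TEXT, total REAL)")
    db.executemany("INSERT INTO daily_totals VALUES (?, ?)", totals.items())  # Load
    db.commit()
```

A job scheduler would run a step like this unattended (e.g., overnight, after the source extract lands), so the reporting facility always opens with fresh data.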
With Web services, it’s clear there’s significant opportunity for job scheduling to serve as an extension for the Enterprise Service Bus (ESB). Most vendors don’t currently support a true Service-Oriented Architecture (SOA) environment, but this is likely to change over time. There are two vectors for job scheduling in an SOA environment: as a service used by other applications, and as a service that controls other applications. Applications exposed as services could as easily be controlled in offline mode by a job scheduler, just as they’re controlled in online mode by another application or calling process.
SOA has the potential to make process scheduling more relevant, in the same way Enterprise Resource Planning (ERP) applications did. The expectation for the ESB is to deliver a real-time, dynamic, responsive transport layer for accessing a set of common application services from any online application. The reality is likely to be similar to ERPs: The initial hype and expectation will be online, but eventually will come the realization that offline, batch-mode processing is more efficient for any process that doesn’t need to return results immediately to a user at an online screen. There’ll likely be an online ESB, responding immediately to user events (mostly during online business hours), and an offline ESB, holding user events in batches until they can be processed most efficiently (e.g., overnight or during off-peak periods).

At the intersection of these two vectors, job scheduling functions are exposed as services that control other common application services. Effectively, this becomes a scheduled service broker, where an application can ask a job scheduling solution to perform a process at a specific time, based on a specific event, or dependent on specific conditions. The job scheduling solution holds the request until the condition is fulfilled, then executes the service as requested. It may or may not pass results back to the calling service, or indeed to any other service, to form a services-oriented workflow.
This offline ESB could provide an extremely efficient way of moving large volumes of data between services, and for executing service-oriented processes that aren’t time-critical.
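A minimal Python sketch of the scheduled-service-broker idea described above (all class, method, and variable names are invented for illustration; real products expose this through their own service APIs): callers register a service request plus a condition, and the broker holds each request until its condition is fulfilled, then executes the service.

```python
class ScheduledBroker:
    """Holds service requests until their triggering condition is met."""

    def __init__(self):
        self.pending = []  # list of (condition, service, args) tuples

    def submit(self, condition, service, *args):
        """Register a request: run `service(*args)` once `condition()` is true."""
        self.pending.append((condition, service, args))

    def poll(self):
        """Check all held requests; execute those whose condition is now met."""
        still_waiting, results = [], []
        for condition, service, args in self.pending:
            if condition():                      # event/condition fulfilled?
                results.append(service(*args))   # execute the requested service
            else:
                still_waiting.append((condition, service, args))
        self.pending = still_waiting
        return results
```

In practice the condition might be a timer, a file arrival, or a message on a queue, and `poll` would run inside the scheduler’s event loop rather than being called by hand.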
Job scheduling also plays a critical role in integrating these new applications and platforms with existing systems such as System z. The common interface enables one skill-set to control operations on new or different platforms. For example, Unix specialists can operate System z, and System z specialists can manage Unix or iSeries. Job scheduling also gives many organizations Electronic Data Interchange (EDI)-like integration points with parties outside the organization, such as suppliers and agents, by providing secure, Web-based interfaces to internal processes such as inventory and production.
Compliance with regulations and best practice standards is another major driver. Job scheduling can provide essential control and audit capabilities for access to sensitive systems and information, whether in healthcare, finance, or government. Job scheduling solutions can formalize and standardize system and application processes, by transforming ad hoc and manual operations into tightly structured procedures that are centrally secured, controlled, and audited.
For example, scheduling batch jobs for system maintenance allows administrators to manage data files at arm’s length, without having any personal access to them. This functional isolation helps ensure data integrity and enables regulatory compliance. In one example of how job scheduling addresses compliance, consider a Wall Street trading firm and how they must meet settlement deadlines to avoid potential breach of contract risks and adhere to strict Securities and Exchange Commission (SEC) regulations regarding settlement times. Their risks range from customer service complaints, through litigation payments and fines, to delisting and criminal prosecution. However, they certainly don’t want to settle too early, as they want to keep as much cash on hand as possible to maintain other positions and to accrue short-term interest. They can’t afford human errors, just as they can’t afford exposure to processing failures. This firm relies heavily on job scheduling to conduct settlement transactions on specific, strategic timetables.
Job scheduling also addresses specific service-level issues. Many manufacturers use event-driven job scheduling to help with Just-in-Time (JIT) inventory and stock management. In manufacturing, there’s a delicate balance of inventory (both inputs and outputs) with production. Excess inventory reduces profitability by increasing the costs of processing, warehousing, and administration, and by reducing cash flow. Avoiding this problem by keeping inventory down often leads to under-fulfillment: not enough inventory to meet customer demand and production requirements, which leads to production lines standing idle, workers being paid to produce nothing, and customers being lost as orders go unfilled. This delicate equation is addressed by JIT processing, where inventory levels are set by real-time events such as processing capacity and stock depletion, and production is driven by real-time events such as customer orders and invoice processing. With event-driven job scheduling, manufacturers take existing customer orders in real-time and drive them through the entire value chain to automatically fulfill, but not exceed, customer demand.
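A minimal sketch of such an event-driven trigger (the item names, reorder points, and replenishment sizing rule are all invented for illustration): a stock-depletion event fires a replenishment job sized to actual demand rather than a fixed batch.

```python
# Hypothetical reorder points per item; in practice these would come from
# the inventory system, not a hard-coded table.
REORDER_POINT = {"widget": 100, "gear": 50}

def on_stock_event(stock: dict, item: str, used: int, submit_job) -> None:
    """React to a stock-depletion event; fire a production job if needed."""
    stock[item] -= used
    if stock[item] < REORDER_POINT[item]:
        # Illustrative sizing rule: replenish up to twice the reorder point,
        # so the job quantity tracks how far stock has actually fallen.
        submit_job(item, REORDER_POINT[item] * 2 - stock[item])
```

Here `submit_job` stands in for handing the work to the job scheduler; the point is that production runs are triggered by real consumption events, not by a fixed calendar.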
As with many IT automation systems, job scheduling provides significant benefits as smaller companies grow, either organically or through Mergers and Acquisitions (M&A). Automation in general and job scheduling in particular let companies add new applications and handle larger customer volume without a linear increase in headcount. Companies going through M&A also benefit from being able to integrate systems quickly and easily with minimal disruption to existing processes. For growing companies, job scheduling provides an ability to meet market requirements faster with packaged batch processing. Whether companies deploy new applications, Internet-based applications, J2EE or Web services, every new application and server needs a common set of processes from day one—such as data archiving, backup and recovery, and user management, to ensure things don’t break. Job scheduling systems can automatically provide this baseline maintenance, bringing applications online faster, and with a lower setup cost.
Here are some recommendations, gathered from interviews with vendors, enterprises, and system integrators, for implementing job scheduling solutions:
- Make sure you’re adequately prepared. This includes assessing the environment, quantifying the deployment, getting business and executive buy-in, assembling the right skills, and determining your ROI and Total Cost of Ownership (TCO) goals beforehand to ensure you don’t over-deliver.
- Develop baseline implementation standards that will fit long-term needs, across multiple environments (test, development, acceptance, production). Remember to plan and regularly test a failover environment, too.
- Implement in measurable, achievable phases, and continually review your progress. Consult with experts such as user groups and professional services, and listen critically to the vendor, as you start and complete individual phases.
- Apply the same discipline as you would to an application development project. Use an object-oriented approach, implementing reusable definitions and processes wherever possible. After implementation, establish a continual review process.
- Avoid the trap of thinking, “This is the way we do things, so this is how we will continue to do things.” Focus on the results and business goals, understanding what’s happening and why, while realizing that new systems won’t do things the same way they’ve always been done.
- Beware of custom code: both your own and the vendor’s. You’ll sometimes be unable to avoid customization, but you should try to deploy out-of-the-box functionality.
- Make sure you understand the full investment cost upfront. There are often “hidden” costs, too, such as the costs of additional software, hardware, resources, or upgrades.
- Avoid the assumption that everything is critical. Make a distinction between “important” and “critical.” Ask yourself, “If this process doesn’t complete, can I still do business?” Schedule your processes accordingly.
- Make sure your chosen solution will fit your existing and planned technology environment. Environmental support (or lack thereof) can have a significant impact on implementation cost and long-term ROI and TCO.
Job scheduling can provide a wide range of benefits for IT and users. From typical mainframe process automation to sophisticated support for J2EE, EJB, messaging and SOA applications, job scheduling continues to prove its relevance. If vendors and enterprises recognize that, job scheduling will further expand its business relevance and value and remain a core part of IT process automation for another 25 years.