Jan 1 ’04

The Event-Driven Enterprise

by Editor in z/Journal

Organizations trying to become agile, real-time enterprises often don’t notice that most of their critical business events lie dormant, locked within the safe confines of their complex operating systems. Event-driven architecture offers the promise of unlocking business potential in a world where business processes and their dependencies on business events occur without latency. Imagine a corporation where:

You’re imagining a corporation that has successfully implemented event-driven architecture.

Since a business event is, loosely, a change or an act in a business state, the information flow is asynchronous, or push-oriented: it originates at the source of the event and is published out to the enterprise. This allows for a new, more virtual view of the corporate enterprise, one that expands beyond the hardened corporate boundary to encompass every other business entity that event-driven business processes could touch.


Traditional methods of processing across enterprise applications are supported by multi-point human data entry, phone, fax, mail, and a host of other latency-flawed technologies. They involve linear batch processing and redundant paper reporting, data entry, and manual exception processing carried out in concert with disparate business units. Zero latency, in reality, becomes days and perhaps weeks, so that information shared between business units is rendered irrelevant. Because it can improve upon these latency-flawed systems, enterprise application integration (EAI) has become a mainstream discipline that’s attractive to most CIOs.

Initially seen as a way of synchronizing important data under the control of various applications, EAI has evolved into a platform to streamline business processes and improve efficiency by eliminating the friction that occurs inside businesses as a consequence of data inconsistencies. Furthermore, with the advent of ubiquitous Internet connectivity, it has been used to ensure that key applications have the accuracy necessary to encourage confident use by customers, suppliers, and partners.

Most integration initiatives have been pull- or request/reply-oriented. This means the timing of the integration process is controlled by the integration platform itself: the platform requests, or pulls, information from the underlying application systems to make it available to enrich a target application system, and the pull occurs at predefined intervals. Such initiatives have delivered considerable business value, but they are only the first step in making organizations more agile. To further reduce friction inside the business process, business and integration processes must often be triggered by events occurring in the underlying systems in near real-time. This approach is known as event-driven integration. An integration strategy that embraces an event-driven approach can deliver consistent information across the enterprise far more rapidly. Often, event-driven integration is the only way to ensure consistency, since many events have meaning only if captured at the precise moment they occur.

An often-cited example is that of stock prices. Repeatedly pulling a quote from an underlying system that tracks the price movements is almost guaranteed to miss certain important price changes due to float—the time between initially retrieved information and its availability for processing. If a price moved from $1.87 to $1.88 and back to $1.87 between invocations of the price quote request from the integration platform, for all intents and purposes, the price movement never occurred from the perspective of the integration platform. One can think of many other examples of such time-sensitive events.
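The stock-price example above can be simulated in a few lines. This is an illustrative sketch, not any particular platform's API: polling at a fixed interval samples a hypothetical price history and misses the transient move, while push-style delivery sees every state change.

```python
# Hypothetical price history, one value per tick: the price blips from
# 1.87 to 1.88 and back between two polling intervals.
PRICE_HISTORY = [1.87, 1.87, 1.88, 1.87, 1.87, 1.87]

def poll_changes(history, interval):
    """Request/reply style: sample the price every `interval` ticks and
    report a change only when consecutive samples differ."""
    samples = history[::interval]
    return [(a, b) for a, b in zip(samples, samples[1:]) if a != b]

def push_changes(history):
    """Event-driven style: the source publishes every state change
    at the moment it occurs."""
    return [(a, b) for a, b in zip(history, history[1:]) if a != b]

polled = poll_changes(PRICE_HISTORY, interval=3)  # samples ticks 0 and 3
pushed = push_changes(PRICE_HISTORY)

print(polled)  # [] -- the blip to 1.88 fell between polls
print(pushed)  # [(1.87, 1.88), (1.88, 1.87)]
```

From the polling side, the price movement never happened; the push side captured both legs of the move. No polling interval short of every tick closes this gap.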

For the largest organizations around the world, implementing a strategy that depends on event-driven integration would mean capturing events that occur within mainframe systems. The IBM zSeries mainframe remains the primary processing platform for the transactional and batch workload of the Fortune 500. Changing the billions of lines of proprietary programming that underpin these transactional and batch applications isn’t for the faint-hearted. Implementing event-driven integration in support of mainframe events requires a non-invasive approach if it’s to have any chance of success.


Many of us have experienced the frustration or embarrassment of having a credit card transaction rejected. Often, it’s the result of an error of financial timing on the part of the individual or institution. Such occurrences represent opportunities for financial companies to raise customers’ credit limits or offer other financial services to ensure the event doesn’t recur. There are many other examples where business events present opportunities for companies to improve efficiency, if the event can be captured and presented to a system designed to handle it. For many companies, however, significant delays separate the event’s occurrence from its recognition by the department empowered to capitalize on it.

Consider the customer retention value if the customer involved in the unsuccessful credit card transaction immediately receives a text message on a mobile phone or an e-mail. The message would indicate the financial institution is looking into the event to determine how best to improve service in the future. Many mobile telephone companies have started implementing similar systems to support directory inquiry interactions. The simple process of requesting a telephone number is an event that results in a text message being sent to the customer confirming the number.
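A minimal publish/subscribe sketch (all names here are hypothetical) shows the shape of such a system: the authorization system publishes a declined-transaction event, and a customer-care subscriber reacts by queuing a message, without the authorization code knowing anything about the notification step.

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event bus: topics map to lists of handlers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

outbox = []  # stands in for an SMS or e-mail gateway

def notify_customer(event):
    outbox.append(
        f"To {event['customer']}: we noticed your card was declined "
        "and are looking into how to serve you better."
    )

bus = EventBus()
bus.subscribe("transaction.declined", notify_customer)

# The source system simply publishes the business event as it happens;
# any number of departments can subscribe without touching the source.
bus.publish("transaction.declined", {"customer": "555-0100", "amount": 42.50})
```

The decoupling is the point: adding a second subscriber, say, a credit-limit review process, requires no change to the system that raised the event.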

The value of an event can decrease in direct proportion to its float time, depending on the event’s nature and importance. For certain events, anything other than instantaneous processing is unacceptable. For others, hours or even days can pass with no appreciable loss of business value. Event-driven integration is essential for the former and can present interesting new business opportunities for the latter.

In any complex system supporting many constituents, huge numbers of events occur daily. The event itself is defined by the use to which it can be put. With sufficiently versatile event-detection technology, virtually all execution steps in existing application environments can be considered an event if business value can be derived from encapsulating the execution step as an event. For a directory inquiry, the execution step may be sending the data to the operator’s workstation in response to the request. When one considers that any screen interaction of a legacy system may be detectable as an event, the possibilities for deriving business value from event-driven integration grow exponentially.
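The idea that any screen interaction can be promoted to an event can be sketched as a filter over a stream of interactions. Everything below is hypothetical: a detector inspects each interaction and emits a business event only when a rule says the step carries business value.

```python
def detect_events(interactions, rules):
    """Turn raw screen interactions into business events.

    `interactions` is an iterable of dicts describing screen traffic;
    `rules` maps an event name to a predicate over one interaction.
    """
    events = []
    for interaction in interactions:
        for event_name, matches in rules.items():
            if matches(interaction):
                events.append({"event": event_name, **interaction})
    return events

# Simulated traffic from a directory-inquiry system.
screen_traffic = [
    {"screen": "DIR_INQ", "caller": "555-0100", "number": "555-0199"},
    {"screen": "LOGIN", "operator": "op7"},
    {"screen": "DIR_INQ", "caller": "555-0101", "number": "555-0142"},
]

# Only inquiry screens are deemed business events; the login is ignored.
rules = {
    "directory_inquiry": lambda i: i.get("screen") == "DIR_INQ",
}

events = detect_events(screen_traffic, rules)
print(len(events))  # 2 -- each inquiry became an event, with no change
                    # to the legacy application itself
```

Each detected event could then feed the confirmation text message described above, which is exactly the non-invasive property the article argues for: the rule lives in the detector, not in the legacy code.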


Event-driven integration can be a difficult concept to grasp, but it has profound consequences for an organization’s efficiency and its ability to react quickly to opportunities. Several application types derive distinct benefit from an event-driven enterprise architectural approach:


An event is an abstract concept. Only the effects of the event can be seen. In computing terms, events mostly (but not exclusively) manifest themselves by a state change of some underlying data item. When we talk about event-driven integration, we’re really discussing an event in one application system triggering action in another. In technical terms, this may mean replicating the state change, or some derivative thereof, of the underlying data element that represents the event on another application system. This sounds like data replication, a well-understood technology. However, there are four key differences:


Consider that 470 of the Fortune 500 companies process more than $22 billion in transactions on mainframes every day. Clearly, many business events occur within mainframe systems, and integrating these events with EAI, business process management (BPM), or business activity monitoring (BAM) technologies greatly increases the value of those technologies. Many events within mainframe applications can be identified by the changes that occur in underlying databases. However, just as many events don’t change database state. Examples are found mainly within inquiry-type systems, where database state changes may not be granular enough to carry the business event, so no meaning can be derived at the database level. For example, a sudden rush of inquiries about pricing on a particular item may mean demand is about to surge; pre-empting such a surge could improve customer satisfaction and generate more revenue. However, inquiries rarely leave a database footprint.
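The database-footprint limitation can be demonstrated with an in-memory SQLite database standing in for a mainframe database, and a trigger standing in for change capture. Updates leave a row in an event table; an inquiry (a SELECT) leaves nothing, so database-level detection never sees it.

```python
import sqlite3

# In-memory database as a stand-in for a mainframe data store; the
# trigger plays the role of database-level change capture.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE event_log (event TEXT, account_id INTEGER);
    CREATE TRIGGER capture_update AFTER UPDATE ON account
    BEGIN
        INSERT INTO event_log VALUES ('balance_changed', NEW.id);
    END;
    INSERT INTO account VALUES (1, 100.0);
""")

conn.execute("UPDATE account SET balance = 75.0 WHERE id = 1")  # state change
conn.execute("SELECT balance FROM account WHERE id = 1")        # inquiry

events = conn.execute("SELECT event FROM event_log").fetchall()
print(events)  # [('balance_changed',)] -- the inquiry left no footprint
```

The update is captured; the pricing-inquiry class of event is invisible here, which is why the article argues for detection above the database layer.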

There are many other examples where the existence of an event cannot be discerned at the database level. That’s why early attempts at event-driven integration in mainframe systems have focused on application changes that put messages onto queues for transport to the target system. Such a strategy relies on the organization having the stomach to embark on wholesale modification of legacy code, a rather unpalatable prospect for many. Most mainframe applications are considered sacrosanct: few modifications are permitted because skills are scarce and source code is sometimes unavailable. Relying on database changes as an event trigger, meanwhile, reduces the number of events that can participate in wider integration strategies and is fraught with shortcomings.

Mainframe databases and subsystems were designed in an era when integration with other systems was rarely a consideration. IBM has made many improvements in interoperability in recent years, but the company focuses entirely on request/reply-type integration and data replication. As we’ve seen, this doesn’t address the needs of the many integration projects in which mainframes participate. Some technologies have begun using the data replication method to deliver limited event-driven integration; however, the issues of latency and limited visibility mean they represent only partial solutions.


There are various components that must exist to support event-driven integration. Collectively, these are often referred to as the event-driven architecture:


Event-driven architecture represents a fundamental shift in the way application integration initiatives are viewed. Instead of relying upon complex programming techniques within EAI tools for managing the timing of new business processes, or entering into expensive, risky modification of underlying application code, event information can now be dynamically, non-invasively managed at the source of the business event. 

This simple shift has profound effects on the latency of newly integrated business processes. However, underlying the simplicity of this description is a sophisticated new category of software infrastructure designed to make the identification, capture, and publishing of business events a realistic proposition for integration professionals. Implementing event-driven architecture successfully can deliver on the promise of EAI by eliminating latency and presenting new business opportunities by exposing business events previously hidden from view. Z