Many of us have experienced the frustration or embarrassment of having a credit card transaction rejected. Often, it’s the result of an error of financial timing on the part of the individual or institution. Such occurrences represent opportunities for financial companies to raise customers’ credit limits or offer other financial services to ensure that such an event doesn’t recur. There are many other examples where business events present opportunities for companies to improve efficiency, if the event can be captured and presented to a system designed to handle it. However, in many companies, a significant delay occurs between an event happening and its recognition by the department empowered to capitalize on it.
Consider the customer retention value if the customer involved in the unsuccessful credit card transaction immediately receives a text message on a mobile phone or an e-mail. The message would indicate the financial institution is looking into the event to determine how best to improve service in the future. Many mobile telephone companies have started implementing similar systems to support directory inquiry interactions. The simple process of requesting a telephone number is an event that results in a text message being sent to the customer confirming the number.
The value of an event can decrease in direct proportion to its float time, depending on the event’s nature and importance. For certain events, anything other than instantaneous processing is unacceptable. For others, hours or even days can pass with no appreciable loss of business value. Event-driven integration is essential for the former and can present interesting new business opportunities for the latter.
In any complex system supporting many constituents, huge numbers of events occur daily. The event itself is defined by the use to which it can be put. With sufficiently versatile event-detection technology, virtually all execution steps in existing application environments can be considered an event if business value can be derived from encapsulating the execution step as an event. For a directory inquiry, the execution step may be sending the data to the operator’s workstation in response to the request. When one considers that any screen interaction of a legacy system may be detectable as an event, the possibilities for deriving business value from event-driven integration grow exponentially.
WHY IT’S IMPORTANT
Event-driven integration can be a difficult concept to grasp, but it has profound consequences for an organization’s efficiency and its ability to react quickly to opportunities. Several application types derive distinct benefit from an event-driven enterprise architectural approach:
- EAI: In the late 1990s, EAI emerged as a technology and a discipline aimed at linking the increasing array of disparate applications found in most organizations. Vendors such as Vitria, Active Software, New Era of Networks, and TIBCO developed products that would reach into existing systems in search of changed data items. Any qualifying changes were routed to target systems based on criteria defined in the software. Most early implementations were aimed at synchronizing databases more rapidly than could be achieved with data replication technology. A key element of many successful systems was the use of powerful messaging systems such as WebSphere MQ (WMQ). WMQ would provide rapid delivery assurance for changes that needed to be published around the network. EAI vendors sought to differentiate themselves by offering adapter technology for the most common database and application systems. Often, database polling helped detect events that would trigger a process inside the EAI tool.
- BUSINESS PROCESS MANAGEMENT (BPM) augments EAI by adding a layer of abstraction that lets non-technical users redefine business processes based on integration of state changes across various platforms. It’s highly dependent upon robust event-detection technologies, since its usage metaphor is the graphical representation of a business process with a clear initiation point. Event-driven integration is used to trigger a business process and may be used within the process itself for long-running business transactions. A new standard has emerged, known as J2EE Connector Architecture (J2CA), specifically to allow BPM and EAI technology to easily define events important to the process. A sound architecture for event-driven integration must support event definition based on J2CA.
- BUSINESS ACTIVITY MONITORING (BAM) is focused on delivering real-time information on the health of the company’s business processes to executives. BAM products typically appear as a graphically rich presentation layer, backed with sophisticated event correlation technology. BAM borrows heavily from the systems management world, where event detection and correlation has become the de facto method for rapidly diagnosing system problems. BAM exploits a similar concept at the business level. It relies upon timely notification of business-related events and performs customizable analysis of those events to determine how well the business processes within the company are performing.
- DATA SYNCHRONIZATION: Some integration needs have no grander goal than the simple synchronization of data between disparate database systems. This requirement is a close cousin of data replication and differs only in latency and recovery requirements. Event-driven integration can be used to rapidly publish database changes to a remote database platform, where they can be processed and loaded into the target system.
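The database-polling approach used by early EAI adapters to detect changed data items can be sketched in a few lines. This is a minimal illustration, assuming an in-memory table and a per-row version counter; the names and structures are hypothetical, not any vendor’s adapter API.

```python
# Minimal sketch of polling-based change detection, in the style of
# early EAI adapters polling a source database. All names illustrative.

def poll_changes(table, last_versions):
    """Compare each row's version counter against what was seen on the
    previous poll; return rows that changed and update the bookmark."""
    changes = []
    for key, row in table.items():
        version = row["version"]
        if last_versions.get(key) != version:
            changes.append({"key": key, "data": row["data"]})
            last_versions[key] = version
    return changes

# One poll cycle: cust-2 has changed since the last poll, cust-1 has not.
table = {
    "cust-1": {"version": 3, "data": {"limit": 5000}},
    "cust-2": {"version": 7, "data": {"limit": 12000}},
}
last_versions = {"cust-1": 3, "cust-2": 6}
events = poll_changes(table, last_versions)
```

A real adapter would hand each detected change to the EAI tool’s routing engine; the polling interval directly determines the latency discussed below.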
EVENT-DRIVEN INTEGRATION VS. DATA REPLICATION
An event is an abstract concept. Only the effects of the event can be seen. In computing terms, events mostly (but not exclusively) manifest themselves as a state change of some underlying data item. When we talk about event-driven integration, we’re really discussing an event in one application system triggering action in another. In technical terms, this may mean replicating the state change, or some derivative thereof, of the underlying data element that represents the event on another application system. This sounds like data replication, a well-understood technology. However, there are four key differences:
- TIMING: Event-driven integration focuses on rapidly getting the information that describes the event to the target platform. It should certainly occur fast enough to ensure that the value of the event isn’t degraded; in practical terms, this means as soon as the transaction that created the event has completed. Data replication isn’t concerned with real-time propagation of underlying state changes. Most data replication solutions periodically batch up changes and ship them to a target platform at predefined intervals, which may range from hourly to daily.
- READ-ONLY EVENTS: Many events don’t change underlying database or file systems in any way. A record lookup for a customer may be an important event from various perspectives, but an organization relying on data replication to identify such an event will be disappointed. One need only consider Web and email marketing to see the value of capturing read-only events. Many Web applications can detect when particular individuals visit Websites merely to peruse what’s presented, and this information can be used for automated follow-up or content personalization.
- ENRICHMENT: Often, information describing the event can be enriched by incorporating other environmental data simultaneously. The state change (if there is one) of the underlying data item may simply be inadequate to provide maximum value to the event consumer. It may be that factors such as time of day, network source, and operator identification—not reflected in any way in changes to the underlying data item—are required to fully qualify the event. Data replication tools provide limited support for such requirements since they need only replicate the changes to data to meet their objectives. Event-driven integration has message enrichment as a key requirement since environmental factors, present at the time of the event, can often provide essential context necessary to make the event truly consumable.
- TARGET ENVIRONMENT: Data replication solutions target other database systems for their captured changes. Often, they provide turnkey solutions to load the changes, according to certain rules, into the target systems. The package of information carried by a replication solution is normally a large batch of related changes that can be rapidly and efficiently loaded into a target database system. Event-driven integration, by contrast, targets application types where the data describes a single event and must be easily consumable by an EAI platform or BPM tool. Data replication solutions will push information from the source platform in a form optimized for loading into a target database system.
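To make the enrichment difference concrete, here is a minimal sketch of a rule-driven enrichment step that attaches environmental context to a raw event. The field names and rule format are illustrative assumptions, not any product’s schema.

```python
def enrich_event(raw_event, context, rules):
    """Apply enrichment rules: copy the raw event data and attach the
    environmental fields that the rules name from the ambient context."""
    event = dict(raw_event)
    for field in rules:
        event[field] = context[field]
    return event

# Raw receptor data plus environmental context captured at event time.
raw = {"type": "price-inquiry", "item": "SKU-1042"}
context = {
    "operator": "OP77",
    "network_source": "10.1.2.3",
    "timestamp": "2004-06-01T09:15:00",
}
# Rules for this event type: operator and time matter, source does not.
rules = ["operator", "timestamp"]
enriched = enrich_event(raw, context, rules)
```

A replication tool would carry only the raw state change; the enrichment step is what makes the event fully consumable by the target.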
BUSINESS EVENTS IN MAINFRAME SYSTEMS
Consider that 470 of the Fortune 500 companies daily process more than $22 billion of transactions on mainframes. Clearly, many business events occur within mainframe systems, and integrating these events with EAI, BPM, or BAM technologies greatly increases the value of those technologies. Many events within mainframe applications can be identified by the changes that occur in underlying databases. However, just as many events don’t change database state. Examples can be found mainly within inquiry-type systems, where any database state changes are too coarse-grained to reveal the business event, so no meaning can be derived at the database level. For example, a sudden rush of inquiries regarding pricing on a particular item may mean that demand is about to surge. Pre-empting such a surge could improve customer satisfaction and generate more revenue. However, inquiries rarely leave a database footprint.
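The pricing-inquiry scenario can be illustrated with a toy sliding-window detector of the kind a BAM correlation engine might apply once inquiry events are captured. The window, threshold, and item identifier are assumptions for illustration only.

```python
from collections import deque

class SurgeDetector:
    """Flag a surge when more than `threshold` inquiries for an item
    arrive within `window` seconds. A toy stand-in for the event
    correlation a BAM tool would perform on captured inquiry events."""

    def __init__(self, window=60, threshold=5):
        self.window = window
        self.threshold = threshold
        self.times = {}  # item -> deque of recent inquiry timestamps

    def inquiry(self, item, ts):
        q = self.times.setdefault(item, deque())
        q.append(ts)
        # Drop timestamps that have slid out of the window.
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold

# Four inquiries in 30 seconds trip a threshold of 3; a later lone
# inquiry does not, because the earlier ones have aged out.
d = SurgeDetector(window=60, threshold=3)
alerts = [d.inquiry("SKU-1042", t) for t in (0, 10, 20, 30, 200)]
```

None of this is possible with database-level detection, since the inquiries never change database state; it depends on the exit-based event capture discussed next.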
There are many other examples where the existence of an event cannot be discerned at the database level. That’s why early attempts to create event-driven integration in mainframe systems focused on application changes that put messages onto queues for transport to the target system. Such a strategy relies on the organization having the stomach to embark on wholesale modification of legacy code, a rather unpalatable prospect for many. Most mainframe applications are considered sacrosanct: few modifications are permitted because skills are scarce and source code is sometimes unavailable. Relying on database changes as an event trigger reduces the number of events that can participate in wider integration strategies and is fraught with shortcomings.
Mainframe databases and subsystems were designed during an era when integration with other systems was rarely a consideration. IBM has made many improvements in interoperability in recent years, but the company focuses entirely on request/reply-type integration and data replication. As we’ve seen, this doesn’t address the needs of the many integration projects in which mainframes play a part. Some technologies have begun using the data replication method to deliver limited event-driven integration; however, the issues of latency and limited visibility mean that they represent only partial solutions.
There are various components that must exist to support event-driven integration. Collectively, these are often referred to as the event-driven architecture:
- Event receptors are the only subsystem-specific elements of the architecture. They’re designed for a specific environment, inserting themselves at various places in the source systems so they can detect the occurrence of an event and any data that may describe it. The most obvious example is a database trigger: event receptor code that is driven when a change to a monitored relational database column occurs. Where an event receptor must capture events that don’t change trigger-capable relational databases, it exists as a highly environment-specific exit. Examples of such exits in the mainframe world include CICS global user exits (GLUEs), IMS data conversion exits, log exits, etc.
- Event processors receive the raw data that describes the event from the event receptor. They reformat and enrich the data so it can be consumed effectively by the target application. The reformatting and enriching proceed according to rules defined in the event processor. Such rules determine how the data should be marked up and what supporting environmental information is required to completely describe the event.
- Transport: The transport element of event-driven architecture defines how the completed messages describing the event are transmitted to the target application. This includes how they should be addressed and whether a technology such as WMQ is needed or whether simple HTTP is sufficient. Common requirements for transport layers within event-driven architectures relate to publish-and-subscribe functionality. This decouples the source and target systems from an addressing standpoint and makes for a more flexible, dynamic system.
- Event management application program interfaces (APIs) exist to externalize control of the environment to common management and development tools. Specifically, these APIs give access to event receptor control, event processors, and transport-addressing definitions. This enables products such as WebSphere Studio Application Developer Integration Edition (WSADIE) to control the event-driven architecture from within the development environment of applications likely to consume the events. Additionally, the J2CA standard relies on event management APIs to interact with the event-driven architecture.
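The four components above can be wired together in a minimal end-to-end sketch: a receptor hands raw data to a processor, which publishes an enriched message over a topic-based transport. The class and function names are illustrative; a real implementation would sit on infrastructure such as WMQ rather than in-process callbacks.

```python
# End-to-end sketch of the event-driven architecture described above.
# All names are illustrative, not any product's API.

class Transport:
    """Topic-based publish/subscribe, decoupling source from targets."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers.get(topic, []):
            handler(message)

def processor(raw, context):
    """Mark up raw receptor data and attach environmental context."""
    return {"event": raw, "operator": context["operator"]}

transport = Transport()
received = []
transport.subscribe("orders", received.append)  # a target application

# An event receptor (e.g. a database trigger or subsystem exit) would
# hand the raw change to the processor, whose output is published:
raw_change = {"table": "ORDERS", "key": 42, "op": "INSERT"}
transport.publish("orders", processor(raw_change, {"operator": "OP77"}))
```

Because targets subscribe by topic rather than by address, new consumers of an event can be added without touching the source system.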
Event-driven architecture represents a fundamental shift in the way application integration initiatives are viewed. Instead of relying upon complex programming techniques within EAI tools for managing the timing of new business processes, or entering into expensive, risky modification of underlying application code, event information can now be dynamically, non-invasively managed at the source of the business event.
This simple shift has profound effects on the latency of newly integrated business processes. However, underlying the simplicity of this description is a sophisticated new category of software infrastructure designed to make the identification, capture, and publishing of business events a realistic proposition for integration professionals. Implementing event-driven architecture successfully can deliver on the promise of EAI by eliminating latency and presenting new business opportunities by exposing business events previously hidden from view.