Latest Entries

Numerous articles and white papers have been written on the performance advantages of Fibre Connection (FICON) over Enterprise Systems Connection (ESCON). Clearly, FICON is a major improvement over ESCON; data centers that have migrated from ESCON to FICON have seen improved response times and performance. Yet, only an estimated 20 to 25 percent of existing ESCON customers have migrated to FICON. A major factor in delaying a migration to FICON is the initial cost of entry, including the purchase of new hardware and infrastructure. Another factor was the lack of native FICON storage devices when FICON was initially announced as generally available. This has led to many prospective FICON customers having to do “rolling upgrades” of storage devices, which has further increased the cost of migrating. Mainframe users find it difficult to recognize and quantify cost savings associated with eliminating some ESCON infrastructure and leveraging other existing ESCON hardware by attaching it to the FICON network. …

Read Full Article →

In today’s 24x7 data center, one copy of a critical application volume isn’t enough. To run backup operations, load data warehouses, or test new versions of applications without disrupting the flow of information to users and applications, IT administrators must create multiple copies of primary production volumes…

Read Full Article →

There are many new and valuable functions and features in today's mainframe environment. This article addresses one of them: using a coupling facility structure and the system logger to manage LOGREC data collection.

LOGREC data consists of records created when hardware or software errors are encountered. The data can be printed and analyzed using the Environmental Record Editing and Printing (EREP) program. Collecting LOGREC data has always been a bit cumbersome (not as onerous as collecting SMF data, but still a pain).

Old-Style LOGREC Data Collection
With the traditional LOGREC data collection method, the LOGREC data is stored on a disk data set. When the disk data set fills up, a console message is issued. When this “full” message is generated, either the computer operator or the mainframe automation software submits a batch process that copies the data to another storage area (usually a disk or tape Generation Data Group [GDG]), and then clears out the LOGREC data set. When multiple MVS images are deployed in a Parallel Sysplex environment, each MVS image must implement its own LOGREC data collection system.
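For reference, the batch step that does the copy-and-clear is typically an EREP (IFCEREP1) job along these lines. This is only a minimal sketch: the data set names, the GDG, the space values, and the omitted DCB attributes are all illustrative, so check the EREP User's Guide for the exact requirements at your installation.

//OFFLOAD  EXEC PGM=IFCEREP1,PARM=CARD
//SERLOG   DD DSN=SYS1.LOGREC,DISP=SHR
//* ACC=Y copies the records to the next history generation;
//* DCB attributes for the history data set are omitted here.
//ACCDEV   DD DSN=SYSA.LOGREC.HISTORY(+1),DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(30,30),RLSE)
//DIRECTWK DD UNIT=SYSDA,SPACE=(CYL,(5),,CONTIG)
//TOURIST  DD SYSOUT=*
//EREPPT   DD SYSOUT=*
//SYSIN    DD *
PRINT=NO
ACC=Y
ZERO=Y
ENDPARM
/*

ZERO=Y clears the LOGREC data set after the copy, which is exactly the step the operator or the automation software is driving when the "full" message appears.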

The challenge is greater when you need to access this LOGREC data using the EREP program. When you run EREP, you usually need LOGREC data from multiple systems. With the traditional method, this means concatenating the appropriate LOGREC history data sets in your EREP JCL. While this is not an insurmountable ordeal, it's just plain clumsy.
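For example, to report across three systems, the relevant fragment of the EREP JCL ends up as a concatenation of each system's history GDG on the history input DD (the system and data set names here are purely illustrative):

//* One history GDG per system, concatenated as EREP history input
//ACCIN    DD DSN=SYSA.LOGREC.HISTORY(0),DISP=SHR
//         DD DSN=SYSB.LOGREC.HISTORY(0),DISP=SHR
//         DD DSN=SYSC.LOGREC.HISTORY(0),DISP=SHR

Every time a system is added to or removed from the complex, this JCL has to be touched, which is a large part of the clumsiness.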

New-Style LOGREC Data Collection
In a Parallel Sysplex environment, you can leverage your coupling facility and the MVS logger component to automate LOGREC data collection. With this method, all the MVS images in your Sysplex write their LOGREC data to a structure in your coupling facility. When the structure fills up, the MVS logger function offloads the data into a disk data set. The MVS logger also prunes old records automatically based on parameters the systems programmer sets. Note: In a single system Sysplex, you can use a DASD-only log stream instead of a coupling facility structure.
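Reporting gets simpler, too. Instead of concatenating history data sets, EREP can read the sysplex-wide log stream directly through the system logger (LOGR) subsystem interface. The sketch below is illustrative rather than definitive: the log stream name shown is the conventional one, and the date range, DCB values, and the IFBSEXIT exit name should be verified against the EREP and system logger documentation for your release.

//EREPLOG  EXEC PGM=IFCEREP1,PARM=CARD
//* Read LOGREC records for the whole sysplex from the log stream
//ACCIN    DD DSN=SYSPLEX.LOGREC.ALLRECS,
//            SUBSYS=(LOGR,IFBSEXIT,'FROM=(2004/300),TO=(2004/302)'),
//            DCB=(RECFM=VB,BLKSIZE=4000)
//DIRECTWK DD UNIT=SYSDA,SPACE=(CYL,(5),,CONTIG)
//TOURIST  DD SYSOUT=*
//EREPPT   DD SYSOUT=*
//SYSIN    DD *
HIST
ACC=N
SYSUM=Y
ENDPARM
/*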

Implementing LOGREC Data Collection
Implementing the new-style LOGREC data collection method requires that you:

•    Determine the size needed for the LOGREC structure
•    Create a new CFRM policy to add the LOGREC structure
•    Activate the new CFRM policy
•    Allocate the LOGREC log stream by updating the LOGR policy
•    Change the LOGREC recording medium to the log stream.

You can use two methods to determine the size needed for the LOGREC structure. IBM recommends that you determine how many records are written per second to the LOGREC data set on each of your systems, then use the CFSizer utility (www.ibm.com/servers/eserver/zseries/pso) to determine the size.
Another method is to guess. You might select a SIZE of 3,096KB and an INITSIZE of 2,048KB. At my site, this structure allocation supports five Logical Partitions (LPARs) and works well.

Next, you must run the IXCMIAPU utility to define the new LOGREC structure in a new CFRM policy. The partial job stream in Figure 1 shows the JCL and control cards for adding the LOGREC structure. A new CFRM policy (named ZOSCFRM8) is created that includes the definition for the LOGREC structure. The SIZE parameter sets the maximum size of the LOGREC structure to 3,096KB, and the INITSIZE parameter sets its initial allocation to 2,048KB…
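Since the figure isn't reproduced in this excerpt, the following sketch shows the general shape of such a job, covering both the CFRM structure definition and the LOGR log stream definition from the earlier checklist. All of the names and values (policy, coupling facility, structure, log stream, sizes, retention) are illustrative, and a real CFRM policy must also carry forward your existing CF and structure definitions, because a new policy replaces the old one rather than merging with it.

//DEFPOL   EXEC PGM=IXCMIAPU
//* Step 1: new CFRM policy containing the LOGREC structure
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(ZOSCFRM8) REPLACE(YES)
    CF NAME(CF01)
       TYPE(002064)
       MFG(IBM)
       PLANT(02)
       SEQUENCE(000000012345)
       PARTITION(0F)
       CPCID(00)
       DUMPSPACE(2048)
    STRUCTURE NAME(LOGREC)
       SIZE(3096)
       INITSIZE(2048)
       PREFLIST(CF01)
/*
//DEFLOGR  EXEC PGM=IXCMIAPU
//* Step 2: LOGR policy entries for the structure and log stream
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(YES)
  DEFINE STRUCTURE NAME(LOGREC)
     LOGSNUM(1)
     MAXBUFSIZE(4068)
     AVGBUFSIZE(4068)
  DEFINE LOGSTREAM NAME(SYSPLEX.LOGREC.ALLRECS)
     STRUCTNAME(LOGREC)
     LS_SIZE(1024)
     HLQ(IXGLOGR)
     HIGHOFFLOAD(80)
     LOWOFFLOAD(0)
     RETPD(30)
     AUTODELETE(YES)
/*

Once the definitions are in place, the new policy is activated with SETXCF START,POLICY,TYPE=CFRM,POLNAME=ZOSCFRM8, and the recording medium is switched to the log stream either by coding LOGREC=LOGSTREAM in IEASYSxx for the next IPL or dynamically with the SETLOGRC LOGSTREAM operator command.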

Read Full Article →

Never Say Never: My Foray Into Management

I swore I’d never do it . . . I’d never become one of them. I’d never drink the Kool-Aid, get the lobotomy, or trade in my laptop for an Etch-A-Sketch. I’d remain pure—a technician—unsoiled by politics or the hard choices a budget sometimes requires. I swore I’d never become a manager, but I did.

One Monday morning, Joe, my boss of nine years, called me into his office to tell me he had accepted early retirement and upper management wanted me to fill his position. Me! Why me?! What had I done?! Apparently, I had done enough to give them the idea I could perhaps get the job done.

I spent the next month with Joe as he cleared out 22 years of neatly filed notes, reports and studies, passing on what he thought I could use. I began attending meetings with Joe, and then, later, in his place. I began conducting the weekly staff meeting Joe had established and began attending his . . . er . . . my boss’s staff meeting. I began morphing myself into this new role. It wasn’t easy.

Then suddenly, the month was past and there I stood all alone. It took me two days to get the courage to move into my new office. I closed the door, sat down behind the desk, feeling awkward, elated, and intimidated all at once. Could I do this? There was no way to know except to try, I pondered, basking in the unfamiliar sunlight of my first window.

Then all hell broke loose. Month-end fell on me like a safe from a third-story window. All I could do was react. Unexpected, new workloads crashed in on our CPU-constrained complex and the phone wouldn't stop ringing. At this point, I really appreciated my former boss's expertise. How, I wondered, had he managed such chaos with apparent ease and grace? In the end, he would have been quite challenged, as our postmortem revealed. But that experience taught me a valuable lesson: Management is a state of confused helplessness, punctuated by episodes of terror.

After the dust settled, things got better. I began to get on top of my problems. And they were mine alone, as I soon realized. That realization really hit home. I must not only assume my responsibilities, but also take ownership of them. And I've had plenty of opportunities to take ownership. I quickly learned that I was responsible for things I didn't even know existed, much less owned. The phone would ring and a new responsibility would emerge. At first, I was overwhelmed, but I soon discovered I was allowing these responsibilities to crush me. I learned I didn't have to do that. I had an option called "delegation." I'm allowed to delegate. I am encouraged to delegate. I can't do this if I don't delegate. I must delegate—that's why I have those people working for me.
 
I discovered there is a certain beauty in delegation: Although I am ultimately responsible, I can share the burden of the myriad responsibilities with my team. I don't have to solve the problem; I just have to ensure the problem is solved. This was very liberating. I now steer the cart full of steaming opportunities, while my team pulls it, instead of trying to push it uphill by myself, only to have it roll back on top of me. The key to this is having a good group of people working together as a team. Wisdom dictates that you surround yourself with the smartest and best people you can find. The better they are, the freer you are to concentrate your own efforts.
 
Now I have a new problem: organization. How do I organize the efforts of this fine team? There are six team members; all very capable, all very experienced. How do I keep up? Should I even try? How much of the minutiae do I attempt to comprehend and maintain without miring myself in the details? Or worse yet, bog down a team member? After 24 years in the trenches, I've discovered that I'm all about details, and as we all know, the devil is in the details. I realized that I have to let go. I'm encouraged to let go. I can't do this if I don't let go. There is this voice inside me that says "Delegate and let go . . . " That's why I have these fine people working for me. This is the hardest part for me, but I'm getting there.
 
I must allow my staff to do what they do best. I assign projects and expectations, and allow them the freedom to complete these projects in their own fashion. I provide guidance, when necessary, and await the results. I believe when I’m doing this most correctly, my team becomes an extension of myself, expediently accomplishing my goals and achieving my mission. It’s like using power tools—I can get so much more accomplished than I ever could before.
 
Over the course of my technical career, like everyone, I was managed and mismanaged by good and bad managers. The good managers I try to emulate, having learned from them what to do, as well as when and how to go about it. The bad managers were equally, if not more, instructive, teaching me, by example, what not to do or how not to do it. I also consult with my peers. My challenges are nothing new, just new to me. And I was raised right. “Use all the brains you have and all the brains you can borrow,” Joe always said.

I’ll never say never again. I’ve made the fundamental change I swore I would never make. Most days, I’m delighted I did it. I enjoy managing; it’s very rewarding. But I miss the nooks and crannies and nuances of the technology. The sandbox was always such a lovely place to play! I remember my predecessor often said he enjoyed technical work vicariously through his people. I never fully understood what he meant by that until now…

Read Full Article →

Nestled in the heart of southwestern Wisconsin's lush corn and dairy farms, just north of the Illinois-Wisconsin border, lies the town of Monroe. It's a beautiful, bustling burg, population 11,000, and home to a brand-new ethanol refinery, the Berghoff Brewery, and The Swiss Colony.

The Swiss Colony is a thriving catalog, fulfillment, and retail business consisting of multiple companies. They are most famous for their fine cheeses and gourmet meat and nut assortments. They also operate an extensive group of women's clothing, home furnishings, and other consumer catalogs.

They run the bulk of their business from their Monroe, WI headquarters. Like any well-established company (they’ve been in business since 1926), they are long-time users of computer technology. A mainframe has been part of their IT environment since the early ’70s, when their Swiss Colony Data Center (SCDC) was incorporated. SCDC is the division that’s entirely responsible for all computerized technology support. In February 2004, SCDC became one of the newest z/OS sites in North America.

The story of how they converted to z/OS is downright inspirational.
In an era when so many IT projects fail, and so much time and money disappear into seemingly bottomless black holes for so little payback, The Swiss Colony's migration stands out as a clear, well-planned success story.
 
History
For many years, SCDC had run most of its operations on VM/VSE. They had grown to become one of the largest VM/VSE sites in the U.S. But they were beginning to feel a growth pinch, constrained in modern external Web connectivity, in raw operating system capacity, and in the headroom needed to meet long-term volume projections for new business initiatives.

Among the primary business areas pushing for a new approach was The Swiss Colony’s marketing and fulfillment area. These folks wanted assurances from the IT staff that the computer environment was as modern and sustainable as possible.

SCDC conducted several studies to investigate their options. The alternatives included:
•    AS/400 style servers
•    Linux Open Source on z/VM
•    Unix
•    z/OS.

SCDC had accumulated a vast, customized inventory of applications that worked perfectly for their business. These systems, based on traditional VSE technologies, such as COBOL, CICS, CA-IDMS and CA-DATACOM, were ideally suited to their needs. That’s no surprise because they were developed over many years specifically to handle the unique nature of their business.

In May 2002, SCDC decided to convert to z/OS because it would:
•    Provide a modern, full-featured environment
•    Preserve prior investment in customized applications
•    Support an entire suite of software that would enhance Web connectivity, file transfers, and external interfaces
•    Operate on existing hardware.

SCDC knew early on they would need some help planning the conversion.
As the planning began and outside vendors began to examine the details of moving SCDC from VM/VSE to VM/z/OS, the company learned things that made z/OS look like an even better choice.


Planning for z/OS
SCDC obtained bids from IBM, Computer Associates, Prince, and CONVTEK after giving each vendor figures on the expected volume of the conversion. This included such basics as JCL, application programs, and database subsystems.

SCDC selected CONVTEK and, by June 2002, was following a project plan and acquiring additional hardware. SCDC was among the earliest VSE adopters of a Virtual Tape Subsystem (VTS).
Like many retail businesses, The Swiss Colony has an annual peak season. This peak comes in the fourth quarter of the year, but the volume starts to build late in the third quarter. The conversion accounted for this and avoided any major rollouts late in the year.

SCDC set an ambitious goal of being completely converted to z/OS by April 2004, a migration of less than two years.

Mile Markers
Some checkpoints in the project plan included:
•    Getting to a z/OS test bed with no applications running
•    Converting and moving database systems and transaction monitors (CICS) from VSE to the z/OS test bed
•    Converting the applications themselves, including COBOL, JCL, and other components
•    Training IT staff in z/OS
•    Having a complete production clone of their VSE environment running on z/OS.

The production clone was the major goal for the final phase of the conversion and SCDC reached that milestone ahead of schedule.

SCDC began converting its CA-IDMS environment in September 2002. This portion of the work wasn't too difficult and took less than two weeks. Using a combination of File Transfer Protocol (FTP) transmissions and 3480 cartridges, the databases and dictionaries were brought across (VSE to z/OS), and then installed as cataloged z/OS files. The only elements that needed any conversion within CA-IDMS were a few internal parameters of the SYSGEN, a system-level Assembler exit, and the JCL that started the database region. The portability of CA-IDMS when moving between VSE and z/OS was amazing—it literally ran exactly the same on z/OS as it had on VSE.

Working Toward the Cutover
Throughout the conversion process, as milestones were met, SCDC management ensured that all of the conversion processes put in place were repeatable. That is, once a successful migration of a component (COBOL, CICS, CA-IDMS, etc.) had occurred, the process was documented, JCL was locked in (both on VSE and z/OS), and any ancillary setup work was scripted and clearly documented.

This approach was effective. As pieces of SCDC’s entire production VSE system came across, they were essentially point-in-time copies. This made the transferred components excellent for functional testing on the z/OS system. But, whenever there are copies of data and programs, the copies quickly become stale. When the time came for the real cutover (and periodic refreshes throughout the conversion), all the individual processes used throughout the ongoing conversion would have to be run again to guarantee that the latest versions of databases, programs, and files were moved.

Management made sure that training was an ongoing part of the conversion. SCDC used MVS Training Inc. and Ajilon Consulting as well as major software vendors such as Computer Associates and IBM. Initially, SCDC relied upon outside consultants who had expertise in z/OS, but they knew their own people eventually had to be able to handle the z/OS side of the house.

C-Day
The long-awaited end of the road, originally scheduled for April 2004, was moved up nearly two months at the request of The Swiss Colony’s Fulfillment Services division. As 2003 rolled into 2004, and the peak season frenzy began to subside, the request came for an accelerated cutover. The thinking was that the maximum amount of time should be allowed before 2004’s peak season, to allow for any kinks to be ironed out of the new z/OS environment. Plus, Fulfillment Services was eager to exploit the new features and capacity of z/OS.

With a well-planned battery of fallbacks, vendor support, and entire teams of SCDC technicians standing at the ready, they pulled the plug on VSE and plugged in z/OS on the first weekend in February 2004.
The cutover went nearly flawlessly and SCDC has been running z/OS V1R3 on their z800 (under z/VM) ever since. They kept the production VSE image running (as a guest under z/VM) until the end of June 2004 to ensure that nothing was missed or accidentally dropped.

The Dollars
This article wouldn’t be complete without a discussion of the dollars involved. SCDC went over their budgeted amount, but only by 4 percent, an amount management considered “on-budget.” The consensus among management was that the extra money spent could almost be thought of as the cost of getting done early and accurately.

Many IT projects of this size (and larger) seem to turn into cash-draining, interminable exercises in futility. Being so close to budget and being ahead of schedule is almost unheard of! SCDC is to be commended for their fiscal and technical discipline in this conversion.

Problems
So, it’s all sweetness and light for SCDC now that they’ve reached the nirvana of z/OS? Well, not quite.
There are still some islands of confusion and adjustments are still being made. Many of the things that have SCDC scratching their heads will become old hat as time passes. (For additional insight on challenging technical and human aspects of the migration, see the accompanying sidebar.)

Some difficulty is to be expected. After many years of doing things according to VSE standards and procedures, suddenly, there’s a new way. And new ways always seem strange, daunting, and different.

Unanticipated Benefits
SCDC discovered some things that made their z/OS migration look like an even better choice. These benefits centered around hardware advantages that emerged once they had a z/OS guest running under z/VM. Two major benefits were:

•    Using their VTS to full advantage when transferring data files between VSE and z/OS. Also, features of their VTS that were unavailable under VSE became available under z/OS.
•    Use of their EMC Redundant Array of Inexpensive Disk (RAID) arrays in Business Continuity Volume (BCV) mode. Essentially, this is DASD mirroring technology that was always available on the hardware, but couldn’t be configured using more modern software tools until z/OS was up and running.

Since part of the decision to migrate centered on moving to an operating system with native support for new, sophisticated external interfaces, these unanticipated benefits really played right to the company’s initial plan.

Some Final Thoughts
SCDC ran VSE/ESA under VM on a z800 2066-001. They had four VSE guests (A, B, C, and D) running, with A and B being reserved for production, C for installation work, and D for application development. They now run only two z/OS guests, one for production and one for application development, under z/VM on the exact same z800.

They also accomplished a long-planned printer and DASD upgrade late in the conversion, although it was not considered part of the z/OS conversion. They’re looking at a possible CPU upgrade soon, but for now, the 8GB 192 MIPS z800 is working well.
Older hardware (e.g., 3420 tape drives, 3480 cartridge drives, and 3174 terminal controllers) is being phased out. Newer interfaces are being exploited, and The Swiss Colony is moving quickly to take advantage of their hard-earned z/OS success.


More Insights on The Swiss Colony Migration

Jim Moore: What was the stickiest aspect of the conversion?
 
CIO: Probably the toughest thing to adjust to was the batch job scheduling. We are using CA-Jobtrac on z/OS but had nothing at all like this on VSE. We’re starting to get used to it.

Technical support manager: When we switched to z/OS, we also upgraded all third-party software products to their latest release levels. So, in a way, this was a double conversion with a new OS plus new releases of things such as CA-IDMS and CA-DATACOM. Further, some third-party products on VSE simply fell away because they were only designed for VSE. There was a lot of renegotiating of software licenses required—more than we initially thought. New products also came into use on z/OS as well, such as the job scheduler already mentioned. MVS Quick-Ref from Chicago-Soft is another example.

Senior systems programmer: The system catalog still drives us nuts! On VSE, we used CA-DYNAM. The DYNAM catalog and the z/OS catalog just do not work the same. But, we’re gradually learning the differences and adjusting.

JM: How did the in-house technical staff react to the conversion? Apprehension? Dread? Joy? Willingness to do whatever it took?
 
Technical support manager: Most of our long-time IT employees have 20 to 30 years invested in VSE expertise. Naturally, the change to z/OS was a major issue for them. On the other hand, more recent hires were well-versed in z/OS (MVS, actually). We are still in the process of balancing all of this out.

Senior systems programmer (a veteran VSE technician): Every day, there’s something new to learn on z/OS. Yes, it’s tough to switch, especially so late in life. I find myself still doing things the VSE way and this burns me from time to time. I’ve gone from being an expert to being a newbie. But then again, so has everyone else.

CIO: Another benefit of converting to z/OS, from a human resources perspective, is that we anticipate tapping into a deeper pool of talent when hiring. It was becoming more difficult to find VSE talent. Over time, the conversion will serve us well in this area.

JM: Did the users notice any change or were they at all affected by the conversion? Have you gotten any feedback, negative or positive, from users?

Technical support manager: For the most part, the impact was minimal. This was a major goal. One user change involved a customized reporting solution that we had developed for VSE. We wrote a report-viewer that took mainframe reports from the VSE POWER queues and transferred them to a LAN server for online viewing. We had to replace this piece of software. We selected RWeb from Allen Systems Group as well as some native FTP transfers from VTAM, to replace the old POWER system. This was the most visible change for our users.

Senior systems programmer: On cutover day, we had an entire SWAT team standing by. Early that Saturday afternoon, we converted and then backed everything up. We turned it loose on the users and waited for the phone calls. None came. All of the legacy systems looked exactly the same running on z/OS as they did on VSE. Many users probably weren’t even aware of the magnitude of the change. We maintained round-the-clock support for the week that followed but we had no major problems.

CIO: We had 100 percent support from our executive management. Their attitude was extremely supportive. This helped foster a spirit of cooperation between the SC data center and our users…

Read Full Article →

What network connectivity options do IBM mainframes support?


    Several years ago, this question had a single answer: SNA, of course! Using native SNA support, protocol converters, or emulators, you could connect any communication device to an IBM mainframe by making the device act as an SNA device. Judging by market share, SNA was the de facto communications standard of the industry. This is no longer the case. Today, TCP/IP connectivity is perhaps the most popular of the several connectivity options for z/OS systems.
     
    SNA networks, along with their more recent derivatives such as Advanced Peer-to-Peer Networking (APPN) and High Performance Routing (HPR), are more robust and secure than TCP/IP. But this isn't the first time a better technology has lost share to a more mediocre one; does anyone remember Betamax video tapes or the OS/2 operating system? Since TCP/IP communications has become ubiquitous, our goal is to make it better by making it more secure and robust.

    This article addresses system security and high-availability, which are among the main challenges in modern networking infrastructures.

    How do you offer high-availability, connectivity, and accessibility to legitimate users and simultaneously prevent access for illegitimate users and malicious traffic?
     
    SNA networks have been known for resiliency, reliability, and security. When was the last time you heard of an SNA “worm” or Distributed Denial of Service (DDoS) attack that either completely or effectively brought down an SNA network? Unfortunately, TCP/IP networks are not as resilient, reliable, or secure. The following is from a seven-year-old (but still relevant) book I coauthored in 1997 titled SNA and TCP/IP Enterprise Networking (ISBN 0131271687):
     
    “Securing end-to-end connectivity solutions, utilizing Internet/intranet networks for mainframe-based information access, is becoming more and more popular every day, and for a good reason: Internet’s growing popularity promises easy and effective access to information.”
     
    Quoting Reed Hundt, the former chairman of the U.S. Federal Communications Commission (FCC): “We can make the libraries of the world available at the touch of a key to kids in their classrooms. That alone would do more to advance educational equality than anything would since the work of Horace Mann. And such a network would single-handedly change the mass-market, least common-denominator model of the media . . .”

    In our book, I also quoted Frank Dzubeck, president of Communications Network Architects Inc., who eight years ago predicted that: “By 2003, service providers will have effectively replaced most dedicated private line data networks with virtual network services. This phenomenon is well-founded in history: It will mirror, for data, a similar fate that befell dedicated voice networks.”
     
    That was a remarkably accurate prediction. Just one other quote to keep in mind—this one from baseball’s Yogi Berra: “Prophecy is really hard—especially about the future.”

    One of the main reasons service providers haven't replaced most private lines with virtual connections over the Internet is security exposures associated with the public Internet. Or, as Reed Hundt noted, "The economics of the Internet are wacky. We have a hybrid of regulated telephone monopolies and not-for-profit academic institutions where we need to have a competitive market. This hybrid gives us part useful chaos and part pure obstacle."

    So, what’s so special about Internet security? Why are Internet security concerns many magnitudes higher than those for, say, public circuit, packet, or frame relay switching networks? One of the main differences is the fact that no single body is responsible for the Internet. If, for example, a company were using a certain carrier for public frame relay service, this carrier would have contractual obligations to deliver reliable, secure services. With the Internet, such an approach isn’t applicable. Although Virtual Private Network (VPN) offerings present an interesting exception to this notion, this is merely a case of an exception that proves the rule.

    Another major reason for security challenges is the ever-growing number of sophisticated users who are surfing the net, sometimes with a clear intention to break into someone’s network. Some do it for money, some for demonic reasons, and some even for pleasure. However, don’t be fooled by the innocent intentions of the hobbyist amateur hackers. The first computer virus inventors didn’t do it for profit either, but caused some significant losses.

    People claiming their network is 100 percent secure are probably fooling themselves or their bosses. However, because of the security exposure any business gets by connecting to the Internet, it’s the responsibility of network managers to protect their networks to such extent that it would be cost-prohibitive for both professional and amateur hackers to break in.

    Security management has recently gained increased exposure to the general public. Beyond the issues of information integrity, privacy and enterprise reliability, the costs to businesses are enormous. According to statistics published by z/Journal in July 2004: “Despite more spending on security technology, attacks by hackers and virus writers are up for the first time in three years and downtime has increased. Research firm Computer Economics calculates that viruses and worms cost $12.5 billion worldwide in 2003. The U.S. Department of Commerce’s National Institute of Standards and Technology says software flaws each year cost the U.S. economy $59.6 billion, including the cost of attacks on flawed code.”

    Unless caught and forced to do so, companies rarely disclose compromises to their security. According to statistics published by Carnegie Mellon’s CERT Coordination Center, 137,529 security incidents were reported in 2003. This represents almost 43 percent of the 319,992 total incidents reported during the last 16 years. The number of mail messages handled by CERT also grew in geometrical proportion in the last four years: from 56,365 in 2000, to 118,907 in 2001, to 204,841 in 2002, to 542,752(!) in 2003.

    The Computer Security Institute (CSI) surveyed about 500 security practitioners, senior managers, and executives (mostly from large corporations and government agencies in the U.S.) and found that:
     
    •    The most expensive computer crime over the past year was due to denial of service.
    •    The percentage of organizations reporting computer intrusions to law enforcement over the last year is on the decline. However, the key reason cited for not reporting intrusions to law enforcement is the concern for negative publicity.
    •    Most organizations conduct some form of economic evaluation of their security expenditures, with 55 percent using ROI, 28 percent using Internal Rate of Return (IRR), and 25 percent using Net Present Value (NPV).
    •    More than 80 percent of the surveyed organizations conduct security audits.
    •    Most organizations don’t outsource computer security activities. Among organizations that do outsource some computer security activities, the percentage of security activities outsourced is quite low.
    •    The Sarbanes-Oxley Act is beginning to have an impact on information security in some industries.
    •    Most organizations view security awareness training as important, although respondents from all sectors typically don’t believe their organization invests enough in this area.

    Security Standards
    Extensible Authentication Protocol Over Ethernet (EAPOE - IEEE 802.1X) is a security technology that's getting a lot of attention lately, and rightly so. EAP was originally developed for the Point-to-Point Protocol (PPP) and is described in RFC 2284 (www.ietf.org/rfc/rfc2284.txt). The Institute of Electrical and Electronics Engineers (IEEE) extended it to Ethernet LANs. The standard cites several advantages of authentication on the edges of the enterprise network:

    •    Improved security
    •    Reduced complexity
    •    Enhanced scalability
    •    Superior availability
    •    Translational bridging
    •    Multi-cast propagation
    •    End-station manageability.
    Although IEEE 802.1X is often associated with security protection for wireless communication, it's much more appropriate for wired protocols, where the physical cabling infrastructure is much more secure. IEEE 802.1X communication begins with an unauthenticated client device attempting to connect with an authenticator. The access point/switch responds by enabling a port that passes only EAP packets from the client to an authentication server (e.g., RADIUS). Once the client device is positively authenticated, the access point/switch opens the client's port for other types of traffic. A variety of networking gear vendors and many client software providers currently support IEEE 802.1X technology.

    IEEE 802.1X proved inadequate for securing wireless infrastructure, so earlier this year the IEEE finalized a more appropriate security standard for wireless: IEEE 802.11i. The Wi-Fi Alliance (www.wi-fi.org/), a vendor group focused on wireless technology standards compliance, is starting to certify products for IEEE 802.11i compliance. These products will be branded Wi-Fi Protected Access 2 (WPA2), not to be confused with the original WPA, which the Wi-Fi Alliance released in 2003 to address some of the shortcomings of Wired Equivalent Privacy (WEP). WEP was part of the original IEEE 802.11 standard and received considerable criticism due to its security exposures.

    The Center for Internet Security (CIS) (www.cisecurity.org/) is an important source for Internet security standards. The CIS develops security templates and benchmarks for evaluating the security posture of the most popular products on the Internet. Security templates, benchmarks, and scoring tools are available free from the CIS for Solaris, different flavors of the Microsoft Windows operating system, HP-UX, Linux, Cisco routers, and the Oracle database.
    In September 2004, CIS started development of a new benchmark for one of today’s hottest, but least secure technologies: wireless networks. In addition, the CIS offers tips on email and an automated scanning tool for the System Administration, Networking, and Security (SANS) FBI “Top 20” vulnerabilities list.

    SANS Institute (www.sans.org/) is a cooperative research and education organization established in 1989 to support security practitioners from industry and academia by providing them with an additional vehicle for sharing information and lessons learned in search of security solutions. Since CIS tools are being developed according to industry best practices based on information contributed by its members, the more organizations that decide to join the CIS and implement benchmark tools, the better and more secure the Internet will become.

    Another interesting security standard is the Baseline Operating Systems Security (BOSS) project (http://bosswg.org/) from the IEEE Computer Society. This standard is being developed by the IEEE P2200 Work Group within an emerging IEEE Information Assurance community that aims to realize the full potential of IT to deliver the information it generates, gathers, and stores. In addition, this group helped launch the IEEE P1618 and P1619 standards relating to Public Key Infrastructure (PKI) certificates and architecture for encrypted, shared media, respectively.

    Dealing With DDoS
    DDoS attacks caused the most financial damage in 2003. Different tools for dealing with DDoS attacks are available today, and it's fair to assume that anyone trying to introduce such a tool into an existing network infrastructure wouldn't want to move it into production without adequate testing. Testing a DDoS protection tool is probably one of the most challenging tasks for a service provider. About two years ago, I had an opportunity to beta test an interesting and unique anti-DDoS tool. DDoS attacks have become critical points of pain, introducing serious problems for many organizations across multiple industries, academia, and government, and they have cost businesses billions of dollars in lost revenue.

    In DDoS attacks, hackers use potentially thousands of previously compromised computer systems to unleash malicious traffic assaults against a network or server, depriving legitimate users of normally available network services and resources. The distributed nature of DDoS attacks, as well as the complexity and volume of such attacks, have so far prevented practical solutions. Most anti-DDoS tools focus on identifying the sources of an attack and then attempting to shut off traffic from the suspected sources, or to the victim, by manipulating access lists on routers or firewalls. The anti-DDoS tool we were testing takes a different approach and does not reside on a critical path in the network. Once an attack is identified, the appliance diverts the traffic to itself, but instead of shutting off all traffic to the victim, it simply discards the malicious frames and lets the legitimate traffic through.

    Due to the distributed nature of DDoS, lab tests are inadequate for accurately simulating these attacks, and therefore cannot sufficiently predict how well a technology will perform in a real-life Internet environment. So, to test a DDoS tool in a real-life environment, we decided to simulate DDoS in a coordinated effort from the Internet. This test was successfully performed in coordination with Mercury Interactive, which launched multiple coordinated, distributed DDoS attacks, such as SYN flood, TRINOO, Targa3, Jolt, UDP floods, and IP fragments, via the Internet against our tested devices while concurrently measuring the service levels for legitimate users. The results of this beta test demonstrated, in a close-to-real-life situation, that sites located within the DDoS-protected environment enjoyed an uninterrupted flow of legitimate traffic, even at the peak of the various DDoS attacks.

    Summary
    Internet security vulnerabilities are never going to disappear. Since z/OS systems are now being connected to the Internet, network managers must be aware that there’s no such thing as 100 percent security. Remember, your goal isn’t total system security but making it cost-prohibitive for intruders to get through your security defense systems. Technology is certainly getting more sophisticated, but so are security hackers. So, it’s even more important to stay on top of the technology and implement the most sophisticated security protection tools available.

    Technology is only one element of the security puzzle and enforcement of sound security policies is the most important factor in protection. Assuming such policies are in place, the human factor could be isolated for evaluation when an exposure occurs. It’s important for security policies to be accepted and enforced at all levels within an organization, and management must take steps to educate staff about the need for security and how to take appropriate measures to maintain it. In many security-conscious environments, the primary emphasis is placed on blocking external attacks. Too often, organizations fail to recognize the vulnerabilities that exist within their own environments, such as internal attacks from disgruntled employees and industrial espionage.

    Network security can be protected through a combination of high-availability network architecture and an integrated set of security access control and monitoring mechanisms. Recent incidents demonstrate the importance of monitoring security and filtering incoming traffic as well as outbound traffic generated within the network. Defining a solid, up-to-date information protection program with associated access control policies and business recovery procedures should be the first priority on the agenda of every networked organization.

    There are no magic bullets for security protection and the only way to address security appropriately is to deal with it at multiple levels. Having a well-designed network with secure Virtual LANs (VLANs) and properly configured firewalls with the appropriate Access Control Lists (ACLs) is important, but not sufficient. In addition to the firewall, it’s important to implement both network- and host-based intrusion detection and intrusion protection tools as well as DDoS protection tools (see Figure 1).



Securing physical and networking layers is critical for building a secure, reliable enterprise infrastructure. Nevertheless, all seven layers of the Open Systems Interconnection (OSI) model play an important role in security infrastructure. Some take it even a layer higher, to the so-called layer 8, or the “political layer.”

Specifically, a firm's information security posture—an assessment of the strength and effectiveness of the organizational infrastructure in support of technical security controls—must be addressed through the following activities:

•    Auditing network monitoring and incident response
•    Communications management
•    Configurations for critical systems: firewalls, Domain Name Systems (DNSes), policy servers
•    Configuration management practices
•    External access requirements and dependencies
•    Physical security controls
•    Risk management practices
•    Security awareness and training for all organization levels
•    System maintenance
•    System operation procedures and documentation
•    Application development and controls
•    Authentication controls
•    Network architecture and access controls
•    Network services and operational coordination
•    Security technical policies, practices, and documentation.

A properly designed network with comprehensive security policies aimed at high availability, recoverability, and data integrity institutes the necessary infrastructure to conduct activities in a secure, reliable fashion…

Read Full Article →

According to some analysts, upward of 60 percent of corporate data is stored and processed on the mainframe. Mainframe data sources are behind the world’s largest Online Transaction Processing (OLTP) applications, maintain strategic information, and represent a significant capital and human investment. …

Read Full Article →

Application integration is a common theme underlying e-business, Customer Relationship Management (CRM), Supply Chain Management (SCM), and many other modern business strategies. Enterprises use integration to enable faster, more effective cooperation among an ever-expanding range of business units and people. As modern application architectures continue to spread, organizations with an installed base of working applications face a disconcerting choice: what to do with them. The options fall into three categories: extend, replace, or migrate. Replacement options include packaged software or complete redevelopment. Migration options include conversion as-is, or transformation to a new language and platform. For many, the risks are beyond their tolerance level, or the costs are simply too high. Legacy extension, or integration of existing applications with new, more modern architectures, can be relatively low cost and low risk…

Read Full Article →