Storage

Today, it’s generally accepted that System z environments span multiple, geographically dispersed data centers. These data centers are increasingly interconnected for the purposes of disaster recovery (DR) and continuous availability (CA). There are many reasons for this, including the ever-increasing dependency of businesses, governments and society on IT services. There’s also the requirement to meet government regulations related to the availability of these IT services. Finally, advances in connectivity technology mean it’s now possible to do what was previously impossible, infeasible or even unthinkable.

It’s also still common today to find that the different groups and stakeholders in these enterprises have all implemented different strategies and components. Sometimes, I’ve found these groups (the storage, network and hardware platform teams, among others) seldom even meet with each other. As a result, nobody has overall responsibility for the complete end-to-end architecture. So you end up with a hodge-podge, klugey, smorgasbord of an architecture and configuration, with lower performance, higher costs, less flexibility and lower resilience than you should have. After all, we’re talking about an architecture for your enterprise’s most important data.

Adding to the complexity is the management of these environments. Many data centers are relatively ad hoc in their structure. The multisite connectivity for these data centers wasn’t engineered as a single, complete solution in the way you would engineer a passenger airliner. So, silos of different elements have evolved, and these silos are often owned and managed by different teams or departments within the organization. All these factors combine to make changes difficult and complex. Sometimes, the internal politics makes the U.S. Congress look like a friendly environment. It doesn’t have to be that way.

You really should be taking a long, hard look at how you’re organized, with an eye toward establishing a true, coordinated, end-to-end, extended-distance architecture managed by a team responsible for end-to-end connectivity. Call it the connectivity architecture team. This team must represent a single point of control and ownership that enables all departments with responsibilities within the end-to-end connectivity solution to work together optimally.

A team that focuses on end-to-end solutions will create a less-fractured infrastructure that represents and conforms to a single architectural blueprint. The foundation of this infrastructure is the connectivity layer, where overlap is required across all the technology disciplines. This connectivity layer underpins all the other technology silos, so responsibility for, and ownership of, this layer must rest with one team. This often creates a requirement for a new role with responsibility for all the connectivity pieces. That doesn’t necessarily mean a new team of people; it might mean additional responsibilities for an existing role, or simply formalizing a role that already exists in a de facto manner. What matters is that the other teams acknowledge who holds responsibility for the connectivity layer and the components within it.

There will be naysayers—the empire builders who want to jealously control their fiefdoms. But at the end of the day, we’re talking about improving how you manage cross-site connectivity for your organization’s most important asset: your data.