Enterprise Tech Journal recently had the pleasure of speaking with Kevin Goulet about overcoming the challenges inherent in managing Big Data on System z. As vice president of Product Management for the CA Technologies Database Management portfolio with a long history of product leadership roles, Kevin is in a prime position to observe the Big Data market and the issues companies face in establishing effective Big Data management solutions. Since a sizeable portion of the audience of Enterprise Tech Journal is DBAs who are often charged with the day-to-day handling of Big Data solutions, we focused our questions on issues that will help them be more effective in that role. Let’s see what this expert in the field had to say.

Enterprise Tech Journal: Thanks for joining us today, Kevin. Let’s start with your observations about the current state of the Big Data market.

Kevin Goulet: Glad to be here! As you know, a majority of companies have massive data stockpiles on their mainframes, culled from the cloud, social media, mobile devices, email, the Internet of Things, relational and non-relational databases, spreadsheets, video and countless other sources. This store of data is known as Big Data because it has grown so large that traditional data analysis and management solutions are too slow, too small and too expensive to handle it. Despite (or perhaps because of) that exponential growth, and as with any technology in the early stages of adoption and development, most companies are still in the discovery stage, evaluating the best means of extracting value from their data.

ETJ: Before we talk about how to deal with the challenges of Big Data management, can you define those challenges for us?

Goulet: Big Data administrators are the hands-on users of any business intelligence solution a company launches, so the challenges they face are critical to the success of any such initiative. First, you need an effective process for moving a cornucopia of structured and unstructured data from the mainframe to the Hadoop environment and potentially back again. (Hadoop is the open-source framework built around the MapReduce processing model, the engine behind extracting value from Big Data analysis.) Those processes need to be secure, close to real-time and performed at regular intervals. Hadoop clusters need to be up and running all the time, and the data moving back into the mainframe must be clean and in sync with the original database schemas so that it can be used productively. It's also important to automate management so that DBAs don't spend all their time doing manual scheduling. So I would say that the overall concerns of Big Data administrators come down to two things: sound processes and reining in the amount of time they commit to Big Data management.
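(Editor's note: For readers who want to picture what such an automated, recurring transfer might look like, here is a minimal illustrative sketch in Python. It is not CA's tooling or any specific product; the table name, connection string and Hadoop endpoints are placeholders, and it assumes the ibm_db DB2 driver and a WebHDFS-enabled Hadoop cluster are available.)

# Illustrative sketch only: a recurring job that extracts rows from a
# hypothetical DB2 for z/OS table and lands them in Hadoop via the WebHDFS
# REST API. Credentials, hosts and the ORDERS table are placeholders.
import csv
import io
import time

import ibm_db            # IBM's DB2 driver; assumed to be installed
import requests          # used here to call the WebHDFS REST API

DB2_CONN_STR = "DATABASE=SALESDB;HOSTNAME=zos.example.com;PORT=446;UID=dbauser;PWD=secret;"
WEBHDFS_URL = "http://hadoop-nn.example.com:9870/webhdfs/v1"
TARGET_PATH = "/landing/salesdb/orders"

def extract_orders():
    """Pull the current rows from the (hypothetical) ORDERS table as CSV text."""
    conn = ibm_db.connect(DB2_CONN_STR, "", "")
    stmt = ibm_db.exec_immediate(
        conn, "SELECT ORDER_ID, CUST_ID, AMOUNT, ORDER_TS FROM ORDERS")
    buf = io.StringIO()
    writer = csv.writer(buf)
    row = ibm_db.fetch_tuple(stmt)
    while row:
        writer.writerow(row)
        row = ibm_db.fetch_tuple(stmt)
    ibm_db.close(conn)
    return buf.getvalue()

def load_to_hdfs(payload):
    """Create a timestamped file in HDFS using WebHDFS's two-step CREATE."""
    name = "{}/orders_{}.csv".format(TARGET_PATH, int(time.time()))
    # Step 1: ask the NameNode where to write; it redirects to a DataNode.
    r = requests.put(WEBHDFS_URL + name,
                     params={"op": "CREATE", "overwrite": "true"},
                     allow_redirects=False)
    r.raise_for_status()
    # Step 2: send the data to the DataNode location returned in the redirect.
    r2 = requests.put(r.headers["Location"], data=payload.encode("utf-8"))
    r2.raise_for_status()

if __name__ == "__main__":
    # In practice a scheduler would drive this rather than a sleep loop.
    while True:
        load_to_hdfs(extract_orders())
        time.sleep(15 * 60)   # run every 15 minutes to stay close to real time

In a production setting the scheduling, error handling, security and data cleansing described above would be handled by dedicated tooling rather than a hand-rolled script like this.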

ETJ: Just how widespread are these challenges?

Goulet: We speak with administrators across the gamut of industries, and they all face these challenges. If misery loves company, then DBAs can take solace in the fact that they aren't alone in dealing with these issues.

ETJ: What about the challenges to the enterprise in creating a Big Data management solution?

Goulet: Funny you should ask, as I wrote a column for this issue of Enterprise Tech Journal (see page 68) that addresses that very topic. (Editor's note: Big Data administrators may want to suggest that their managers read Kevin's column.)

ETJ: Who would the Big Data administrator be in the typical enterprise?
