Conor O'Mahony's Database Diary

Your source of IBM database software news (DB2, Informix, Hadoop, & more)

Archive for the ‘Big Data’ Category

How IBM and Oracle Approach Big Data Solutions


This blog post refers to the definition of Big Data commonly in use today. I do not include mainframe-based solutions, which some people might argue tackle Big Data challenges.

Both IBM and Oracle are going after the Big Data market. However, they are taking different approaches. I’m going to take a few moments to have a very brief look at what both companies are doing.

First of all, Oracle has introduced an “appliance” for Big Data. IBM has not. I put the word appliance in quotes because I consider this Oracle appliance to be closer in nature to an integrated collection of hardware and software components than to a true appliance that is designed for ease of operation. But the more important consideration is whether an appliance even makes sense for Big Data. There is a decent examination of this topic in the following blog post from Curt Monash and the accompanying comment stream: Why you would want an appliance — and when you wouldn’t. But, regardless of your position on this subject, the fact remains that Oracle currently proposes an appliance-based approach, while IBM does not.

The other area I will briefly look at is the scope of the respective vendor approaches. In the press release announcing the Oracle Big Data Appliance, Oracle claims that:

Oracle Big Data Appliance is an engineered system optimized for acquiring, organizing, and loading unstructured data into Oracle Database 11g.

IBM takes a very different approach. IBM does not see its Big Data platform as primarily being a feeder for its relational database products. Instead, IBM sees this as being just one possible use case. The ways that customers want to use Big Data technologies extend well beyond that use case. IBM is designing its Big Data platform to cater for a wide variety of solutions, some of which involve relational technologies and some of which do not. For instance, the IBM Big Data platform includes:

  • BigInsights for Hadoop-based data processing (regardless of the destination of the data)
  • Streams for analyzing data in motion (where you don’t necessarily store the data)
  • TimeSeries for smart meter and sensor data management
  • and more

So, as you can see, there are fundamental differences in the ways that IBM and Oracle are developing products for Big Data solutions. For more information, see IBM Big Data and Oracle Big Data.


NYSE Euronext uses Netezza to Manage their “Big Data”


NYSE Euronext operates multiple securities exchanges, including the New York Stock Exchange and Euronext. As you might imagine, securities exchanges present significant data management challenges. But NYSE Euronext didn’t just want a transactional system; they wanted to do much more with their data, further increasing the challenge. At the 2011 IBM Information On Demand (IOD) conference, NYSE Euronext described these challenges and the solution they chose. In particular, they highlighted Netezza’s tremendous performance and how quickly they were able to get up-and-running with Netezza.

Not only is it easy to get up-and-running with Netezza, but it is easy to manage your environment on an ongoing basis. You can hear for yourself in this short video segment…

Written by Conor O'Mahony

February 24, 2012 at 12:02 pm

Get a Free Copy of the Forrester Wave™ for Enterprise Hadoop Solutions


Today, Forrester published its Wave analysis for enterprise Hadoop solutions. It has detailed coverage of the Hadoop solutions from vendors like IBM, MapR, Cloudera, Hortonworks, and others. If you are considering an enterprise Hadoop solution, such as IBM InfoSphere BigInsights, it will make for very interesting reading. You can download a free copy of the report from The Forrester Wave™: Enterprise Hadoop Solutions, Q1 2012.

Written by Conor O'Mahony

February 2, 2012 at 2:41 pm

Need Help Determining Hadoop Split Sizes? Use Adaptive MapReduce Instead!


IBM is actively working on adaptive features for the Map and Reduce phases of its InfoSphere BigInsights product (which is based on Apache Hadoop). In some cases, this involves applying techniques commonly found in mature data management products, and in some cases it involves developing new techniques. While a number of these adaptive features are still under development, there are some features in the product today. For instance, BigInsights currently includes an Adaptive Mapper capability that allows Mappers to successively process multiple splits for a job, and avoid the start-up costs for subsequent splits.

When a MapReduce job begins, Hadoop divides the data into multiple splits. It then creates Mapper tasks for each split. Hadoop deploys the first wave of Mapper tasks to the available processors. Then, as Mapper tasks complete, Hadoop deploys the next Mapper tasks in the queue to the available processors. However, each Mapper task has a start-up cost, and that start-up cost is repeated each time a Mapper task starts.

With BigInsights, there is not a separate Mapper task for each split. Instead, BigInsights creates Mapper tasks on each available processor, and those Mapper tasks successively process the splits. This means that BigInsights significantly reduces the Mapper start-up cost. You can see the results of a benchmark for a set-similarity join workload in the following chart. In this case, the tasks have a high start-up cost. The AM bar (Adaptive Mapper) in the chart is based on a 32MB split size. You can see that by avoiding the recurring start-up costs, you can significantly improve performance.

Adaptive MapReduce Benchmark: Set-Similarity Join Workload

Of course, if you chose the largest split size (2GB), you would achieve similar results to the Adaptive Mapper. However, you might then expose yourself to the imbalanced workloads that sometimes accompany very large splits.
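To get a feel for why amortizing start-up costs matters, here is a back-of-the-envelope cost model. This is my own sketch with invented numbers, not BigInsights code; it simply contrasts paying the start-up cost once per split with paying it once per processing slot.

```python
# Toy cost model: one short-lived Mapper task per split (classic Hadoop)
# versus one long-running Mapper per slot that pulls splits from a queue
# (the Adaptive Mapper idea). All numbers are invented for illustration.

def classic_mapper_time(n_splits, startup, work_per_split, n_slots):
    """Classic Hadoop: every split pays the start-up cost; tasks run
    in waves across the available slots."""
    waves = -(-n_splits // n_slots)            # ceiling division
    return waves * (startup + work_per_split)

def adaptive_mapper_time(n_splits, startup, work_per_split, n_slots):
    """Adaptive Mapper: start-up is paid once per slot, after which the
    task successively processes its share of the splits."""
    splits_per_slot = -(-n_splits // n_slots)  # ceiling division
    return startup + splits_per_slot * work_per_split

params = dict(n_splits=512, startup=5.0, work_per_split=2.0, n_slots=32)
print("classic :", classic_mapper_time(**params), "seconds")   # 112.0
print("adaptive:", adaptive_mapper_time(**params), "seconds")  #  37.0
```

The gap narrows as the per-split work grows relative to the start-up cost, which is consistent with the more modest improvement in the second benchmark below.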

The following chart shows the results of a benchmark for a join query on TERASORT records. Again, the AM bar (Adaptive Mapper) in the chart is based on a 32MB split size.

Adaptive MapReduce Benchmark: TERASORT Join Workload

In this case, the Adaptive Mapper delivers a more modest, though still real, performance improvement. The key benefit of these Adaptive MapReduce features is that they eliminate some of the hassle associated with determining split sizes, while also improving performance.

As I mentioned earlier in this post, a number of additional Adaptive MapReduce features are currently in development for future versions of BigInsights. I look forward to telling you about them when they are released…

In the meantime, make sure to check out the free online Hadoop courses at Big Data University. I previously blogged about my experiences with these courses in Hadoop Fundamentals Course on BigDataUniversity.com.

Written by Conor O'Mahony

December 7, 2011 at 1:07 pm

Comparing HDFS and GPFS for Hadoop


Here is a chart that compares the performance of Hadoop Distributed File System (HDFS) with General Parallel File System-Shared Nothing Cluster (GPFS-SNC) for certain Hadoop-based workloads (it comes from the Understanding Big Data book). As you can see, GPFS-SNC easily outperforms HDFS. In fact, the book claims that a 10-node GPFS-SNC-based Hadoop cluster can match the performance of a 16-node HDFS-based Hadoop cluster.

Comparing HDFS and GPFS for Hadoop Workloads

GPFS was developed by IBM in the 1990s for high-performance computing applications. It has been used in many of the world’s fastest computers (including Blue Gene and Watson). Recently, IBM extended GPFS to develop GPFS-SNC, which is suitable for Hadoop environments. A key difference between GPFS-SNC and HDFS is that GPFS-SNC is a kernel-level file system, whereas HDFS runs on top of the operating system. This means that GPFS-SNC offers several advantages over HDFS, including:

  • Better performance
  • Storage flexibility
  • Concurrent read/write
  • Improved security

If you are interested in seeing how GPFS-SNC performs in your Hadoop cluster, please contact IBM. Although GPFS-SNC is not in the current release of InfoSphere BigInsights (IBM’s Hadoop-based product), GPFS-SNC is currently available to select clients as a technology preview.

Written by Conor O'Mahony

November 30, 2011 at 1:07 pm

IBM is Baking NoSQL Capabilities into DB2 and Informix


IBM recently revealed its plan to integrate certain NoSQL capabilities into IBM DB2 and Informix. In particular, it is working to integrate graph store and key:value store capabilities into the flagship IBM database products. IBM is not yet indicating when these new capabilities will be available.

IBM does not plan to integrate all NoSQL technologies into DB2 and Informix. After all, there are many NoSQL technologies, and quite a few of them are clearly not suitable for integration into IBM’s products. The following chart summarizes the NoSQL product landscape, which includes more than 100 products across a number of database categories. IBM is saying that it will integrate certain NoSQL capabilities into its products and work hand-in-hand with other NoSQL technologies.

NoSQL Landscape

Readers of this blog will know that these developments are consistent with my view that certain NoSQL technologies will eventually find themselves integrated into the major relational database products. In much the same way as the major relational database products fended off the challenge of object databases by adding features like stored procedures and user-defined functions, I expect the major relational database products to fend off the NoSQL challenge with similar tactics. And don’t forget that the major relational database products have already integrated XML capabilities, providing XQuery as an alternate query language. It’s not too much of a stretch to imagine how several of these NoSQL capabilities might be supported in an optimized way as part of a relational database product.
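To make the key:value case concrete, here is a minimal sketch of the pattern, with sqlite3 standing in for the relational engine. This is my own illustration of the general idea, not IBM’s design; the point is that a relational table with a primary key already provides the essentials of a key:value store.

```python
# A key:value API layered over a relational engine (sqlite3 as stand-in).
import sqlite3

class KVStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, key, value):
        # INSERT OR REPLACE gives put() its upsert semantics; the primary
        # key index gives get() its direct key lookup.
        self.db.execute(
            "INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))
        self.db.commit()

    def get(self, key, default=None):
        row = self.db.execute(
            "SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else default

store = KVStore()
store.put("user:42", '{"name": "Ada"}')
print(store.get("user:42"))          # {"name": "Ada"}
print(store.get("user:43", "n/a"))   # n/a
```

The interesting engineering, presumably, is in providing this kind of interface without paying the full SQL-parsing overhead on every call, which is where the optimized support I mention above would come in.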

I look forward to blogging more about this topic as news about it emerges…

Written by Conor O'Mahony

November 21, 2011 at 9:00 am

Comparing “New Big Data” with IMS on the Mainframe


While it does not come up often in today’s data management conversations, the IMS database software is at the heart of many major corporations around the world. For many people, it is the undisputed leader for mission-critical, enterprise transaction and data-serving workloads. IMS users routinely handle peaks of 100 million transactions in a day, and there are quite a few users who report more than 3,000 days without unplanned outages. That’s more than 8 years without an unplanned outage!

IBM recently announced IMS 12, claiming peak performance of a remarkable 66,000 transactions per second. The new release features improved performance and CPU efficiency for most IMS use cases, and significant performance improvements for certain use cases. For instance, workloads that use the new Fast Path Secondary Index are 60% faster.

It is interesting to compare the performance of IMS with the headline-grabbing “big data” solutions that are all the rage today. For instance, at the end of August this year, we read how Beyonce Pregnancy News Births New Twitter Record Of 8,868 Tweets Per Second. I am not saying that IMS can replace the infrastructure of Twitter. Far from it. However, I am saying that, when you consider that IMS can handle 66,000 transactions per second, the relative performance levels of the “new big data” solutions when compared with IMS are food for thought. Especially when you consider the very significant infrastructure in place at Twitter, and the staff needed to manage that infrastructure. And don’t forget that IMS supports these performance levels with full read-write capability, full data integrity, and mainframe-level security.
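Just to put the quoted figures side by side (simple arithmetic on the numbers cited above):

```python
# Figures quoted earlier in this post.
ims_daily_txns = 100000000    # routine daily peak: 100 million transactions
ims_peak_tps = 66000          # IMS 12 claimed peak, transactions per second
twitter_record_tps = 8868     # record tweets per second (Beyonce news)

print("IMS daily average: %.0f tps" % (ims_daily_txns / 86400.0))           # ~1157
print("IMS peak vs Twitter record: %.1fx"
      % (ims_peak_tps / float(twitter_record_tps)))                         # ~7.4x
```

So even IMS’s routine daily volume averages more than a thousand transactions per second, and its claimed peak is roughly seven and a half times the Twitter record.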

I appreciate that many of today’s Web-scale businesses begin with capital constraints that preclude the hardware and software investments required for something like IMS. These new businesses need to be relatively agile, and depend upon the low barrier to entry that x86-based systems and open source or inexpensive software afford. However, I still think it is interesting to put this “new big data” in perspective.

Written by Conor O'Mahony

November 9, 2011 at 2:17 pm

Demo: Analyzing Twitter Data with IBM Big Data


Last week, I included a demonstration of Using Hadoop to Extract and Analyze Unstructured Information. Now I’d like to share another demo. This demo also shows InfoSphere BigInsights and InfoSphere BigSheets. BigInsights is essentially Apache Hadoop together with extensions for installation, management, security, and integration, while BigSheets is basically an easy-to-use interface for creating and running Map and Reduce jobs.

This demo shows you how to run sentiment analysis on Tweets. Some of the details of creating the specific text analytics are not included, but it is interesting and useful nonetheless. It also shows how you can easily run some cool visualizations on that data. Make sure to keep watching until the end, where David Barnes shows a great visualization on the UK Parliament data.
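For a flavor of what such a job involves under the covers, here is a deliberately simple Hadoop Streaming-style mapper. This is my own sketch with invented word lists; the demo itself uses BigSheets and IBM’s text analytics, not this code.

```python
#!/usr/bin/env python
# Toy Hadoop Streaming mapper: reads one tweet per line on stdin and
# emits (sentiment, 1) pairs for a reducer to sum per label.
import sys

POSITIVE = {"love", "great", "awesome", "happy", "win"}
NEGATIVE = {"hate", "awful", "terrible", "sad", "fail"}

for line in sys.stdin:
    words = set(line.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        label = "positive"
    elif score < 0:
        label = "negative"
    else:
        label = "neutral"
    print("%s\t1" % label)
```

The appeal of BigSheets is that you never write this plumbing yourself; you define the extractors and aggregations in its spreadsheet-style interface and it generates and runs the jobs for you.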

Don’t forget there is no charge for BigInsights Basic Edition. You can freely download it from InfoSphere BigInsights.

Written by Conor O'Mahony

October 3, 2011 at 8:30 am

Demo: Using Hadoop to Extract and Analyze Unstructured Information


Here’s a nice demo. It shows InfoSphere BigInsights, which is IBM’s Hadoop product. BigInsights is essentially Apache Hadoop together with extensions for installation, management, security, integration, and so on. The demo also shows InfoSphere BigSheets. BigSheets is basically an easy-to-use interface for creating and running Map and Reduce jobs. As you can see from the demo, BigSheets makes it quick and easy to apply text analytics extractors and filters to unstructured or semi-structured data. The demo itself shows how you can quickly analyze several aspects of revenue information pulled from earnings press releases. It even includes a nice round-trip to the annotated source data to see “why” certain conditions occurred.

Don’t forget there is no charge for BigInsights Basic Edition. You can freely download it from InfoSphere BigInsights.

Written by Conor O'Mahony

September 27, 2011 at 8:30 am

Benchmark Results for Informix TimeSeries in Meter Data Management


AMT-SYBEX is a leading provider of platforms for traditional and smart metering. They created the Meterflow Benchmark to help customers choose the best underpinning infrastructure for their platform, and they worked with IBM to run that benchmark with Informix TimeSeries. I previously blogged about Why Informix Rules for Time Series Data Management. Well, the results of this benchmark further illustrate the benefits of Informix TimeSeries. The following quote is from the resulting AMT-SYBEX case study:

We believe that this represents ground breaking levels of performance which is ten times faster than other published benchmarks in this area.

As you can see, Informix is 10x faster than the leading database software they previously worked with. If you read the Executive Summary, you will also see that IBM Informix enjoys almost linear scalability when going from 10 million meters up to 100 million meters, which is a great testament to the operational efficiency of Informix TimeSeries.

Written by Conor O'Mahony

September 26, 2011 at 1:50 pm

Why Informix Rules for Time Series Data Management


Informix has had TimeSeries data management capabilities for more than a decade. However, those capabilities are garnering more attention today than ever before. As our world becomes more instrumented, there is an increasing need to manage data from sensors. This sensor data is typically generated at regular intervals, creating the need for time series analysis.

As I’ve just said, Informix has long had TimeSeries capabilities. However, it wasn’t until recent customer evaluations became public knowledge that the incredible performance of Informix for time series applications became apparent. And now, as a result, Informix is being touted for Smarter Planet-type solutions, including Smart Grid systems.

I mentioned recent customer evaluations. One of those was at ONCOR, a provider of electricity to millions of people in Texas. ONCOR compared Oracle Database and Informix for its temporal data management and analysis needs. They discovered that, for their usage, Informix is 20x faster than Oracle Database when it comes to loading data from Smart Meters. They discovered that Informix is up to 30x faster than Oracle Database for their time series queries. And ONCOR discovered that, before applying data compression, Informix has storage savings of approximately 70% when compared with Oracle Database.

If you are using Oracle Database for a time series application, you should certainly consider Informix. You may significantly improve performance, while at the same time lowering your server, software, and storage costs. The secret sauce is in the way that Informix stores and accesses this time series data. The Informix approach is unique among relational database systems. It stores the information that identifies the data source only once, and then stores the time-stamped values for that source in an effectively unbounded column beside it. This approach results in both the storage savings and the huge performance gains.
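A rough way to picture the difference (my own sketch, not the actual Informix storage format):

```python
# Conventional relational layout: the meter ID and a full timestamp are
# repeated on every single reading.
rows = [
    ("meter-0001", "2011-09-20 00:00", 1.4),
    ("meter-0001", "2011-09-20 00:15", 1.3),
    ("meter-0001", "2011-09-20 00:30", 1.6),
    # ...and so on, for millions of meters reporting every 15 minutes
]

# TimeSeries-style layout: each source is identified once, and its
# readings live beside it as an ordered series. With a regular interval,
# a reading's position in the series implies its timestamp.
series = {
    "meter-0001": {
        "origin": "2011-09-20 00:00",
        "interval_minutes": 15,
        "values": [1.4, 1.3, 1.6],   # grows as new readings arrive
    },
}
```

Not repeating the source ID and timestamp on every reading is where the storage savings come from, and keeping each source’s readings physically together in time order is what makes loading and range queries so fast.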

If you want to read more about Informix and the management of time series data, check out these recent blog posts: This Smart Meter Stuff is for Real and Using Informix to Capture TimeSeries Data that Overwhelms Commodity Databases.

Written by Conor O'Mahony

September 20, 2011 at 8:30 am

Hadoop Fundamentals Course on BigDataUniversity.com


After spending some time reading about Apache Hadoop, I decided it was time to get my hands dirty. So this weekend, I took the Hadoop Fundamentals 1 self-paced course on BigDataUniversity.com. It is a really nice way to play with Hadoop. You have the choice of downloading the software and installing it on your computer, working with a VMware image, or working in the cloud. I chose the option of working in the cloud. Within a few minutes I had an Amazon AWS account, a RightScale account, and the software installed in the cloud. By the way, although the course is FREE, I did incur some cloud-related usage charges. They amounted to approximately $1 in Amazon charges for the time it took me to complete the course.

The course itself is quite good. It is, as the abstract implies, a high-level overview. It describes the concepts involved in Hadoop environments, describes the Hadoop architecture, and provides an opportunity to follow tutorials for using Pig, Hive, and Jaql. It also provides a tutorial on using Flume. Because of my experience with JavaScript and JSON, I feel most comfortable using Jaql to query data in Hadoop. However, the DBAs among you will probably feel most comfortable with Hive, given its SQL-friendly approach.

If you are curious about Hadoop, I’d recommend this course. I’m eagerly anticipating the availability of the follow-on Hadoop course…

Written by Conor O'Mahony

September 6, 2011 at 11:53 am

Are IBM DB2 and Oracle Database NoSQL Databases?


The NoSQL movement has garnered a lot of attention recently. It has been built around a number of emerging, highly scalable non-relational data stores. The movement is also giving a new lease of life to smaller non-relational database vendors who have been around for a while.

Last week, I noticed an entire track for XML and XQuery sessions at the recent NoSQLNow Conference in San Jose. If XML databases and XQuery are key constituents of the NoSQL world, does that mean that IBM DB2 and Oracle Database should be included in the NoSQL movement? After all, both IBM DB2 and Oracle Database store XML data and provide XQuery interfaces. Of course, I’m not being serious here. I don’t believe that the bastions of the relational world should be included in the NoSQL community. Are native XML databases, which have been around for a while, really in the spirit of the NoSQL movement? What’s your opinion?

I believe that the boundaries of the NoSQL community are perhaps a bit looser than they should be. Essentially, everything except relational databases is being grouped under the NoSQL banner. I can understand how this has happened, but does the NoSQL community really want to dilute its message by including all of these technologies, most of which have been around for quite some time and have had relatively limited traction? In the spirit of what I believe is at the genesis of the current NoSQL movement, I reckon that a NoSQL solution should have the following characteristics:
  • Not be based on the relational model
  • Have little or no acquisition cost
  • Be designed to run on commodity hardware
  • Use a distributed architecture
  • Support extreme or Web-scale databases

Notice that I don’t include a characteristic based on lack of consistency. I reckon that, over time, consistency will become a characteristic of some NoSQL environments.

By the way, earlier in this blog post I referred to the XML and XQuery capabilities in IBM DB2 and Oracle Database. In case you are curious, there is a significant difference in how DB2 and Oracle Database have incorporated XML capabilities into their respective products: Oracle essentially leverages its existing relational infrastructure to provide several ways to store XML data, while IBM built true native XML storage capabilities into its product. In other words, DB2 is indeed a true “native XML store”. In the past, I used to blog about native XML storage over at www.nativeXMLdatabase.com, before handing the reins over to Matthias Nicola. If you want a little more insight on XML support in Oracle Database, check out XML in Oracle 11g and Why Won’t Oracle Publish XML Benchmark Results for TPoX?

Written by Conor O'Mahony

August 28, 2011 at 2:51 pm

Introduction to Big Data Solutions


Here’s a short video that was recorded at the IDUG conference, where I talk about the characteristics of Big Data solutions, discuss some of the technologies involved, and describe some real-world Big Data solutions that IBM has implemented. It’s a high-level introduction, but if you’re not sure what this “Big Data” term refers to, you may find it useful.

In the video, I try to quantify what “big” means today, and describe some lessons we have learned while implementing Big Data solutions. Technologies introduced include Map/Reduce systems, systems for analyzing streaming data, Massively Parallel Processing (MPP) data warehouse systems, and in-memory database systems.

Those of you who know me in person will see that I was a little under the weather when the video was recorded. You can hear it in my voice, see it in my demeanor, and notice it in my cadence. I hope you can get past this and find the video useful.

Written by Conor O'Mahony

August 23, 2011 at 10:39 pm

IBM Launches Big Data Bootcamps


As many of you know, IBM has been making big investments in Big Data. This includes InfoSphere BigInsights (which is based on Apache Hadoop), InfoSphere Streams, IBM Netezza, and more than $14B in analytics-based acquisitions. IBM is now announcing a set of hands-on workshops that will be held around the world to help you get to grips with Big Data. There will be 1,200 of these free workshops held in more than 150 cities in 60 countries in 2011. For more information, see IBM Launches Global Bootcamps to Help Companies Tackle Big Data Challenges.

Written by Conor O'Mahony

March 10, 2011 at 2:39 pm
