Archive for the ‘Oracle Database’ Category
JSC Rietumu Banka is one of the largest banks in the Baltic states. They recently migrated their data from Oracle Database on Sun servers to IBM DB2 on Power Systems servers, and enjoyed the following benefits:
- Up to 30 times faster query performance
- 20-30% reduction in total cost of ownership
- 200% improvement in data availability
Like many major banks, JSC Rietumu Banka faced recent pressure to reduce IT costs. In particular, they were concerned with total cost of hardware, software, and staffing for their banking applications which used Oracle Database on Sun servers. After a thorough technical and financial evaluation, JSC Rietumu Banka chose to migrate their environment to DB2 on Power Systems servers.
Of course, the ease of migration was a significant factor in JSC Rietumu Banka being able to achieve these benefits. For more information about the “compatibility features” that make it easy to migrate from Oracle Database to IBM DB2, see Gartner: IBM DB2’s Maturing Oracle Compatibility Presents Opportunities, with some Limitations.
To learn more about this specific migration, read the full IBM case study.
This blog post refers to the definition of Big Data commonly in use today. I do not include mainframe-based solutions, which some people might argue tackle Big Data challenges.
Both IBM and Oracle are going after the Big Data market. However, they are taking different approaches. I’m going to take a few moments to have a very brief look at what both companies are doing.
First of all, Oracle have introduced an “appliance” for Big Data. IBM have not. I put the word appliance in quotes because I consider this Oracle appliance to be closer in nature to an integrated collection of hardware and software components, rather than a true appliance that is designed for ease of operation. But the more important consideration is whether an appliance even makes sense for Big Data. There is a decent examination of this topic in the following blog post from Curt Monash and the accompanying comment stream: Why you would want an appliance — and when you wouldn’t. But, regardless of your position on this subject, the fact remains that Oracle currently propose an appliance-based approach, while IBM does not.
The other area I will briefly look at is the scope of the respective vendor approaches. In the press release announcing the Oracle Big Data Appliance, Oracle claim that:
Oracle Big Data Appliance is an engineered system optimized for acquiring, organizing, and loading unstructured data into Oracle Database 11g.
IBM takes a very different approach. IBM does not see its Big Data platform as primarily being a feeder for its relational database products. Instead, IBM sees this as being one possible use case. However, the way that customers want to use Big Data technologies extends well beyond that use case. IBM is designing its Big Data platform to cater for a wide variety of solutions, some of which involve relational solutions and some of which do not. For instance, the IBM Big Data platform includes:
- BigInsights for Hadoop-based data processing (regardless of the destination of the data)
- Streams for analyzing data in motion (where you don’t necessarily store the data)
- TimeSeries for smart meter and sensor data management
- and more
At the 2011 IBM Information On Demand (IOD) Conference, Coca Cola Bottling spoke about their experiences when moving from Oracle Database to IBM DB2. I have included some very brief video segments shot at the conference below. It is really interesting to hear about the experiences and impact of switching from Oracle to IBM from the people involved.
In the following short video segment, hear how Coca Cola Bottling have changed their fix pack philosophy as a result of moving. With Oracle Database, they would avoid fix packs unless they “had to”. But with DB2, applying fix packs is much easier and faster, providing faster access to new functionality, performance improvements, and bug fixes. Also, hear about how Coca Cola Bottling have had significant data storage savings thanks to moving to DB2. Who wouldn’t want to reclaim some of that IT budget allocated for storage purchases :-)
And finally, hear about their experiences with performance boosts and the autonomic computing capabilities in DB2.
Yesterday, Oracle announced a new TPC-C benchmark result. They claim:
In this benchmark, the Sun Fire X4800 M2 server equipped with eight Intel® Xeon® E7-8870 processors and 4TB of Samsung’s Green DDR3 memory, is nearly 3x faster than the best published eight-processor result posted by an IBM p570 server equipped with eight Power 6 processors and running DB2. Moreover, Oracle Database 11g running on the Sun Fire X4800 M2 server is nearly 60 percent faster than the best DB2 result running on IBM’s x86 server.
Let’s have a closer look at this claim, starting with the first part: “nearly 3x faster than the best published eight-processor result posted by an IBM p570 server“. Interestingly, Oracle do not lead by comparing their new leading x86 result with IBM’s leading x86 result. Instead they choose to compare their new result to an IBM result from 2007, exploiting the fact that even though this IBM result was on a different platform, it uses the same number of processors. Of course, we all know that the advances in hardware, storage, networking, and software technology over half a decade are simply too great to form any basis for reasonable comparison. Thankfully, most people will see straight through this shallow attempt by Oracle to make themselves look better than they are. I cannot imagine any reasonable person claiming that Oracle’s x86 solutions offer 3x the performance of IBM’s Power Systems solutions, when comparing today’s technology. I’m sure most people will agree that this first comparison is simply meaningless.
Okay, now let’s look at the second claim: “nearly 60 percent faster than the best DB2 result running on IBM’s x86 server“. Oracle now compare their new leading x86 result with IBM’s leading x86 result. However, if you look at the benchmark details, you will see that IBM’s result uses half the number of processors, cores, and threads. If you look at performance per core, the Oracle result achieves 60,046 tpmC per core, while the IBM result achieves 75,367 tpmC per core. While Oracle claims to be 60% faster, if you take the relevant system sizes into account and determine the performance per core, IBM is actually 25% faster than Oracle.
Finally, let’s not forget the price/performance metric from these benchmark results. This new Oracle result achieved US$.98/tpmC, whereas the leading IBM x86 result achieved US$.59/tpmC. That’s correct: when you determine the cost of processing each transaction for these two benchmark results, IBM is 39% less expensive than Oracle. (BTW, I haven’t yet had a chance to determine whether Oracle Used their Usual TPC Price/Performance Tactics for this benchmark result, as the result details are not yet available to me; but if they have, the IBM system will prove to be even less expensive relative to the Oracle system.)
Benchmark results are as of January 17, 2012: Source: Transaction Processing Performance Council (TPC), http://www.tpc.org.
Oracle result: Oracle Sun Fire X4800 M2 server (8 chips/80 cores/160 threads) – 4,803,718 tpmC, US$.98/tpmC, available 06/26/12.
IBM results: IBM System p 570 server (8 chips/16 cores/32 threads) – 1,616,162 tpmC, US$3.54/tpmC, available 11/21/2007. IBM System x3850 X5 (4 chips/40 cores/80 threads) – 3,014,684 tpmC, US$.59/tpmC, available 09/22/11.
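To make the arithmetic behind the per-core and price/performance comparisons explicit, here is a small Python sketch that re-derives the figures from the published results above:

```python
# Figures taken directly from the TPC-C results quoted above.
oracle = {"tpmC": 4_803_718, "cores": 80, "usd_per_tpmC": 0.98}   # Sun Fire X4800 M2
ibm_x86 = {"tpmC": 3_014_684, "cores": 40, "usd_per_tpmC": 0.59}  # IBM System x3850 X5

# Performance per core.
oracle_per_core = oracle["tpmC"] / oracle["cores"]   # ≈ 60,046 tpmC per core
ibm_per_core = ibm_x86["tpmC"] / ibm_x86["cores"]    # ≈ 75,367 tpmC per core

print(f"Oracle: {oracle_per_core:,.0f} tpmC per core")
print(f"IBM:    {ibm_per_core:,.0f} tpmC per core")

# IBM's per-core advantage, roughly the 25% cited above.
print(f"IBM per-core advantage: {ibm_per_core / oracle_per_core - 1:.1%}")

# IBM's price/performance advantage, roughly the 39% cited above.
print(f"IBM cost advantage: {1 - ibm_x86['usd_per_tpmC'] / oracle['usd_per_tpmC']:.1%}")
```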
Oracle garnered a lot of headlines a couple of weeks ago with their Oracle Database Appliance. It didn’t take long for SmarterQuestions to indicate why the IBM Smart Analytics Systems are A Smarter Database System for SMB Clients.
Recently, IBM added the following systems:
- IBM Smart Analytics System 5710, which is an x86-based Linux system
- IBM Smart Analytics System 7710, which is a Power Systems-based UNIX system
- IBM Smart Analytics System 9710, which is a mainframe-based system
These systems include everything you need to quickly set up a data warehouse environment, and to quickly have your business analysts working with the data.
On top of the servers and storage, each system includes database and data warehouse software, Cognos software, cubing services, data mining capabilities, and text analytics capabilities. And it is available on your platform of choice (Linux, UNIX, or mainframe). It is also competitively priced: the starting price for the 5710 is under $50k, just like the Oracle appliance. However, the IBM system includes all of the necessary software, whereas with the Oracle appliance you must purchase the Oracle Database software separately, and that software is not exactly inexpensive.
If you want to learn more, please visit the IBM Smart Analytics Systems Web page.
Last week, the Americas SAP User Group (ASUG) hosted a Webcast titled Optimize your SAP Environment While Reducing Costs. The Webcast was delivered by the inimitable Ray Wang, Principal Analyst and CEO at Constellation Research, together with Jack Mason from SAP and Larry Spoerl from IBM. It discusses how to deliver world-class performance for SAP applications whilst reducing the Total Cost of Ownership (TCO) of the underpinning infrastructure. It is full of great practical advice, including direct comparisons of IBM DB2 and Oracle Database as part of that infrastructure. For more information, and for the commentary that goes with the following chart, check out the Optimize your SAP Environment While Reducing Costs Webcast Materials:
Wikipedia defines autonomic computing as “the self-managing characteristics of distributed computing resources, adapting to unpredictable changes whilst hiding intrinsic complexity to operators and users”. Both IBM and Oracle have added autonomic computing features to their database software products. On 29 September 2011, IBM will host a Chat with the Labs webcast where the hosts will compare the autonomic computing features of IBM DB2 and Oracle Database in the following areas:
- Memory Management
- Storage Management
- Utility throttling
- Automatic Configuration
- Automatic Maintenance
You can sign up for the webcast at: DB2 and Oracle Database: An Autonomic Computing Comparison.
I just watched the Oracle Webcast announcing its new database appliance. Here are my initial reactions.
I expected Oracle to announce a mini-Exadata, as had been widely rumored. However, as far as I can see, this is not a mini-Exadata. I don’t have details yet, but it does not appear to contain the Exadata storage-layer software. This is simply an Oracle Database appliance, or an Oracle RAC appliance. Nothing more. In other words, it is the fusion of Oracle software, Oracle hardware, and some support/services.
Because this announcement is really about making Oracle Database easier to deploy, I’m not sure it has much applicability for organizations with an existing Oracle Database set-up, unless they are planning a hardware migration. But judging from how this was presented and positioned, Oracle are probably focusing this product on channel sales, and making this as partner-friendly as possible.
I like that Oracle have followed IBM’s lead and added pay-as-you-grow licensing/pricing. In this appliance, you activate and license CPU cores as needed. Of course, IBM pureScale Application System already offers this. However, IBM does still have an advantage in this regard with its ability to seamlessly add or remove database processing capacity, with a combination of its transparent scaling and daily-based software licensing. In other words, you can purchase the DB2 pureScale database software licenses only for the days where you need the extra capacity. This is a great strategy for eliminating the over-provisioning of database software licenses just to deal with situations where there are significant short-term spikes in demand (like retailers around the holidays, for instance). For more information, check out the “flexible licensing” section of the following blog post: IBM Previews New Integrated System for Transactional Workloads.
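To see why daily licensing matters financially, here is a back-of-the-envelope sketch. Every price and capacity figure in it is hypothetical, invented purely to illustrate the over-provisioning problem; these are not IBM or Oracle list prices:

```python
# Hypothetical figures for illustration only -- not actual vendor pricing.
PERPETUAL_LICENSE_PER_CORE = 10_000   # one-time cost to license a core permanently
DAILY_LICENSE_PER_CORE = 50           # cost to license a core for a single day

baseline_cores = 8        # capacity needed year-round
peak_extra_cores = 8      # extra capacity needed only during spikes
peak_days_per_year = 30   # e.g. a retailer's holiday season

# Over-provisioning: license the full peak capacity permanently.
overprovisioned = (baseline_cores + peak_extra_cores) * PERPETUAL_LICENSE_PER_CORE

# Pay-as-you-go: perpetual licenses for the baseline, daily licenses for the spike.
pay_as_you_go = (baseline_cores * PERPETUAL_LICENSE_PER_CORE
                 + peak_extra_cores * peak_days_per_year * DAILY_LICENSE_PER_CORE)

print(f"Over-provisioned: ${overprovisioned:,}")   # $160,000
print(f"Pay-as-you-go:    ${pay_as_you_go:,}")     # $92,000
```

The exact break-even point obviously depends on real prices and real spike durations, but the shape of the saving is the same: the shorter the spike, the bigger the win from licensing by the day.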
Something that’s not clear yet are the growth options for these Oracle Database Appliances. At least from my initial look, there does not seem to be a seamless upgrade path to Exadata. It appears that if someone wants to grow from this appliance, it will not be an insignificant undertaking. Perhaps someone from Oracle can comment on that.
So, in summary, this announcement appears to simply be a packaging exercise by Oracle, where they have created a relatively straightforward database appliance. Your thoughts/reactions?
Informix has had TimeSeries data management capabilities for more than a decade. However, those capabilities are garnering more attention today than ever before. As our world is becoming more instrumented, there is an increasing need to manage data from sensors. And this data from sensors is often being generated at intervals, creating the need for time series analysis.
As I’ve just said, Informix has long had TimeSeries capabilities. However, it wasn’t until recent customer evaluations became public knowledge that the incredible performance of Informix for time series applications became apparent. And now, as a result, Informix is being touted for Smarter Planet-type solutions, including Smart Grid systems.
I mentioned recent customer evaluations. One of those was at ONCOR, a provider of electricity to millions of people in Texas. ONCOR compared Oracle Database and Informix for its temporal data management and analysis needs. They discovered that, for their usage, Informix is 20x faster than Oracle Database when it comes to loading data from Smart Meters. They discovered that Informix is up to 30x faster than Oracle Database for their time series queries. And ONCOR discovered that, before applying data compression, Informix has storage savings of approximately 70% when compared with Oracle Database.
If you are using Oracle Database for a time series application, you should certainly consider Informix. You may significantly improve performance, while at the same time lowering your server, software, and storage costs. The secret sauce is in the way that Informix stores and accesses this time series data. The Informix approach is unique among relational database systems. It stores information that indicates the data source only one time, and then stores the time-stamped values for that source in an infinitely wide column beside it. This approach results in both the storage savings and the huge performance gains.
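As a conceptual sketch of that storage difference (plain Python data structures, not actual Informix TimeSeries syntax; the meter names, timestamps, and readings are invented for the example):

```python
# Conventional relational layout: the source identifier is repeated on every row.
conventional_rows = [
    ("meter-42", "2011-06-01T00:00", 1.3),
    ("meter-42", "2011-06-01T00:15", 1.4),
    ("meter-42", "2011-06-01T00:30", 1.2),
]

# Time-series-style layout: the source is stored once, with its time-stamped
# readings kept together in an ordered container alongside it.
timeseries_row = ("meter-42", [
    ("2011-06-01T00:00", 1.3),
    ("2011-06-01T00:15", 1.4),
    ("2011-06-01T00:30", 1.2),
])

# The source identifier is stored N times in the first layout but only once in
# the second, and a range scan over one meter reads a single row instead of N.
id_copies = sum(1 for row in conventional_rows if row[0] == "meter-42")
print(f"ID stored {id_copies}x conventionally, 1x in the time-series layout")
```

This is only a toy model of the idea, but it shows where both the storage savings and the query speed-ups come from: less repeated data, and one meter's readings clustered together rather than scattered across many rows.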
If you want to read more about Informix and the management of time series data, check out these recent blog posts: This Smart Meter Stuff is for Real and Using Informix to Capture TimeSeries Data that Overwhelms Commodity Databases.
When IBM DB2 first added syntax that is compatible with Oracle Database data types, SQL, PL/SQL, scripting, and more, Gartner wrote their “first take” on the technology in IBM DB2 9.7 Shakes Up the DBMS Market With Oracle Compatibility. More than two years later, Gartner are now following up with a research report on the features. You can read the research report at: IBM DB2’s Maturing Oracle Compatibility Presents Opportunities, with some Limitations. This is not a commissioned report. It is independent research from Gartner.
Do Oracle check their facts before they issue a press release? Because today there is yet another instance of a blatant mistruth issued by Oracle. This time in an official Oracle press release about an SAP benchmark result. Here is the offending quote:
“Oracle’s superior scalable cluster architecture has full high availability unlike IBM’s that does not scale beyond a single server“
The quote is attributed to Juan Loaiza, Senior Vice President, Systems Technology at Oracle Corporation. Now it is possible that this is not a case of Oracle intentionally trying to mislead the public. It is possible that this is a case of poor fact-checking from Oracle. And if that is the case, then they should have checked yesterday’s SAP benchmark results when an IBM DB2 cluster took top spot in another SAP benchmark.
For the record, IBM DB2 has outstanding scale-out capabilities. IBM DB2 provides both shared-nothing partitioning scale-out capabilities as well as shared-disk clustering scale-out capabilities. Many would argue that IBM DB2 has significantly superior scale-out capabilities when compared with Oracle Database. Especially when it comes to scale-out efficiency.
Note: When this first came to light, I was a little upset. After taking a little time to calm down, I updated some statements in this blog post to tone them down. Thankfully, I probably did this before anyone got a chance to read them :-)
Here is a video where Philip Howard, Research Director at Bloor Research, evaluates performance, scalability, administration, and cost considerations for IBM Smart Analytics System and Oracle Exadata [for data warehouse environments]. This video is packed with great practical advice for evaluating these products.
Philip Howard, Research Director at Bloor Research, recently evaluated the performance, scalability, administration, and cost considerations for the leading integrated systems from IBM and Oracle for OnLine Transaction Processing (OLTP) environments. Here is a summary of his conclusions:
And here is a video with his evaluation. It is packed with practical advice regarding storage capacity, processing capacity, and more.
The NoSQL movement has garnered a lot of attention recently. It has been built around a number of emerging highly-scalable non-relational data stores. The movement is also providing a new lease of life for smaller non-relational database vendors who have been around for a while.
Last week, I noticed an entire track for XML and XQuery sessions at the recent NoSQLNow Conference in San Jose. If XML databases and XQuery are key constituents of the NoSQL world, does that mean that IBM DB2 and Oracle Database should be included in the NoSQL movement? After all, both IBM DB2 and Oracle Database store XML data and provide XQuery interfaces. Of course, I’m not being serious here. I don’t believe that the bastions of the relational world should be included in the NoSQL community. Are native XML databases, which have been around for a while, really in the spirit of the NoSQL movement? What’s your opinion?
I believe that the boundaries of the NoSQL community are perhaps a bit looser than they should be. Essentially, absolutely everything except relational databases is being grouped under the NoSQL banner. I can understand how this has happened, but do the NoSQL community really want to dilute their message by including all of these technologies, most of which have been around for quite some time and have had relatively limited traction? In the spirit of what I believe is at the genesis of the current NoSQL movement, I reckon that a NoSQL solution should have the following characteristics:
– Not be based on the relational model
– Have little or no acquisition cost
– Be designed to run on commodity hardware
– Use a distributed architecture
– Support extreme or Web-scale databases
Notice that I don’t include a characteristic based on lack of consistency. I reckon that, over time, consistency will become a characteristic of some NoSQL environments.
By the way, earlier in this blog post I referred to the XML and XQuery capabilities in IBM DB2 and Oracle Database. In case you are curious, there is a significant difference in how DB2 and Oracle Database have incorporated XML capabilities into their respective products: Oracle essentially leverages its existing relational infrastructure to provide several ways to store XML data, while IBM built true native XML storage capabilities into its product. In other words, DB2 is indeed a true “native XML store”. I used to blog about native XML storage over at www.nativeXMLdatabase.com, before handing the reins over to Matthias Nicola. If you want a little more insight into XML support in Oracle Database, check out XML in Oracle 11g and Why Won’t Oracle Publish XML Benchmark Results for TPoX?