Archive for the ‘DB2 for LUW’ Category
- Performance is up to 3.3x faster than previous release for complex query workloads*
- The new Adaptive Compression has provided 7x or greater overall space savings for more than one client, with some tables achieving 10x space savings**
- In DB2 10 Early Access Program testing, DB2 obtained an average of 98% compatibility with Oracle PL/SQL***
- DB2 NoSQL Graph Store Accelerates Rational Use Case by up to 3.5x****
Check out the following great video from one of our Early Access Program participants:
For more information about these releases, make sure to visit the Launch Virtual Event.
* Based on internal tests of IBM DB2 9.7 FP3 vs. DB2 10.1 with new compression features on P6-550 systems with comparable specifications using data warehouse / decision support workloads, as of 4/3/2012.
** Based on client testing in the DB2 10 Early Access Program.
*** Based on internal tests and reported client experience from 28 Sep 2011 to 07 Mar 2012.
**** Based on internal benchmark tests of Rational Jazz graph store usage, comparing DB2 10 Graph Store with Jena TDB version 0.8.10.
JSC Rietumu Banka is one of the largest banks in the Baltic states. They recently migrated their data from Oracle Database on Sun servers to IBM DB2 on Power Systems servers, and enjoyed the following benefits:
- Up to 30 times faster query performance
- 20-30% reduction in total cost of ownership
- 200% improvement in data availability
Like many major banks, JSC Rietumu Banka faced recent pressure to reduce IT costs. In particular, they were concerned with total cost of hardware, software, and staffing for their banking applications which used Oracle Database on Sun servers. After a thorough technical and financial evaluation, JSC Rietumu Banka chose to migrate their environment to DB2 on Power Systems servers.
Of course, the ease of migration was a significant factor in JSC Rietumu Banka being able to achieve these benefits. For more information about the “compatibility features” that make it easy to migrate from Oracle Database to IBM DB2, see Gartner: IBM DB2's Maturing Oracle Compatibility Presents Opportunities, with some Limitations.
To learn more about this specific migration, read the full IBM case study.
At the 2011 IBM Information On Demand (IOD) Conference, Coca Cola Bottling spoke about their experiences when moving from Oracle Database to IBM DB2. I have included some very brief video segments shot at the conference below. It is really interesting to hear about the experiences and impact of switching from Oracle to IBM from the people involved.
In the following short video segment, hear how Coca Cola Bottling have changed their fix pack philosophy as a result of moving. With Oracle Database, they would avoid fix packs unless they “had to”. But with DB2, applying fix packs is much easier and faster, providing faster access to new functionality, performance improvements, and bug fixes. Also, hear about how Coca Cola Bottling have had significant data storage savings thanks to moving to DB2. Who wouldn’t want to reclaim some of that IT budget allocated for storage purchases?
And finally, hear about their experiences with performance boosts and the autonomic computing capabilities in DB2.
The International DB2 User Group (IDUG) is a user-run organization. If you want independent information about DB2, IDUG is the place to go. This year, IDUG are holding conferences in the US (Denver), Germany (Berlin), and Australia (Sydney). The good news is that the DB2night Show is holding a contest, and the prize is an all-expenses-paid trip to the IDUG conference of your choice. The contest aims to identify new users who can speak about their experiences with DB2. It’s a talent contest of sorts, where the talent is sharing your experiences. If you have ever considered speaking at a conference, this contest is the ideal way to see how you might do in a fun setting.
Yesterday, Oracle announced a new TPC-C benchmark result. They claim:
In this benchmark, the Sun Fire X4800 M2 server equipped with eight Intel® Xeon® E7-8870 processors and 4TB of Samsung’s Green DDR3 memory, is nearly 3x faster than the best published eight-processor result posted by an IBM p570 server equipped with eight Power 6 processors and running DB2. Moreover, Oracle Database 11g running on the Sun Fire X4800 M2 server is nearly 60 percent faster than the best DB2 result running on IBM’s x86 server.
Let’s have a closer look at this claim, starting with the first part: “nearly 3x faster than the best published eight-processor result posted by an IBM p570 server“. Interestingly, Oracle do not lead by comparing their new leading x86 result with IBM’s leading x86 result. Instead they choose to compare their new result to an IBM result from 2007, exploiting the fact that even though this IBM result was on a different platform, it uses the same number of processors. Of course, we all know that the advances in hardware, storage, networking, and software technology over half a decade are simply too great to form any basis for reasonable comparison. Thankfully, most people will see straight through this shallow attempt by Oracle to make themselves look better than they are. I cannot imagine any reasonable person claiming that Oracle’s x86 solutions offer 3x the performance of IBM’s Power Systems solutions, when comparing today’s technology. I’m sure most people will agree that this first comparison is simply meaningless.
Okay, now let’s look at the second claim: “nearly 60 percent faster than the best DB2 result running on IBM’s x86 server“. Oracle now compare their new leading x86 result with IBM’s leading x86 result. However, if you look at the benchmark details, you will see that IBM’s result uses half the number of CPU processors, CPU cores, and CPU threads. If you look at performance per core, the Oracle result achieves 60,046 tpmC per CPU core, while the IBM result achieves 75,367 tpmC per core. While Oracle claims to be 60% faster, if you take into account relevant system size and determine the performance per core, IBM is actually 25% faster than Oracle.
Finally, let’s not forget the price/performance metric from these benchmark results. This new Oracle result achieved US$.98/tpmC, whereas the leading IBM x86 result achieved US$.59/tpmC. That’s correct, when you determine the cost of processing each transaction for these two benchmark results, IBM is 39% less expensive than Oracle. (BTW, I haven’t had a chance yet to determine if Oracle used their usual TPC price/performance tactics for this benchmark result, as the result details are not yet available to me; but if they have, the IBM system will prove to be even less expensive than the Oracle system.)
Benchmark results are as of January 17, 2012: Source: Transaction Processing Performance Council (TPC), http://www.tpc.org.
Oracle result: Oracle Sun Fire X4800 M2 server (8 chips/80 cores/160 threads) – 4,803,718 tpmC, US$.98/tpmC, available 06/26/12.
IBM results: IBM System p 570 server (8 chips/16 cores/32 threads) – 1,616,162 tpmC, US$3.54/tpmC, available 11/21/2007. IBM System x3850 X5 (4 chips/40 cores/80 threads) – 3,014,684 tpmC, US$.59/tpmC, available 09/22/11.
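The per-core and price/performance comparisons above can be reproduced directly from the published figures quoted in the benchmark details; here is a quick sketch of the arithmetic in Python:

```python
# Per-core throughput and price/performance, computed from the published
# TPC-C figures quoted above (tpmC = transactions per minute, type C).
oracle = {"tpmC": 4_803_718, "cores": 80, "usd_per_tpmC": 0.98}   # Sun Fire X4800 M2
ibm_x86 = {"tpmC": 3_014_684, "cores": 40, "usd_per_tpmC": 0.59}  # IBM System x3850 X5

oracle_per_core = oracle["tpmC"] / oracle["cores"]  # ~60,046 tpmC per core
ibm_per_core = ibm_x86["tpmC"] / ibm_x86["cores"]   # ~75,367 tpmC per core

# IBM's per-core advantage (~25%) and price/performance advantage (~40%)
per_core_advantage = ibm_per_core / oracle_per_core - 1
price_advantage = 1 - ibm_x86["usd_per_tpmC"] / oracle["usd_per_tpmC"]

print(f"Oracle: {oracle_per_core:,.0f} tpmC/core")
print(f"IBM:    {ibm_per_core:,.0f} tpmC/core")
print(f"IBM per-core advantage:          {per_core_advantage:.1%}")
print(f"IBM price/performance advantage: {price_advantage:.1%}")
```

Dividing throughput by core count, rather than comparing raw totals, is what reveals that the smaller IBM configuration actually does more work per core at a lower cost per transaction.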
Cloud computing is certainly a hot topic these days. If an organization is not already using cloud computing, it has plans to do so. The economics, agility, and value offered by cloud computing are just too persuasive for IT organizations to ignore.
Even the high-profile Amazon outage couldn’t slow cloud computing’s relentless march towards mainstream adoption. If anything, that outage helped make cloud computing more robust by highlighting the need for hardened policies and procedures around provisioning in the cloud.
IBM recently announced updates to a set of products that make it easy to deploy DB2 and InfoSphere Warehouse on private clouds:
- IBM Workload Deployer (previously known as WebSphere CloudBurst), which is a hardware/software appliance that streamlines the deployment and management of software on private clouds.
- IBM Transactional Database Pattern, which works with the IBM Workload Deployer to generate DB2 instances that are suitable for transactional workloads.
- IBM Data Mart Pattern, which generates InfoSphere Warehouse instances for data mart workloads.
These patterns do more than just deploy virtual images with pre-configured software. You should instead think of them as being like mini-applications for configuring and deploying cloud-based database instances. Users specify information about the database, and then the pattern builds and deploys the database instance.
The Transactional Database Pattern is for OLTP deployments. It includes templates for sizing the virtual machine, database backup scheduling, database deployment cloning capabilities, and tooling (including Data Studio). The Data Mart Pattern incorporates the features of the OLTP pattern, together with deep compression and data movement tools. But, of course, it is configured and optimized for data mart workloads in a virtual environment.
I’m still in the afterglow of the International DB2 User Group (IDUG) conference in Prague, Czech Republic. It was another great conference at a great facility in a great city. The conference organizers should be commended on a truly outstanding event. It’s incredible to think that the conference organizers are user volunteers, and not professional conference planners! I’m already looking forward to the next IDUG EMEA conference in Berlin next year. If you are interested in a more in-depth discussion of the conference, including lessons learned from the technical sessions, Norberto Filho will be appearing on the DB2Night show on Friday 02 December 2011. Even if you were at the conference, there was so much happening there that you are sure to learn something new from Norberto’s experiences.
IBM recently revealed its plan to integrate certain NoSQL capabilities into IBM DB2 and Informix. In particular, it is working to integrate graph store and key:value store capabilities into the flagship IBM database products. IBM is not yet indicating when these new capabilities will be available.
IBM does not plan to integrate all NoSQL technologies into DB2 and Informix. After all, there are many NoSQL technologies, and quite a few of them are clearly not suitable for integration into IBM’s products. The following chart summarizes the NoSQL product landscape. This landscape includes more than 100 products across a number of database categories. IBM is saying that they will integrate certain NoSQL capabilities into their products and work hand-in-hand with other NoSQL technologies.
Readers of this blog will know that these developments are consistent with my view that certain NoSQL technologies will eventually find themselves integrated into the major relational database products. In much the same way as the major relational database products fended off the challenge of object databases by adding features like stored procedures and user-defined functions, I expect the major relational database products to fend off the NoSQL challenge with similar tactics. And don’t forget that the major relational database products have already integrated XML capabilities, providing XQuery as an alternate query language. It’s not too much of a stretch to imagine how several of these NoSQL capabilities might be supported in an optimized way as part of a relational database product.
I look forward to blogging more about this topic as news about it emerges…
Here are my personal top 10 reasons to attend the upcoming International DB2 User Group (IDUG) conference in Prague, Czech Republic this November.
- 100+ of the best technical sessions about DB2, featuring IBM developers, industry experts, and users like you
- IBM keynote on the future of relational database software
- Official IBM certification tests at no additional cost
- Pre-conference seminars on preparing for DB2 certification tests at no additional cost
- Pre-conference workshop on preparing for DB2 10 for z/OS upgrades at no additional cost
- Conference exhibit hall with the world’s top DB2 tool vendors, consulting firms and solution providers
- Post-conference day-long educational seminars
- It’s a great way to meet and get to know fellow DB2 users
- It’s a great way to speak directly with the DB2 developers
- Prague is one of the most beautiful cities in the world
Registration is now open at http://bit.ly/IDUGEMEA. If you register before 17 October 2011, you can take advantage of the early bird discount and save 275 Euro + VAT.
Last week, the Americas SAP User Group (ASUG) hosted a Webcast titled Optimize your SAP Environment While Reducing Costs. The Webcast was delivered by the inimitable Ray Wang, Principal Analyst and CEO at Constellation Research, together with Jack Mason from SAP and Larry Spoerl from IBM. The Webcast discusses how to deliver world-class performance for SAP applications, whilst reducing the Total Cost of Ownership (TCO) of the underpinning infrastructure. It is full of great practical advice, including direct comparisons of IBM DB2 and Oracle Database as part of that underpinning infrastructure. For more information, and for the commentary that goes with the following chart from this Webcast, make sure to check it out at Optimize your SAP Environment While Reducing Costs Webcast Materials:
Wikipedia defines autonomic computing as “the self-managing characteristics of distributed computing resources, adapting to unpredictable changes whilst hiding intrinsic complexity to operators and users”. Both IBM and Oracle have added autonomic computing features to their database software products. On 29 September 2011, IBM will host a Chat with the Labs webcast where the hosts will compare the autonomic computing features of IBM DB2 and Oracle Database in the following areas:
- Memory Management
- Storage Management
- Utility throttling
- Automatic Configuration
- Automatic Maintenance
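As a small illustration of what DB2's autonomic features look like in practice, several of the areas listed above are controlled through configuration parameters. The following is a sketch of the relevant CLP commands, assuming a database named SAMPLE (parameter names are as documented for DB2 9.7; this is illustrative, not a tuning recommendation):

```shell
# Memory management: let DB2's self-tuning memory manager redistribute
# memory among consumers (buffer pools, sort heap, lock list) automatically
db2 update db cfg for SAMPLE using SELF_TUNING_MEM ON
db2 update db cfg for SAMPLE using DATABASE_MEMORY AUTOMATIC

# Automatic maintenance: enable background table maintenance,
# including automatic statistics collection (RUNSTATS)
db2 update db cfg for SAMPLE using AUTO_MAINT ON AUTO_TBL_MAINT ON AUTO_RUNSTATS ON

# Utility throttling: limit the performance impact of utilities such as
# BACKUP to roughly 10% of the production workload (instance-level setting)
db2 update dbm cfg using UTIL_IMPACT_LIM 10
```

The webcast compares how each vendor approaches these knobs; the point of the autonomic features is that, once enabled, the database adjusts them without ongoing DBA intervention.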
You can sign up for the webcast at: DB2 and Oracle Database: An Autonomic Computing Comparison.
When IBM DB2 first added syntax that is compatible with Oracle Database data types, SQL, PL/SQL, scripting, and more, Gartner wrote their “first take” on the technology in IBM DB2 9.7 Shakes Up the DBMS Market With Oracle Compatibility. More than two years later, Gartner are now following up with a research report on the features. You can read the research report at: IBM DB2's Maturing Oracle Compatibility Presents Opportunities, with some Limitations. This is not a commissioned report. It is independent research from Gartner.
The deadline for submitting proposals for presentations at next year’s DB2 Tech Conference in Denver, Colorado is fast approaching. Make sure to get your proposals in by 14 October 2011. You can submit your proposals on the International DB2 User Group Web site at Call For Presentations. If you look at the Web site, you will see the list of potential topics, as well as guidelines for the presentations. Essentially, the organizers are looking for presentations on almost every aspect of working with DB2. If you have experiences to share, presenting at the conference is a great way to get a complimentary pass to the conference.
Do Oracle check their facts before they issue a press release? Because today there is yet another instance of a blatant mistruth issued by Oracle, this time in an official Oracle press release about an SAP benchmark result. Here is the offending quote:
“Oracle’s superior scalable cluster architecture has full high availability unlike IBM’s that does not scale beyond a single server“
The quote is attributed to Juan Loaiza, Senior Vice President, Systems Technology at Oracle Corporation. Now it is possible that this is not a case of Oracle intentionally trying to mislead the public. It is possible that this is a case of poor fact-checking from Oracle. And if that is the case, then they should have checked yesterday’s SAP benchmark results, in which an IBM DB2 cluster took the top spot in another SAP benchmark.
For the record, IBM DB2 has outstanding scale-out capabilities. IBM DB2 provides both shared-nothing partitioning scale-out capabilities as well as shared-disk clustering scale-out capabilities. Many would argue that IBM DB2 has significantly superior scale-out capabilities when compared with Oracle Database, especially when it comes to scale-out efficiency.
Note: When this first came to light, I was a little upset. After taking a little time to calm down, I updated some statements in this blog post to tone them down. Thankfully, I probably did this before anyone got a chance to read them.
A couple of years ago, IBM introduced the pureScale feature, which provides application cluster transparency (allowing you to create shared-disk database clusters). At the time, IBM had taken their industry-leading clustering architecture from the mainframe, and brought it to Unix environments. IBM subsequently also brought it to Linux environments.
Today, IBM announced its first public industry benchmark result for this cluster technology. IBM achieved a record result for the SAP Transaction Banking (TRBK) Benchmark, processing more than 56 million posting transactions per hour and more than 22 million balanced accounts per hour. The results were achieved using IBM DB2® 9.7 on SUSE Linux® Enterprise Server. The cluster contained five IBM System x 3690 X5 database servers, and used the IBM System Storage® DS8800 disk system. The servers were configured to take over workload in case of a single system failure, thereby supporting high application availability. For more details, see the official certification from SAP.