Archive for the ‘DB2 for z/OS’ Category
The International DB2 User Group (IDUG) is a user-run organization. If you want independent information about DB2, IDUG is the place to go. This year, IDUG is holding conferences in the US (Denver), Germany (Berlin), and Australia (Sydney). The good news is that The DB2Night Show is holding a contest, and the prize is an all-expenses-paid trip to the IDUG conference of your choice. The contest aims to identify new users who can speak about their experiences with DB2. It’s a talent contest of sorts, where the talent is sharing your experiences. If you have ever considered speaking at a conference, this contest is the ideal way to see how you might do in a fun setting.
I’m still in the afterglow of the International DB2 User Group (IDUG) conference in Prague, Czech Republic. It was another great conference at a great facility in a great city. The conference organizers should be commended on a truly outstanding event. It’s incredible to think that the conference organizers are user volunteers, and not professional conference planners! I’m already looking forward to the next IDUG EMEA conference in Berlin next year. If you are interested in a more in-depth discussion of the conference, including lessons learned from the technical sessions, Norberto Filho will be appearing on The DB2Night Show on Friday, 02 December 2011. Even if you were at the conference, there was so much happening there that you are sure to learn something new from Norberto’s experiences.
Now that the IBM Information on Demand (IOD) and International DB2 User Group (IDUG) conferences are behind me, I have time to blog about some of the great announcements from those conferences. Probably the announcement that generated the most interest among conference attendees is the new release of the IBM DB2 Analytics Accelerator (IDAA). This product takes advantage of Netezza to accelerate analytics queries on DB2 for z/OS.
Here is how it works: you specify the data whose analysis you want to speed up, and a copy of that data is placed on Netezza (DB2 for z/OS remains the system of record for all data). Then, when DB2 for z/OS receives a query, the optimizer determines whether that query should be handled by DB2 for z/OS or by IBM Netezza. Here is a chart from the IDUG conference that summarizes the query execution flow.
Conceptually, you could almost think of the IBM DB2 Analytics Accelerator as a mainframe specialty processor for analytics. I know it’s not actually a specialty processor, but it does perform the processing involved with complex analytics queries. It also makes life easier for database administrators who often struggle with long-running complex queries, by providing them with an accelerator that does not require additional tuning. To see how much faster it is, here is another chart from the IDUG conference. It shows the experiences of IBM DB2 Analytics Accelerator Beta program participants.
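To give a flavor of how this surfaces to applications, acceleration eligibility can be controlled at the session level through the QUERY ACCELERATION special register; the sketch below is illustrative only (the table and column names are hypothetical, and your installation’s settings determine what is actually eligible):

```sql
-- Allow eligible dynamic queries in this session to be routed to the
-- accelerator, falling back to DB2 for z/OS if acceleration fails:
SET CURRENT QUERY ACCELERATION = ENABLE WITH FAILBACK;

-- A complex analytics query like this may now run on the accelerator,
-- while short transactional queries continue to run natively in DB2:
SELECT region, SUM(sales_amount)
FROM sales_history
GROUP BY region;
```

The point of the design is that the application changes nothing about its SQL; the decision of where to run each query is made by the optimizer.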
If you run complex analytical queries on DB2 for z/OS, it is almost certainly worth your while to learn more about the IBM DB2 Analytics Accelerator.
Here are my personal top 10 reasons to attend the upcoming International DB2 User Group (IDUG) conference in Prague, Czech Republic this November.
- 100+ of the best technical sessions about DB2, featuring IBM developers, industry experts, and users like you
- IBM keynote on the future of relational database software
- Official IBM certification tests at no additional cost
- Pre-conference seminars on preparing for DB2 certification tests at no additional cost
- Pre-conference workshop on preparing for DB2 10 for z/OS upgrades at no additional cost
- Conference exhibit hall with the world’s top DB2 tool vendors, consulting firms and solution providers
- Post-conference day-long educational seminars
- It’s a great way to meet and get to know fellow DB2 users
- It’s a great way to speak directly with the DB2 developers
- Prague is one of the most beautiful cities in the world
Registration is now open at http://bit.ly/IDUGEMEA. If you register before 17 October 2011, you can take advantage of the early bird discount and save 275 Euro + VAT.
The International DB2 User Group (IDUG) is presenting a free webcast featuring the most popular presentation from the most recent IDUG DB2 Tech Conference, as voted by attendees at the conference. Suresh Sane will present A DB2 10 Customer’s Experience, describing his experiences with DB2 10 for z/OS, including:
- How new SQL features help
- How hash access speeds up queries against large tables
- How access path determination is now smarter
- How concurrency is improved without sacrificing integrity
- How temporal tables simplify code
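As a taste of the hash-access topic in the list above: DB2 10 lets you organize a table by a hash of a unique key, so an equality lookup on that key can locate the row directly rather than probing an index. A minimal sketch (the table and column names are hypothetical):

```sql
-- Hypothetical account table organized for direct hash access on its key
CREATE TABLE account (
  acct_id   BIGINT NOT NULL,
  balance   DECIMAL(15,2)
)
ORGANIZE BY HASH UNIQUE (acct_id);

-- An equality predicate on the hash key can now fetch the row without
-- traversing an index, which is what speeds up large-table lookups:
SELECT balance FROM account WHERE acct_id = 42;
```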
This webcast is a must-see for Database Administrators and Application Developers. It is filled with rich content, helpful hints and tips. As a special bonus, everyone who registers for the Webcast will receive a complimentary copy of the Business Value of DB2 10 – Smarter Database for a Smarter Planet report by Julian Stuhler, Triton Consulting. The Webcast will take place at 11am ET on Wednesday, 02 November 2011. To register for the Webcast, please go to DB2 10 Application Topics—A DB2 10 Customer’s Experience.
The American Association of Retired Persons (AARP) recently paired Netezza with their transactional environment, which includes DB2 for z/OS, and achieved remarkable results. Often when you read customer success stories, you are bombarded with metrics. The AARP success story has those metrics:
- 1400% improvement in data loading time
- 1700% improvement in report response time
- 347% ROI in three years
But metrics tell only part of the story. And sometimes the story gets a lot more interesting when you dig a little deeper.
AARP had been using Oracle Database for their data warehouse. But their system simply could not keep up with the demand. As Margherita Bruni from AARP says, “our analysts would run a report, then go for coffee or for lunch, and, maybe if they were lucky, by 5:00 p.m. they would get the response—it was unacceptable—the system was so busy writing the new daily data that it didn’t give any importance to the read operations performed by users.” The stresses on their system were so great that in 2009 alone, their Oracle Database environment had more than 30 system failures. To compound matters, these system performance issues meant that full backups were not possible. Instead, AARP would back up only a few critical tables, which is a less-than-desirable disaster recovery scenario. Clearly, something had to be done.
AARP chose to move their 36 TB data warehouse to Netezza. You can see from the metrics above that they achieved remarkable performance improvements. But what do those performance improvements mean? Well, for the IT staff, they mean that they are relieved of a huge daily burden. Their old system required one full-time database administrator (DBA) and one half-time SAN network support person. These people are now, for the most part, free to work on other projects. And more importantly, they don’t have to deal with the stress of the old environment any more.
But the benefits are not being enjoyed only by the IT staff. They are also being enjoyed by the business analysts, who according to Bruni “could not believe how quickly results were provided—they were so shocked that their work could be accomplished in a matter of hours rather than weeks that, initially, they thought data was cached.” She goes on to say that “one analyst, who is now a director, told us that he used the extra time for other projects, which ultimately helped him become more successful and receive a promotion.” Now that is what I call a great impact statement. The metrics are great, but when someone is freed up to do work that gets them a promotion, that’s a very tangible illustration of the difference that Netezza can make.
Another illustration of the difference is the impact it had on the group that implemented the Netezza system. As Bruni says, “after we moved to IBM Netezza, the word spread that we were doing things right and that leveraging us as an internal service was really smart; we’ve gained new mission-critical areas, such as the social-impact area which supports our Drive to End Hunger and Create the Good campaigns.” It certainly looks like you can add IT management to the list of constituents who have had a positive career impact as a result of moving from Oracle Database to IBM Netezza.
For more information about this story, see AARP: Achieving a 347 percent ROI in three years from BI modernization effort.
Any of you who are familiar with DB2 on the mainframe (officially known as DB2 for z/OS) know how efficient it is. The mainframe is not for every organization. However, for those organizations for whom the mainframe is a good fit, the tremendous levels of efficiency, reliability, availability, and security directly translate into significant cost savings.
Database software on the mainframe may be relatively boring when compared with the data management flavor of the day (whether it is Hadoop or any of the other technologies associated with Big Data). But when it comes to storing mission-critical transactions, nothing beats the ruthless efficiency of the mainframe. And that boring, ruthless efficiency has been winning over organizations.
Earlier this year, eWeek reported how the IBM Mainframe Replaces HP, Oracle Systems for Payment Solutions. In this article, eWeek describes how Payment Solution Providers (PSP) from Canada chose DB2 for z/OS over Oracle Database on HP servers. A couple of items in this article really catch the eye. One is that the operational efficiencies of the mainframe are expected to lower IT costs up to 35 percent for PSP. The other is that PSP’s system can now process up to 5,000 transactions per second.
Another organization that moved in the same direction is BC Card, Korea’s largest credit card company. The Register ran a story about how a Korean bank dumps Unix boxes for mainframes. BC Card is a coalition of 11 South Korean banks that handles credit card transactions for 2.62 million merchants and 40 million credit card holders in the country. They dumped their HP and Oracle Sun servers in favor of an IBM mainframe. In an accompanying IBM press release, it was revealed that IBM scored highest in every benchmark test category from performance to security to flexibility. Another significant factor in moving to the mainframe is the combination of the utility pricing that lets customers activate and deactivate mainframe engines on demand, together with software pricing that scales up and down with capacity.
Despite continual predictions of its demise, it has been reported that the mainframe has experienced one of its best years ever, with an increase in usage (well, technically MIPS) of 86% over the same time in 2010. Much of this growth is coming from customers new to the mainframe. In fact, since the System z196 mainframe started shipping in the third quarter of 2010, IBM has added 68 new mainframe customers, with more than two-thirds of them consolidating from distributed systems.
It may not be as exciting as the newest technology on the block, but it is difficult to beat the reliability and efficiency of the mainframe. Especially when you are faced with the realities of managing a relatively large environment, and all of the costs associated with doing so. And don’t forget, the mainframe can provide you with a hierarchical store, a relational store, or a native XML store. And when you combine the security advantages and the 24×7 availability, together with cost efficiency, it makes for an interesting proposition.
The deadline for submitting proposals for presentations at next year’s DB2 Tech Conference in Denver, Colorado is fast approaching. Make sure to get your proposals in by 14 October 2011. You can submit your proposals on the International DB2 User Group Web site at Call For Presentations. If you look at the Web site, you will see the list of potential topics, as well as guidelines for the presentations. Essentially, the organizers are looking for presentations on almost every aspect of working with DB2. If you have experiences to share, presenting at the conference is a great way to get a complimentary pass to the conference.
The International DB2 User Group (IDUG) and IBM are offering a complimentary workshop for DB2 for z/OS clients who are planning to migrate to DB2 10. This workshop will help attendees maximize the business benefits and cost savings associated with moving to DB2 10; it will also ensure that they are adopting IBM best practices when doing so. The workshop is being offered immediately prior to the IDUG DB2 Tech Conference EMEA in Prague. Seats are limited, so make sure to sign up soon! The details are:
Date: 13th November 2011
Time: 9:30 AM – 5:00 PM
Location: IDUG DB2 Tech Conference EMEA in Prague
Link: DB2 10 Migration Planning Workshop
About three months ago, Chris Eaton created a DB2 Jobs and Consultants Marketplace group on LinkedIn. It has some job postings, as well as general items of interest for DB2 professionals. If you are looking for people with DB2 skills, it might be a good idea to also post your job openings here.
Larry Ellison is not prone to praising his competitors. So it was quite startling when he recently opined that “the IBM DB2 product on mainframe is a good product.” Of course, DB2 for z/OS is the undisputed leader in the RDBMS market when it comes to total system availability, scalability, security, and reliability. And today IBM officially announced a new major release of DB2 for z/OS.
DB2 10 for z/OS has garnered some great reaction from its most popular Beta program ever. Some Beta participants claimed that this is the best release in a decade. Here is why:
- CPU cycle reductions for most workloads.
While versions 3, 4, 5, 6, 7, 8, and 9 actually increased CPU times by a small amount, version 10 reduces them. After rebinding to DB2 10, most customers should see a 5%-10% CPU reduction out-of-the-box. Some will see even further reductions in CPU cycles.
- Support for up to 10x more concurrent users.
With DB2 10, virtual storage improvements are delivering up to 10 times more concurrent active threads. This allows many customers to reduce the number of DB2 members needed to support their workloads, resulting in net CPU and memory savings and improving application performance.
- New temporal capabilities built directly into the database.
DB2 10 delivers the industry’s first integrated bitemporal capabilities that are built directly into the database. This allows for queries over past, present, or future time periods. But the key thing to remember is that these bitemporal capabilities are provided by the core database engine. This means that you don’t have to maintain separate custom code to get these capabilities. You simply code SQL against the main table. DB2 for z/OS is the first RDBMS to deliver this!
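As a minimal sketch of what that SQL looks like, here are two period queries using the FOR SYSTEM_TIME and FOR BUSINESS_TIME specifications introduced in DB2 10; the table and column names are hypothetical, and the table is assumed to have been defined with the corresponding system and business time periods:

```sql
-- Ask for the row as it existed at a point in the system-maintained
-- history (DB2 keeps the history automatically; no custom code needed):
SELECT coverage
FROM policy FOR SYSTEM_TIME AS OF TIMESTAMP '2011-06-30 12:00:00'
WHERE policy_id = 1234;

-- Query along the application-maintained business time dimension,
-- e.g. what coverage will be in effect on a future date:
SELECT coverage
FROM policy FOR BUSINESS_TIME AS OF '2012-01-01'
WHERE policy_id = 1234;
```

Note that both queries run against the main table; the period clause is the only change from an ordinary SELECT.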
Here are some quotes from Beta participants:
“We measured a 38% reduction in CPU for heavy insert workloads in a data sharing environment. That’s a significant savings which provides immediate business benefit.”―Peter Paetsch, BMW
“In addition to the cost savings, DB2 10 for z/OS offers a far superior data server environment than Oracle Database”—Manuel Gomez Burrierl, CECA
“We expect to reduce our data sharing requirements by 25%, which means less system, storage and resource.”―Banco do Brasil
There are many, many additional reasons to move to DB2 10. You can read about them at IBM – Announcing DB2 10 for z/OS.
The International DB2 User Group (IDUG) recently ran a contest called Do You DB2? where DB2 users were encouraged to share their experiences. I’d like to congratulate Michael Krafick of Atlanta, GA who won the Grand Prize of a 55″ internet-ready HDTV for describing his organization’s experiences with IBM Smarter Systems. Here are some excerpts from his story:
…a database with multiple partitions over three nodes. Initially performing well, we soon discovered that predicted monthly growth for the database was grossly underestimated. Performance and trending were showing that we were going to run into a problem sooner rather than later.
Not to mention, this was a data mart supporting Business Intelligence in a financial market. We were heading into that special time of year where the planets align – where month end loads, quarter end loads, and year end loads were about to collide.
… This is where IBM’s Balanced Warehouse methodology comes in to save the day… We would extend our cluster out by one more BCU to give us some more horsepower… The result was incredible. Here is just a taste of what we saw:
- Our CPU utilization saw a 50% drop. Where we had consistently run at 95% utilized (spiking to 100%), we dropped to an average of 45% utilized.
- We were able to increase our workload by 10% compared to the month before (2077 Report Views over 1935 Report Views).
- Even with the additional increase in workload, we decreased our execution time by 50% (Original Execution Time- 357 Hours, New Execution Time – 210 Hours).
Essentially, our reports were flying and the client immediately came back “raving” over our speed. Our hard work had paid off and the DBAs released a huge sigh of relief. There is a real feeling of accomplishment when you win one for the team.
I thought I’d take a few moments to share a couple of links with you. They are links to stories that appeared a couple of years ago about the history of DB2.
If you work with DB2 or relational databases in general, they make for very interesting reading. Here’s a great quote from the Information Week article…
At first, relational database was a highly mocked product, halting in its performance compared to the programmed-path systems. Skeptics like John Cullinane, founder of Cullinet Software, once took this reporter aside to instruct him that relational database would never amount to anything compared to his firm’s IDMS product. Last year, relational database represented an $18.6 billion a year market, according to IDC.
If you know of other good links, please share.
If you are a DB2 user in North America, make sure to check out the new contest over at the International DB2 User Group (IDUG). The contest is called Do You DB2? and includes prizes like a WiFi-enabled HD TV and exciting new Apple iPads. To enter, simply tell IDUG exactly why you love working with DB2 in fewer than 1,000 words. The stories will be judged based upon originality of the content, efficacy of expression, and the use of proof points. It doesn’t matter what the scope or nature of your experience with DB2 is; as long as your stories are memorable and unique, you have a great chance to win.
Here are some sample topics that you could write about:
- Usability, Performance, Scalability or High Availability of DB2
- Features: Deep Compression, SQL compatibility, pureXML, security, autonomics
- Cost or risk of migrating from another database vendor to DB2
- Experience with the DB2 maintenance and support given by IBM
- Proof points on why DB2 is the best database software
- Stories about the reach and value of DB2’s worldwide community, forums, events or education
Make sure to get your entries in soon, as the deadline for entries is in June.
If you are a DB2 user, you probably already know that the International DB2 User Group (IDUG) offers the best DB2 conferences. These user-run conferences offer the largest number and widest range of sessions about DB2. In fact, there are more than 100 sessions about DB2 at the upcoming IDUG conference in Tampa, Florida on 10-14 May 2010. This year, for the first time, IDUG has added the extremely popular Hands-On Labs that IBM has been offering at the Information on Demand Conferences for a number of years. Also, don’t forget about the FREE IBM certifications that are available to conference attendees. And, of course, there are lots of great opportunities to get to know other DB2 users like yourself. You can register for the conference at the IDUG Web site. There are even tips for justifying conference attendance to your manager.