The value of common database tools and linked processes for Db2, DevOps, and Cloud


by Michael Connor, Analytics Offering Management

Today we released DB2 V11 for Linux, UNIX, and Windows. The release includes updates to Data Server Manager (DSM) V2.1, Data Server Driver connectivity V11, and the Advanced Recovery Feature (ARF) V11. As many of you may be aware, two years ago we embarked on a strategy to completely rethink our tooling. The market was telling us we needed to focus on a simplified user experience, a web console addressing both the power and casual user roles, and deep database support for production applications. In March 2015, we delivered our first iteration of Data Server Manager as part of 10.5. This year we have again extended the capability of this valuable platform and, in addition, extended support across a number of IBM data stores, including DB2, dashDB, DB2 on Cloud, and BigInsights.

First let’s talk about some of the drivers we hear related to Database Delivery.

  1. The line-of-business (LOB) and LOB developer communities want access to mission-critical data and to extend that data through new customer-facing OLTP applications.
  2. Business analysts are using more data than ever, generating and enhancing customer value through analytic applications.
  3. These new roles need on-demand access to data across the entire delivery lifecycle, from idea inception to production delivery and support.
  4. While timelines shrink, data volumes grow, and the lifecycle accelerates, quality cannot suffer.

Therefore, the DBA, development, testing, and production support roles are now participating in activities known as Continuous Delivery, Continuous Testing, and DevOps, with the goal of improving customer service and decreasing cycle and delivery times without decreasing quality.

Some areas addressed by our broader solutions for Continuous Delivery, Continuous Testing, and DevOps include:

  • High Performance Unload of production data and selective data, including test data environment restore, with DB2 Recovery Expert
  • Simplified test data management addressing discovery, subsetting, masking, and refresh with Test Data Management
  • Automated driving of application test and performance-based workloads with Rational Functional and Performance Tester
  • Release management and deployment automation with Rational UrbanCode

And finally, areas improved with our latest DB2 releases:

  • SQL Development and execution with Data Server Manager
  • Test and Deployment Data Server Monitoring with Data Server Manager
  • SQL capture and analysis with Data Server Manager
  • Client and application Data Access, Workload and Failover management with Data Server Drivers

The benefits of adopting a continuous delivery solution include reduced cycle times, lower risk of failure, improved application performance, and reduced risk of downtime.

With the V11 Releases we have delivered enhancements including:

  • DSM: DB2 LUW V11 support, monitoring improvements for pureScale applications, and extended query history analysis
  • ARF: DB2 LUW V11 support and improvements for analytics usage with BLU Acceleration
  • DS Driver (also DB2 Connect): manageability improvements, performance enhancements, and extended driver support, now for iMAC applications

Many of the improvements noted above are also available in our private cloud offering, dashDB Local (in preview), which leverages DSM as an integral component of its dashboard, and in our public cloud offering, DB2 on Cloud.

Read the announcement for further details:   http://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/9/872/ENUSAP16-0139/index.html&lang=en&request_locale=en

Also check out the DB2 LUW Landing Page:  http://www.ibm.com/analytics/us/en/technology/db2/db2-linux-unix-windows.html

 

Blogger:    Michael Connor, with Analytics Offering Management, joined IBM in 2001 and focused early in his IBM career on launching the z/OS development tooling business centered on Rational Developer for System z. Since moving to Analytics in 2013, Michael has led the team responsible for core database tooling.

Placing Data for Performance. Fuhgeddaboutit!


Bill Cole – Competitive Sales Specialist, Information Management, IBM

I like racing. Not watching it on television. That’s like watching a peach rot. It’s not quite so bad in person, though. I raced cars on ovals and road tracks and drag strips – not that you care – and one thing every racer knows is position counts, even in drag racing. Every bowler knows the same thing, too. Put the ball in the right place and you’ve got the best shot at a strike. It’s the same for every sport, right? Lob the ball to the strike zone and duck!

Data is much the same way. (Bet you wondered where I was going.) It’s not all the same. Access patterns for each row of data are different. They change over time and we just don’t have the time or resources to make the changes that would keep database performance where it should be so the throughput we deliver meets the business needs. And those needs aren’t static, are they? What was adequate yesterday is deadly slow today. So we’re always chasing the needle, right?

This means that we’re spending time on the same tasks over and over. Grab a performance report (pick your favorites; I tend to favor nmon and Data Console) and set out on the expedition of re-discovering the same information. A few queries aren’t tuned correctly, memory needs a bit of tweaking here and there, and some disks are taking a severe beating. Same old things. We know how to fix each of them.

So why are we spending any time at all chasing disk access issues? Sure, there’s a little period of adjustment when a new application settles in and we learn its performance characteristics/anomalies. But the honeymoon has long been over and we’re still chasing the same things for applications and environments that are old frenemies.

I just sat through an hour-long dissertation by a red product manager that purported to explain how that database manages data heat (for an extra license fee, of course). That is, understanding the access characteristics of data objects and then deploying them to the appropriate media. The reason for the license fee is the use of some different forms of compression. Did I mention that the DBA has to manage the whole scenario? Or that two forms of compression will slow you down? Drastically? [Rant off]

DB2 has been doing this for a few releases now without the aid of extra license fees or involving the DBA in the process. The whole process of managing data for DB2 is done in the background. The database determines the access pattern and then moves the data to the appropriate media. No new bizarre compressions, no performance hits. Just set it and forget it. Sorry, that’s an old ad jingle. Just set up tablespaces on media with different characteristics.

The setup could be as mundane as RAID 5 and RAID 10 or more esoteric using Flash disk devices for very high-performance data. And DB2 moves the data around for you based on access patterns. Much easier than you trying to determine which tables need to be moved every month/week/day. No symbolic links to maintain. Let the database do the walking, as it were. After all, the database knows more about those patterns than we can ever discern through all those reports. And all of this is just part of the license fee for DB2. You get sleep as part of the deal. And who can argue with a few hours extra sleep?
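In DB2 terms, this is multi-temperature storage via storage groups (available since DB2 10.1). Here is a minimal sketch of what that setup could look like; the paths, names, and tags below are hypothetical, invented for illustration:

```sql
-- Each storage group maps to a media tier (paths and names are hypothetical)
CREATE STOGROUP hot_sg ON '/flash/db2data' DATA TAG 1;
CREATE STOGROUP warm_sg ON '/raid10/db2data' DATA TAG 5;

-- Put the busy tablespace on the fast tier
CREATE TABLESPACE orders_ts MANAGED BY AUTOMATIC STORAGE USING STOGROUP hot_sg;

-- When the data cools off, one statement moves it; DB2 rebalances online
ALTER TABLESPACE orders_ts USING STOGROUP warm_sg;
```

No symbolic links, no unload/reload scripts: the ALTER statement triggers an online rebalance in the background while applications keep running.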

Oh, and compression is there because you have DB2. Lots of different kinds of compression to save space and improve performance. No exotic (read that “silly”) compression algorithms to decode. Just the best compression you can find for your database. Easy. Free. Fast. Any questions?

Finally, my wife won’t let me race any more. Says it scares her. But I miss the thrill of the race. Getting myself into position to win. It’s all about position. Where you can go the fastest. It’s always clean air when you’re in front. Just like your data. Life is so much easier when you’re in front of the performance curve rather than chasing the needle.

And please download the DB2 database poster and pin it to your wall. It’ll answer some questions and make you look even smarter!

Follow Bill Cole on Twitter : @billcole_ibm

Learn more about how simple DB2 with BLU Acceleration is to use.

My Processors Can Beat Up Your Processors!


Bill Cole, Competitive Sales Specialist, Information Management, IBM

I grew up a fan of Formula 1 and the Indianapolis 500.  One of the great F1 racers was a British fellow named Stirling Moss.  Look him up.  He won lots of races for Mercedes-Benz and then went back to England and drove only British marques.  They were underpowered, but he still won races.  The other drivers hated that he was driving a four-cylinder machine while they were driving V-8s and he still passed them.  And he waved and smiled as he went around them!  To paraphrase the song, every car’s a winner and every car’s a loser….

The moral of the story?  Make the most of the equipment you’ve got.  Stirling was a master of that concept.  We have the same issue in computing.  We assume that brute force will give us better performance.  Bigger is always better.  Speed comes from executing instructions faster.  Over-clocking is a wonderful thing.  More processors!!!  Gotta have more processors!  We’re geeks, so our “measuring” against each other is about our computers.  We’re dead certain that more and faster is the answer to the ultimate question of performance and capacity.

Oh really?  I know a red database that uses multiple processors to execute a single query in parallel.  Good idea?  Uh, not if you need to execute more than one query at a time on those processors.  You see, each query in that database believes it’s the only query on the system and should consume all of the resources.  No thought of tuning the workloads on the system, just single queries.  You see the white hair?  Getting the best use of all the processors was an art form with a moving target.

It’s really a matter of using your resources wisely, and DB2 10.5 was created from the outset to use system resources to enhance the performance of all the workloads on a system.  In the example above, a simple parallel query with a low priority could essentially stop a queue full of high-priority jobs because it seized all the processors and wouldn’t give them back until the query was complete.  Explaining that one gets to be a bit technical.  Not to mention uncomfortable.

DB2 10.5 sizes up the running workloads and allocates resources according to priorities and needs.  Each new workload gets the same scrutiny before it’s added to the mix.  In fact, a low priority workload might even get held momentarily to make sure that the highest priority workloads complete quickly.  It’s that simple.
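These priorities can be expressed declaratively through DB2’s workload manager. The sketch below is a hypothetical setup using the WLM dispatcher’s CPU shares; the service-class and workload names are invented, not from any real configuration:

```sql
-- Assumes the WLM dispatcher is enabled at the instance level:
--   db2 UPDATE DBM CFG USING WLM_DISPATCHER YES

-- High-priority OLTP work: soft shares, so it can borrow idle CPU
CREATE SERVICE CLASS oltp_sc SOFT CPU SHARES 7000;

-- Low-priority batch work: hard shares, so it never crowds out OLTP
CREATE SERVICE CLASS batch_sc HARD CPU SHARES 3000;

-- Route connections to the right tier by application name (hypothetical)
CREATE WORKLOAD nightly_etl_wl APPLNAME ('nightly_etl') SERVICE CLASS batch_sc;
```

The hard/soft distinction is the point: a hard share caps the batch tier even when it arrives first, which is exactly the parallel-query pile-up described above that the red database can’t prevent.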

And it’s not just processors, folks.  Memory is a valuable resource, too.  If every workload requires all the available memory, you’re going to have a problem.  So, in the same way as with processors, DB2 10.5 allocates memory to workloads on the basis of need and priority so that workloads complete properly.

That red database I mentioned.  It seems to believe that lots of badly used memory is the cure for everything.  They confuse the terms in-memory database and database in memory.  I’ve seen users pull an entire database into memory and still get lousy performance because the memory is managed badly.  I once re-tuned a Production database’s cache due to memory limitations causing queries to fail.  I dropped the cache size by 75% and didn’t mention it to anyone.  The next day I pointed out that we no longer had memory problems and everything was working well – and performance was what it always had been.

Note that DB2 10.5 does all this for you.  No resource groups or complicated formulas to get wrong.  No additional software to purchase or manage.  It’s all part of the package.  You don’t have to go back and modify your tables or applications.  Just load and go!  Get the benefits immediately.  Nice.

All this speaks to sizing our systems, too.  We inspect the workloads that will run on the system, not just a few queries, and fill out the necessary expenditure request.  We even add a fudge factor, right?  Maybe we add a few extra processors and a bit of extra memory as room for expansion.  But the workloads grow faster than anyone’s predictions and the hardware we ordered with all that extra capacity is going to be strained if we just throw workloads at it and hope for the best.  Making intelligent use of that capacity is our job – and the job of the software we deploy.  We can do the sizing with the confidence that our software has our back.

Finally, my uncle Emil kept a pair of draft horses on his farm long after he stopped farming with them.  He loved his horses and refused to part with these last two.  Other than being very large pets, they were good for one thing: Pulling cars out of the mud.  Folks would get stuck in the mud at the bottom of the hill after a good storm.  Inevitably, there’d be a knock on his door and he’d hitch one of the horses to the car and pull it out.  One night a truck got stuck after a good rain.  Emil hitched both horses to the truck.  The driver protested that horses couldn’t pull his truck out of the mud.  Emil smiled and talked to the horses.  They leaned into their collars and walked the truck out of the mud.  The driver opened his wallet.  Emil waved the man down the road and put the horses away.  Just being a good neighbor.  Getting the most out of our equipment is part of our compact with the business and BLU’s got that going on like no one else.  You’ll be a hero for thinking so far ahead!

Try DB2 10.5 today, and experience the innovative features and benefits that DB2 has to offer!