Mastering the DB2 10.1 Certification Exam – Part 2: Security

It’s hard to argue against the benefits of becoming a DB2 Certified Professional. Aside from gaining a better understanding of DB2, it helps keep you up to date with the latest versions of the product.  It also gives you professional credentials that you can put on your resume to show that you know what you say you know.

But many people are reluctant to put in the time and effort it takes to prepare for the exams. Some just don’t like taking tests; others don’t feel they have the time or money to prepare. That’s where we come in – the DB2 team has put together a great list of resources to help you conquer the certification exams.

We caught up with Anas Mosaad and Mohamed El-Bishbeashy, who are part of the DB2 team that developed the DB2 10.1 Fundamentals Certification Exam 610 Prep – a six-part tutorial series aimed at helping DBAs prepare for the certification exam.

What products are focused on in this tutorial?

In this tutorial, we’ve focused completely on DB2 10.1 for Linux, UNIX, and Windows (LUW).

Tell us a little about what students can hope to learn in this tutorial.

It is the second in a series of six tutorials designed to help you prepare for the DB2 Fundamentals Exam (610). It puts in your hands all the details needed to successfully answer the security-related questions in the exam. It introduces the concepts of authentication, authorization, privileges, and roles as they relate to DB2 10.1. It also introduces granular access control and trusted contexts.

Why should a DBA be interested in this certification?

IBM professional certifications are recognized worldwide, so you will get recognized! In addition, this one is the first milestone in the advanced DB2 certification paths (development, DBA, and advanced DBA). It acknowledges that you are knowledgeable about the fundamental concepts of DB2 10.1. It shows that you have in-depth knowledge of the basic to intermediate tasks required in day-to-day administration and of basic SQL (Structured Query Language), understand which additional products are available with DB2 10.1, know how to create databases and database objects, and have a basic knowledge of database security and transaction isolation.

Do you have any special tips?

Absolutely, here are a few of our favorite tips for preparing for the certification exam:

  • Practice with DB2
  • If you don’t have access to DB2, download the fully functional DB2 Express-C for free
  • Read the whole tutorial before taking the exam
  • Be a friend of the DB2 Knowledge Center (formerly the Information Center)
  • When in doubt, don’t hesitate: post and collaborate in the forums.

For more information:

DB2 10.1 fundamentals certification exam 610 prep, Part 2: DB2 security

The entire series of tutorials for Exam 610: DB2 Fundamentals includes the following:
Part 1: Planning
Part 2: DB2 security
Part 3: Working with databases and database objects
Part 4: Working with DB2 Data using SQL
Part 5: Working with tables, views, and indexes
Part 6: Data concurrency

About the authors:
Anas Mosaad, a DB2 solutions migration consultant with IBM Egypt, has more than eight years of experience in the software development industry. He is a member of IBM’s Information Management Technology Ecosystem Team, focusing on enabling and porting customer, business partner, and ISV solutions to the IBM Information Management portfolio, which includes DB2, Netezza, and BigInsights. Anas’ expertise includes portal and J2EE development, database design and tuning, and database application development.

Mohamed El-Bishbeashy is an IM specialist at the IBM Cairo Technology Development Center (C-TDC), Software Group. He has 12+ years of experience in the software development industry (8 of those with IBM). His technical experience includes application and product development, DB2 administration, and persistence layer design and development. Mohamed is an IBM Certified Advanced DBA and IBM Certified Application Developer. He also has experience in other IM areas, including PureData System for Analytics (Netezza), BigInsights, and InfoSphere Information Server.

Roger Sanders: Previewing the DB2 Sessions at IBM Insight 2014

By Roger E. Sanders
Senior Consultant, Software Engineer

As a DB2 professional, I look forward to two major events each year – the International DB2 Users Group North America conference and the IBM Insight (formerly Information On Demand) conference. And this year, I’m really excited about the IBM Insight 2014 conference.

Why? Well, for one thing, I have a new book out and I will be autographing copies outside the bookstore, right after the opening session on Monday morning. Writing is a solitary, lonely activity. And until others have read your work and commented on it, writing is an activity that is essentially done in a void. At book signing events, I have the rare opportunity to meet some of my readers and solicit their input on things I have written in the past. By finding out what readers like (and what they don’t), I can improve my skills as a technical writer.

For more information on the book signings taking place at the conference, see Book Signings and Giveaways at IBM Insights 2014.

But the conference is much more than just an opportunity to sign books. It is also an opportunity to hear technical presentations from many of the IBM Distinguished Engineers who are responsible for the development and testing of the DB2 software. This year, I’m looking forward to hearing about the latest features and functionality that can be found in the “Cancun” release of DB2 for Linux, UNIX, and Windows (DB2 10.5, Fix Pack 4).

I’ve read many of the product announcements and I have participated in a couple of webinars, but after attending some of the sessions being offered, I expect to come home with a burning desire to get some hands-on experience with some of the new features I will learn more about.

Finally, I’m looking forward to seeing many old friends. I’ve been attending both conferences regularly for over a decade and it seems like I meet new people every year. Consequently, each conference now feels like a family reunion. And after a day of trying to absorb a lot of technical information, spending time at the evening activities catching up with old friends is a great way to relax and wind down.

Here are some of the DB2 sessions that I am looking forward to attending:

I’ve made my travel arrangements, created a conference agenda, and started packing my bags. Hope to see you at the IBM Insight 2014 conference next week!

Webinar: Why IBM DB2 Database Software Leaves the Competition in the Dust

We all know that IBM DB2 database software is an industry leader when it comes to performance, scale, and reliability. But how does it compare to the competition, specifically Oracle and SAP HANA?

IBM’s Chris Eaton joined IDUG’s DB2 Tech Talk to give an update on IBM DB2 and show how DB2 goes above and beyond our competitors to provide reliable functionality while saving businesses money.

During the presentation, Chris walked the audience through DB2’s latest release, DB2 10.5 “Cancun Release” and the four innovations that make BLU Acceleration  different from our competitors: Next Generation In-Memory, the ability to analyze compressed data, CPU Acceleration, and data skipping.
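Of those four innovations, data skipping is perhaps the easiest one to picture: the engine keeps small synopses (such as minimum and maximum values) for each block of data, so a scan can skip blocks that provably contain no qualifying rows. The following Java snippet is only a conceptual sketch of that idea using made-up data structures; it is not how BLU Acceleration is actually implemented.

    import java.util.ArrayList;
    import java.util.List;

    public class DataSkippingSketch {
        // Per-block synopsis: the minimum and maximum value held in the block.
        record Block(int min, int max, int[] values) {}

        public static void main(String[] args) {
            List<Block> blocks = new ArrayList<>();
            blocks.add(new Block(1, 100, new int[] {5, 42, 99}));
            blocks.add(new Block(101, 200, new int[] {150, 180}));
            blocks.add(new Block(201, 300, new int[] {250, 299}));

            // Query: count values greater than 220. Any block whose max is <= 220
            // is skipped outright; its rows are never read or decompressed.
            int threshold = 220;
            long matches = 0;
            for (Block b : blocks) {
                if (b.max() <= threshold) {
                    continue; // synopsis proves no row in this block can qualify
                }
                for (int v : b.values()) {
                    if (v > threshold) {
                        matches++;
                    }
                }
            }
            System.out.println("Qualifying rows: " + matches); // prints 2
        }
    }

The payoff is that blocks which cannot match are never touched at all, which is where much of the scan-time saving comes from.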

You can watch the entire presentation on the IDUG website by clicking here, and also review the Tweets from the event by logging on to the IBM DB2 Storify page here.

Still have additional questions? Feel free to leave them in the comment box below and we’ll get the answers to you shortly.

About Chris Eaton – Chris is a Worldwide Technical Sales Specialist for DB2 at IBM, primarily focused on planning and strategy for DB2 on Linux, UNIX and Windows. Chris has been working with DB2 on the LUW platforms for over 21 years. From customer support to development manager to Externals Architect to Product Manager for DB2, Chris has spent his career listening to customers and working to make DB2 a better product. Chris is also the author of The High Availability Guide for DB2, DB2 9 New Features, Break Free with DB2 9.7: A Tour of Cost-Slashing New Features, and Understanding Big Data: Analytics for Enterprise Class Hadoop and Streaming Data. Follow Chris on his blog here.

Balluff loves BLU Acceleration too

By Cassandra Desens
IBM Software Group, Information Management  

BLU Acceleration is a pretty darn exciting advancement in database technology. As a marketing professional, I can tell you why it’s cool:

  • BLU provides instant insight from real-time operational data
  • BLU provides breakthrough performance without the constraints of other in-memory solutions
  • BLU provides simplicity with a load-and-go setup

Etcetera, etcetera… you get the point.

You can read our brochures and watch our videos to hear how DB2 with BLU Acceleration will transform your business. We think it’s the best thing since sliced bread because we invented it. But is it all it’s cracked up to be? The answer is YES.

Clients all over the world are sharing how BLU Acceleration made a huge, positive difference to their business. Hearing customer stories puts our product claims into perspective. Success stories give us the ultimate answer to the elusive question, “How does this relate to me and my business?” That’s why I want to share one of our most recent stories with you: Balluff.

Balluff is a worldwide company with headquarters in Germany. They have over 50 years of sensor experience and are considered a world leader and one of the most efficient manufacturers of sensor technology.  Balluff relies on SAP solutions to manage their business, including SAP Business Warehouse for their data analysis and reporting.

Over the last few years Balluff experienced significant growth, which resulted in slowed data delivery. As Bernhard Herzog, Team Manager Information Technology SAP at Balluff, put it: “Without timely, accurate information we risked making poor investment decisions, and were unable to deliver the best possible service to our customers.”

The company sought a solution that would transform the speed and reliability of their information management system. They chose DB2 with BLU Acceleration to accelerate access to their enormous amount of data. With BLU Acceleration Balluff achieved:

  • Reduced reporting time for individual reports by up to 98%
  • Reduced backup data volumes by 30%
  • Improved batch mode data processing by 25%
  • A swift transition with no customization needed; Balluff transferred 1.5 terabytes of data within 17 hours with no downtime

These improvements have a direct impact on their business. As Bernhard Herzog put it, “Today, sales staff have immediate access to real-time information about customer turnover and other important indicators. With faster access to key business data, sales managers at Balluff can gain a better overview, sales reps can improve customer service and the company can increase sales”.

Impressive, right? While you could argue it’s no sliced bread, it certainly is a technology that is revolutionizing reporting and analytics, and it is worth a try. Click here for more information about DB2 with BLU Acceleration and to take it for a test drive.

_________________________________________________________________

For the full success story, click here to read the Balluff IBM Case Study
You can also click here to read Balluff’s success as told by ComputerWoche (Computer World Germany). Open in Google Chrome for a translation option.

What is DB2ssh?

By Mihai Iacob
DB2 Security Development

The IBM DB2 pureScale Feature provides high levels of distributed availability, scalability, and transparency to the application, but why do I need to enable password-less SSH for the root user in my DB2 pureScale cluster? Well, you don’t any longer, and this article explains how to use db2ssh to securely deploy and configure the DB2 pureScale Feature.

Both the DB2 installer and GPFS, the file system used by DB2 pureScale, need to run commands as root on a remote system. Db2ssh provides an alternative to enabling password-less SSH as root by effectively SSH-ing as a regular user and then elevating privileges to root to run the required commands.

Wait, isn’t that asking for trouble? Can a non-root user run remote commands as root in my cluster? Not at all; rigorous security checks are in place to make sure only the root user can run commands remotely as root. This is accomplished by having the root user digitally sign any message that is sent to the remote system and having the remote system verify this signature before executing any commands. SSH can also be configured to protect against replay attacks.
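To make that sign-then-verify pattern concrete, here is a minimal, self-contained Java sketch of the general idea. It is purely illustrative and is not db2ssh’s actual implementation; the sample command, the in-memory key pair, and the SHA256withRSA algorithm are all assumptions made for the example.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignedCommandSketch {
        public static void main(String[] args) throws Exception {
            // Stand-in for key material that only the root user can read.
            KeyPair rootKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

            // A hypothetical command that the installer wants run on a remote host.
            byte[] command = "some_remote_admin_command".getBytes(StandardCharsets.UTF_8);

            // Sending side: root signs the command before it leaves the local host.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(rootKeys.getPrivate());
            signer.update(command);
            byte[] signature = signer.sign();

            // Receiving side: verify the signature before anything runs as root.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(rootKeys.getPublic());
            verifier.update(command);
            if (verifier.verify(signature)) {
                System.out.println("Signature valid - command may be executed as root.");
            } else {
                System.out.println("Signature invalid - command rejected.");
            }
        }
    }

In the real feature the key material, transport, and privilege elevation are handled for you; the point is simply that nothing runs as root on the remote host until the signature check passes.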

Take a look at the article to find out how to configure and troubleshoot DB2ssh.

Exclusive Opportunity to Influence IBM Product Usability: Looking for Participants for Usability Test Sessions – Data Warehousing and Analytics

By Arno C. Huang, CPE
Designer, IBM Information Management Design

The IBM Design Team is seeking people with a variety of database, data warehousing, and analytics backgrounds to participate in usability test sessions. We are currently looking for people who work in one of the following roles: DBA, Architect, Data Scientist, Business Analyst, or Developer. As a test participant, you will provide your feedback about current or future designs we are considering, thus making an impact on the design of an IBM product and letting us know what is important to you.

Participating in a study typically consists of a web conference or on-site meeting scheduled around your availability. IBM will provide you with an honorarium for your participation. There are several upcoming sessions, and we’ll help you find one that best suits your schedule. If you are interested, please contact Arno C. Huang at achuang@us.ibm.com.

Make Your Apps Highly Available and Scalable

By Vinayak Joshi
Senior Software Engineer, IBM

The IBM premium data-sharing technologies offer unmatched high availability and scalability to applications. If you are a JDBC application developer wanting to explore how these benefits accrue to your application, and whether you need to do anything special to exploit them, my article – “Increase scalability and failure resilience of applications with IBM Data Server Driver for JDBC and SQLJ” – is a great source of information.

In the article, I explain how turning on a single switch on the IBM Data Server Driver for JDBC and SQLJ opens up all the workload balancing and high availability benefits to your JDBC applications. There is very little required for an application to unlock the workload balancing and high availability features built into the DB2 server and driver technologies.
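As a rough sketch of how little application code is involved, the snippet below opens a connection with workload balancing turned on through a single driver property. It assumes the IBM Data Server Driver for JDBC and SQLJ (db2jcc4.jar) is on the classpath, that the enableSysplexWLB property is the switch in question, and uses placeholder host, database, and credentials; see the article for the exact properties and recommended values for your environment.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class WorkloadBalancingSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "db2inst1");      // placeholder credentials
            props.setProperty("password", "secret");
            // The "single switch": ask the driver to balance work across members.
            props.setProperty("enableSysplexWLB", "true");

            // Placeholder host, port, and database for a pureScale or sysplex group.
            String url = "jdbc:db2://dbhost.example.com:50000/SAMPLE";

            try (Connection con = DriverManager.getConnection(url, props)) {
                // From here on, transaction boundaries are the points at which the
                // driver can transparently route work to the least-loaded member.
                System.out.println("Connected with workload balancing enabled.");
            }
        }
    }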

For those curious about how the driver achieves this in tandem with pureScale and sysplex server technologies, the article should provide a good end-to-end view. While all the nuts-and-bolts explanations are provided, it is stressed that all of it happens under the covers; beyond a bare minimum understanding, application developers and DBAs need not concern themselves with it too much if they do not wish to.

The aspects a developer needs to keep in mind are highlighted, and recommendations on configuring and tuning applications are provided. We’ve made an effort to keep the content technically accurate while keeping the language simple enough for a non-technical audience to grasp.

Any and all feedback will be much appreciated and taken into account. Take a look at the article by clicking here, and feel free to share your thoughts in the comment section below.

Influence Your Data Management Future Today

By Radha Gowda
Technical Marketing, IBM Analytics

Too many tools! Too many repositories! Too many installs! The need to focus on managing databases across the enterprise rather than an individual database. Do any of these sound familiar?

Yes, we heard you. While IBM offers an impressive portfolio of data management tools to manage the complete data life cycle, we agree that there are far too many tools. We want to help you streamline your data management processes and make you even more productive. We are happy to introduce the next generation of data management tooling for DB2 for Linux, UNIX and Windows – IBM Data Server Manager (beta). It is simple to install, easy to use, and enterprise ready, with the ability to manage hundreds of databases.

IBM Data Server Manager integrates key capabilities from the existing portfolio of data management tools. It offers a simple integrated web console to administer, monitor, manage, and optimize hundreds of DB2 for Linux, UNIX and Windows databases across the enterprise. It helps you identify, diagnose, solve, and prevent performance problems. It also provides expert guidance on optimizing query performance, including which tables to convert to column-organized format or create shadow tables for in order to take advantage of BLU Acceleration, and helps you identify storage-saving opportunities and more. And it provides centralized client and server configuration management so you can understand and control your environment more efficiently. Best of all, it is quick and easy to deploy.

IBM Data Server Manager beta software is available for Linux, AIX and Windows platforms. Early feedback on the tool has been very positive; we invite you to sign up for the beta program today and start influencing your data management future.

IBM Data Server Manager – Simple. Scalable. Smart.

Troubles Are Out of Reach With Instant Insights

By Radha Gowda
Technical Marketing, IBM Analytics

Bet you have been hearing a lot about shadow tables in the DB2 “Cancun Release” lately. Umm… do shadow and Cancun remind you of “On the Beach” by Cliff Richard and the Shadows? Seriously, DB2 shadow tables can have you dancing rock ’n’ roll on the beach, because you will be trouble free with real-time insights into your operations and, of course, lots of free time.

What is a shadow table?

Shadow tables have been around since the beginning of modern computing – primarily for improving performance. So what does the DB2 shadow table offer? The best of both the OLTP and OLAP worlds! You can now run your analytic reports directly in the OLTP environment with better performance.

Typically, organizations have separate OLTP and OLAP environments – either due to resource constraints or to ensure the best OLTP performance. The front-end OLTP workload is characterized by very small but high-volume transactions, and indexes are created to improve performance. In contrast, the back-end OLAP workload has long-running complex transactions that are relatively small in number. Indexes are created, but they may be different from the OLTP indexes. Of course, an ETL operation must transfer data from the OLTP database to the OLAP data mart or warehouse at intervals that may vary from minutes to days.

DB2 can help you simplify your infrastructure and operations with shadow tables. A shadow table is a column-organized copy of a row-organized table within the OLTP environment, and it may include all or a subset of the source table’s columns. Because the table is column organized, you get the enhanced performance that BLU Acceleration provides for analytic queries.

How do shadow tables work?

A shadow table is implemented as a materialized query table (MQT) that is maintained by replication. IBM InfoSphere Change Data Capture for DB2, available in the advanced editions, maintains shadow tables through automatic and incremental synchronization with their row-organized source tables.

While all applications can access the row-organized table by default, the DB2 optimizer performs latency-based routing to determine whether a query should be routed to the shadow table or to the row-organized source.
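To make this concrete, here is a hypothetical JDBC sketch of the DDL involved. The table, columns, special-register values, and connection details are made up for illustration, and a real deployment also requires an enforced primary key on the source table plus an InfoSphere CDC subscription to keep the shadow table synchronized; treat it as a sketch rather than a complete recipe.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ShadowTableSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for a DB2 10.5 "Cancun Release" database.
            String url = "jdbc:db2://dbhost.example.com:50000/SAMPLE";
            try (Connection con = DriverManager.getConnection(url, "db2inst1", "secret");
                 Statement stmt = con.createStatement()) {

                // A shadow table: a column-organized, replication-maintained MQT over
                // the row-organized SALES table (table and columns are hypothetical).
                stmt.executeUpdate(
                    "CREATE TABLE SALES_SHADOW AS "
                  + "(SELECT SALE_ID, CUST_ID, AMOUNT, SALE_DATE FROM SALES) "
                  + "DATA INITIALLY DEFERRED REFRESH DEFERRED "
                  + "MAINTAINED BY REPLICATION "
                  + "ORGANIZE BY COLUMN");

                // Latency-based routing (sketch): let the optimizer consider
                // replication-maintained tables and bound the staleness it will
                // accept before routing a query back to the row-organized source.
                stmt.execute("SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION REPLICATION");
                stmt.execute("SET CURRENT REFRESH AGE 500"); // duration value: 5 minutes
            }
        }
    }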

A truly flexible and trouble-free OLTP world

Shadow tables offer the incredible speed you have come to expect from BLU Acceleration while the source tables remain row-organized to best suit OLTP operations.  In fact, with shadow tables, the performance of analytical queries can improve by 10x or more, with equal or greater transactional performance*.

With instant insight into “as it happens” data for all your questions, and all the free time you’ll have with no more indexing and tuning, what’s not to like? Try DB2 today.

* Based on internal IBM testing of sample transactional and analytic workloads by replacing 4 secondary analytical indexes in the transactional environment with BLU Shadow Tables. Performance improvement figures are cumulative of all queries in the workload. Individual results will vary depending on individual workloads, configurations and conditions.

pureScale at the Beach – What’s New in the DB2 “Cancun Release”

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Today, I’m thinking about the beach. We’re heading into the last long weekend of the summer, the weather is supposed to be nice, and later today I’ll be going up to the lake with my family. But that’s not really why the beach is on my mind. Today, the DB2 “Cancun Release” was announced and made available, and as somebody that works extensively with DB2 and pureScale, it’s a pretty exciting day.

I can guarantee you that over the next little while you’re going to be hearing a lot about the various new features and capabilities in the “Cancun Release” (also referred to as Cancun Release 10.5.0.4 or DB2 10.5 FP4). For instance, the new Shadow Tables feature — which exploits DB2 BLU Acceleration — allows for real-time analytics processing and reporting on your transactional database system. Game-changing stuff. However, I’m going to leave those discussions up to others or for another time, and today I’m going to focus on what’s new for pureScale.

As with any major new release, some things are flashy and exciting, while other things don’t have that same flash but make a real difference in the everyday life of a DBA. Examples of the latter in Cancun include the ability to perform online table reorgs and incremental backups (along with support for DB2 Merge Backup) in a pureScale environment, additional Optim Performance Manager (OPM) monitoring metrics and alerts around the use of HADR with pureScale, and being able to take GPFS snapshot backups. All of this leads to improved administration and availability.

There’s a large DB2 pureScale community out there, and over the last few years we’ve received a lot of great feedback on the up-and-running experience. Based on this, various enhancements have been made to provide faster time to value, with improved ease of use and serviceability for installation, configuration, and updates. This includes improved installation documentation, enhanced prerequisite checking, beefing up some of the more common error and warning messages, improved usability for online fix pack updates, and the ability to perform version upgrades of DB2 members and CFs in parallel.

In my opinion, the biggest news (and yes, the flashiest stuff) is the addition of new deployment options for pureScale. Previously, the implementation of a DB2 pureScale cluster required specialized network adapters — RDMA-capable InfiniBand or RoCE (RDMA over Converged Ethernet) adapter cards. RDMA stands for Remote Direct Memory Access and it allows for direct memory access from one computer into that of another without involving either one’s kernel, so there’s no interrupt handling and no context-switching that takes place as part of sending a message via RDMA (unlike with TCP/IP-based communication). This allows for very high-throughput, low-latency message passing, which DB2 pureScale uniquely exploits for very fast performance and scalability. Great upside, but a downside is the requirement on these adapters and an environment that supports them.

Starting in the DB2 Cancun Release, a regular, commodity TCP/IP-based interconnect can be used instead (often referred to as using “TCP/IP sockets”). What this gives you is an environment that has all of the high availability aspects of an RDMA-based pureScale cluster, but it isn’t necessarily going to perform or scale as well as an RDMA-based cluster will. However, this is going to be perfectly fine for many scenarios. Think about your daily drive to work. While you’d like to have a fast sports car for the drive in, it isn’t necessary for that particular need (maybe that’s a bad example — I’m still trying to convince my wife of that one). With pureScale, there are cases where availability is the predominant motivator for using it and there might not be a need to drive through massive amounts of transactions per second or scale up to tens of nodes. Your performance and scalability needs will dictate whether RDMA is required or not for your environment. By the way, you might see this feature referred to as pureScale “lite”. I’m still slowly warming up to that term, but the important thing is people know that “lite” doesn’t imply lower levels of availability.

The ability to do this TCP/IP sockets-based communication between nodes also opens up more virtualization options. For example, DB2 pureScale can be implemented using TCP/IP sockets in both VMware (Linux) and KVM (Linux) on Intel, as well as in AIX LPARs on Power boxes. These virtualized environments provide a lower cost of entry and are perfect for development, QA, production environments with moderate workloads, or just getting yourself some hands-on experience with pureScale.

It’s also worth pointing out that DB2 pureScale now supports and is optimized for IBM’s new POWER8 platform.

Having all of these new deployment options changes the economics of continuous availability, allowing broad infrastructure choices at every price point.

One thing that all of this should show you is the continued focus and investment in the DB2 pureScale technology by IBM research and development. With all of the press and fanfare around BLU, people often ask me if this is at the expense of IBM’s other technologies such as pureScale. You can see that this is definitely not the case. In fact, if you happen to be at Insight 2014 (formerly known as IOD) in Las Vegas in October, or at IDUG EMEA in Prague in November, I’ll be giving a presentation on everything new for pureScale in DB2 10.5, up to and including the “Cancun Release”. It’s an impressive amount of features that’s hard to squeeze into an hour. 🙂

For more information on what’s new for pureScale and DB2 in general with this new release, check out the fix pack summary page in the DB2 Information Center.