The value of common database tools and linked processes for Db2, DevOps, and Cloud


by Michael Connor, Analytics Offering Management

Today we released DB2 V11 for Linux, UNIX and Windows. The release includes updates to Data Server Manager (DSM) V2.1, Data Server Driver connectivity V11, and the Advanced Recovery Feature (ARF) V11. As many of you may be aware, two years ago we embarked on a strategy to completely rethink our tooling. The market told us to focus on a simplified user experience, a web console that addresses both the power and casual user roles, and deep database support for production applications. In March 2015, we delivered the first iteration of Data Server Manager as part of DB2 10.5. This year we have again extended the capabilities of this valuable platform and broadened support across a number of IBM data stores, including DB2, dashDB, DB2 on Cloud, and BigInsights.

First, let's talk about some of the drivers we hear related to database delivery.

  1. The line-of-business (LOB) and LOB developer communities want access to mission-critical data and to extend that data through new customer-facing OLTP applications.
  2. Business analysts are using more data than ever to generate and enhance customer value through analytic applications.
  3. These new roles need on-demand access to data across all aspects of the delivery lifecycle, from idea inception to production delivery and support.
  4. While timelines shrink, data volumes expand, and the lifecycle speeds up, quality cannot suffer.

Therefore, the DBA, development, testing, and production support roles are now participating in activities known as continuous delivery, continuous testing, and DevOps, with the goal of improving customer service and decreasing cycle and delivery times without sacrificing quality.

Some areas that are addressed by our broader solutions for continuous delivery, testing, and DevOps include:

  • High Performance Unload of production data and selective test data environments, including test data environment restore, with DB2 Recovery Expert
  • Simplified test data management addressing discovery, subsetting, masking, and refresh with Test Data Management
  • Automated driving of application test and performance-based workloads with Rational Functional and Performance Tester
  • Release management and deployment automation with Rational UrbanCode

And finally, areas improved with our latest DB2 releases:

  • SQL Development and execution with Data Server Manager
  • Test and Deployment Data Server Monitoring with Data Server Manager
  • SQL capture and analysis with Data Server Manager
  • Client and application Data Access, Workload and Failover management with Data Server Drivers

The benefits of adopting a continuous delivery solution include reduced cycle times, lower risk of failure, improved application performance, and reduced risk of downtime.

With the V11 Releases we have delivered enhancements including:

  • DSM: DB2 LUW V11 support, monitoring improvements for pureScale applications, and extended query history analysis
  • ARF: DB2 LUW V11 support and improvements for analytics usage with BLU Acceleration
  • DS Driver (also DB2 Connect): manageability improvements, performance enhancements, and extended driver support, now including Mac applications

Many of the improvements noted above are also available for our private cloud offering dashDB Local (in preview), which leverages DSM as an integral component of its dashboard, and for our public cloud offering DB2 on Cloud.

Read the announcement for further details:   http://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/9/872/ENUSAP16-0139/index.html&lang=en&request_locale=en

Also check out the DB2 LUW Landing Page:  http://www.ibm.com/analytics/us/en/technology/db2/db2-linux-unix-windows.html

 

Blogger: Michael Connor, with Analytics Offering Management, joined IBM in 2001 and focused early in his IBM career on launching the z/OS development tooling business centered on Rational Developer for System z. Since moving to Analytics in 2013, Michael has led the team responsible for core database tooling.

Migrating a DB2 database from a Big Endian environment to a Little Endian environment


By Roger Sanders, DB2 for LUW Offering Manager, IBM

What Is Big-Endian and Little-Endian?

Big-endian and little-endian are terms that are used to describe the order in which a sequence of bytes is stored in computer memory and, if desired, written to disk. (Interestingly, the terms come from Jonathan Swift's Gulliver's Travels, where the Big-Endians were a political faction who broke their boiled eggs on the larger end, defying the Emperor's edict that all eggs be broken on the smaller end; the Little-Endians were the Lilliputians who complied with the Emperor's law.)

Specifically, big-endian refers to the order where the most significant byte (MSB) in a sequence (i.e., the “big end”) is stored at the lowest memory address and the remaining bytes follow in decreasing order of significance. Figure 1 illustrates how a 32-bit integer would be stored if the big-endian byte order is used.

Figure 1. Big-endian byte order

For people who are accustomed to reading from left to right, big-endian seems like a natural way to store a string of characters or numbers; since data is stored in the order in which it would normally be presented, programmers can easily read and translate octal or hexadecimal data dumps. Another advantage of using big-endian storage is that the size of a number can be more easily estimated because the most significant digit comes first. It is also easy to tell whether a number is positive or negative; this information can be obtained by examining the high-order bit of the byte at offset 0 (the lowest memory address).

Little-endian, on the other hand, refers to the order where the least significant byte (LSB) in a sequence (i.e., the “little end”) is stored at the lowest memory address and the remaining bytes follow in increasing order of significance. Figure 2 illustrates how the same 32-bit integer presented earlier would be stored if the little-endian byte order were used.

Figure 2. Little-endian byte order

One argument for using the little-endian byte order is that the same value can be read from memory, at different lengths, without having to change addresses—in other words, the address of a value in memory remains the same, regardless of whether a 32-bit, 16-bit, or 8-bit value is read. For instance, the number 12 could be read as a 32-bit integer or an 8-bit character, simply by changing the fetch instruction used. Consequently, mathematical functions involving multiple precisions are much easier to write.

Little-endian byte ordering also aids in the addition and subtraction of multi-byte numbers. When performing such operations, the computer must start with the least significant byte to see if there is a carry to a more significant byte—much like an individual will start with the rightmost digit when doing longhand addition to allow for any carryovers that may take place. By fetching bytes sequentially from memory, starting with the least significant byte, the computer can start doing the necessary arithmetic while the remaining bytes are read. This parallelism results in better performance; if the system had to wait until all bytes were fetched from memory, or fetch them in reverse order (which would be the case with big-endian), the operation would take longer.

IBM mainframes and most RISC-based computers (such as IBM Power Systems, Hewlett-Packard PA-RISC servers, and Oracle SPARC servers) utilize big-endian byte ordering. Computers with Intel and AMD processors (CPUs) use little-endian byte ordering instead.

It is important to note that regardless of whether big-endian or little-endian byte ordering is used, the bits within each byte are usually stored as big-endian. That is, there is no attempt to reverse the order of the bit stream that is represented by a single byte. So, whether the hexadecimal value 'CD', for example, is stored at the lowest memory address or the highest memory address, the bit order for the byte will always be: 1100 1101
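If you have a DB2 system handy, a quick (and purely illustrative) way to see endianness in action is the HEX function, which returns the hexadecimal representation of a value's internal form:

db2 "VALUES HEX(INT(16909060))"

The integer 16909060 is 0x01020304, so on a big-endian server (for example, AIX on Power) the result is typically 01020304, while on a little-endian server (for example, Linux on x86-64) it is typically 04030201, the same four bytes in the opposite order.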

Moving a DB2 Database To a System With a Different Endian Format

One of the easiest ways to move a DB2 database from one platform to another is by creating a full, offline backup image of the database to be moved and restoring that image onto the new platform. However, this process can only be used if the endianness of the source and target platform is the same. A change in endian format requires a complete unload and reload of the database, which can be done using the DB2 data movement utilities. Replication-based technologies like SQL Replication, Q Replication, and Change Data Capture (CDC), which transform log records into SQL statements that can be applied to a target database, can be used for these types of migrations as well. On the other hand, DB2 High Availability Disaster Recovery (HADR) cannot be used because HADR replicates the internal format of the data thereby maintaining the underlying endian format.
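As a minimal sketch of the same-endian case (the database name and backup path are hypothetical), the backup-and-restore approach looks like this:

db2 "BACKUP DATABASE sample TO /backups"
db2 "RESTORE DATABASE sample FROM /backups INTO sample"

If the source server is big-endian and the target is little-endian, the RESTORE step is rejected, and one of the unload-and-reload approaches described in the rest of this article must be used instead.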

The DB2 Data Movement Utilities (and the File Formats They Support)

DB2 comes equipped with several utilities that can be used to transfer data between databases and external files. This set of utilities consists of:

  • The Export utility: Extracts data from a database using an SQL query or an XQuery statement, and copies that information to an external file.
  • The Import utility: Copies data from an external file to a table, hierarchy, view, or nickname using INSERT SQL statements. If the object receiving the data is already populated, the input data can either replace or be appended to the existing data.
  • The Load utility: Efficiently moves large quantities of data from an external file, named pipe, device, or cursor into a target table. The Load utility is faster than the Import utility because it writes formatted pages directly into the database instead of performing multiple INSERT operations.
  • The Ingest utility: A high-speed, client-side utility that streams data from files and named pipes into target tables.

Along with these built-in utilities, IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows, an add-on tool that must be purchased separately, can be used to rapidly unload, extract, and repartition data in a DB2 database. Designed to improve data availability, mitigate risk, and accelerate database migrations, this tool helps DBAs work with very large quantities of data with less effort and faster results.

Regardless of which utility is used, data can only be written to or read from files that utilize one of the following formats:

  • Delimited ASCII
  • Non-delimited or fixed-length ASCII
  • PC Integrated Exchange Format
  • Extensible Markup Language (IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows only.)

Delimited ASCII (DEL)

The delimited ASCII file format is used by a wide variety of software applications to exchange data. With this format, data values typically vary in length, and a delimiter, which is a unique character not found in the data values themselves, is used to separate individual values and rows. Actually, delimited ASCII format files typically use three distinct delimiters:

  • Column delimiters. Characters that are used to mark the beginning or end of a data value. Commas (,) are typically used as column delimiter characters.
  • Row delimiters. Characters that are used to mark the end of a single record or row. On UNIX systems, the newline character (0x0A) is typically used as the row delimiter; on Windows systems, the carriage return/linefeed characters (0x0D 0x0A) are normally used instead.
  • Character delimiters. Characters that are used to mark the beginning and end of character data values. Single quotes (') and double quotes (") are typically used as character delimiter characters.

Typically, when data is written to a delimited ASCII file, rows are streamed into the file, one after another. The appropriate column delimiter is used to separate each column’s data values, the appropriate row delimiter is used to separate each individual record (row), and all character and character string values are enclosed with the appropriate character delimiters. Numeric values are represented by their ASCII equivalent—the period character (.) is used to denote the decimal point (if appropriate); real values are represented with scientific notation (E); negative values are preceded by the minus character (-); and positive values may or may not be preceded by the plus character (+).

For instance, if the comma character is used as the column delimiter, the carriage return/line feed character is used as the row delimiter, and the double quote character is used as the character delimiter, the contents of a delimited ASCII file might look something like this:

10,"Headquarters",860,"Corporate","New York"
15,"Research",150,"Eastern","Boston"
20,"Legal",40,"Eastern","Washington"
38,"Support Center 1",80,"Eastern","Atlanta"
42,"Manufacturing",100,"Midwest","Chicago"
51,"Training Center",34,"Midwest","Dallas"
66,"Support Center 2",112,"Western","San Francisco"
84,"Distribution",290,"Western","Denver"
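As a hedged sketch of how such a file might be produced and then reloaded elsewhere (the table and file names are hypothetical; the comma and double-quote delimiters shown above are the defaults):

db2 "EXPORT TO org.del OF DEL SELECT * FROM org"
db2 "IMPORT FROM org.del OF DEL INSERT INTO org_copy"

Non-default delimiters can be specified with the MODIFIED BY clause (for example, MODIFIED BY coldel; to use a semicolon as the column delimiter).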

Non-Delimited ASCII (ASC)

With the non-delimited ASCII file format, data values have a fixed length, and the position of each value in the file determines which column and row a particular value belongs to.

When data is written to a non-delimited ASCII file, rows are streamed into the file, one after another, and each column's data value is written using a fixed number of bytes. (If a data value is smaller than the fixed length allotted for a particular column, it is padded with blanks.) As with delimited ASCII files, a row delimiter is used to separate each individual record (row); on UNIX systems the newline character (0x0A) is typically used, while on Windows systems the carriage return/linefeed characters (0x0D 0x0A) are used instead. Numeric values are treated the same as when they are stored in delimited ASCII format files.

Thus, a simple non-delimited ASCII file might look something like this:

10Headquarters     860Corporate New York
15Research         150Eastern   Boston
20Legal             40Eastern   Washington
38Support Center 1  80Eastern   Atlanta
42Manufacturing    100Midwest   Chicago
51Training Center   34Midwest   Dallas
66Support Center 2 112Western   San Francisco
84Distribution     290Western   Denver
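Because a non-delimited file carries no delimiters, the utility reading it must be told where each column begins and ends. Here is a hypothetical sketch using the Import utility's METHOD L option (the byte positions and table names are illustrative and assume the fixed column widths shown above):

db2 "IMPORT FROM org.asc OF ASC METHOD L (1 2, 3 19, 20 22, 23 32, 33 50) INSERT INTO org_copy"

Each pair of numbers gives the starting and ending byte position of one column in the input file.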

 

PC Integrated Exchange Format (IXF)

The PC Integrated Exchange Format file format is a special file format that is used almost exclusively to move data between different DB2 databases. Typically, when data is written to a PC Integrated Exchange Format file, rows are streamed into the file, one after another, as an unbroken sequence of variable-length records. Character data values are stored in their original ASCII representation (without additional padding), and numeric values are stored as either packed decimal values or as binary values, depending upon the data type used to store them in the database. Along with data, table definitions and associated index definitions are also stored in PC Integrated Exchange Format files. Thus, tables and any corresponding indexes can be both defined and populated when this file format is used.

Extensible Markup Language (XML)

Extensible Markup Language (XML) is a simple, yet flexible text format that provides a neutral way to exchange data between different devices, systems, and applications. Originally designed to meet the challenges of large-scale electronic publishing, XML is playing an increasingly important role in the exchange of data on the web and throughout companies. XML data is maintained in a self-describing format that is hierarchical in nature. Thus, a very simple XML file might look something like this:

<?xml version="1.0" encoding="UTF-8" ?>
<customerinfo>
  <name>John Doe</name>
  <addr country="United States">
    <street>25 East Creek Drive</street>
    <city>Raleigh</city>
    <state-prov>North Carolina</state-prov>
    <zip-pcode>27603</zip-pcode>
  </addr>
  <phone type="work">919-555-1212</phone>
  <email>john.doe@xyz.com</email>
</customerinfo>

As noted earlier, only IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows can work with XML files.

db2move and db2look

As you might imagine, the Export utility, together with the Import utility or the Load utility, can be used to copy a table from one database to another. These same tools can also be used to move an entire database from one platform to another, one table at a time. But a more efficient way to move an entire DB2 database is by using the db2move utility. This utility queries the system catalog of a specified database and compiles a list of all user tables found. Then it exports the contents and definition of each table found to individual PC Integrated Exchange Format (IXF) formatted files. The set of files produced can then be imported or loaded into another DB2 database on the same system, or they can be transferred to another server and be imported or loaded to a DB2 database residing there.

The db2move utility can be run in one of four different modes: EXPORT, IMPORT, LOAD, or COPY. When run in EXPORT mode, db2move utilizes the Export utility to extract data from a database’s tables and externalize it to a set of files. It also generates a file named db2move.lst that contains the names of all of the tables that were processed, along with the names of the files that each table’s data was written to. The db2move utility may also produce one or more message files containing warning or error messages that were generated as a result of the Export operation.

When run in IMPORT mode, db2move uses the file db2move.lst to establish a link between the PC Integrated Exchange Format (IXF) formatted files needed and the tables into which data is to be populated. It then invokes the Import utility to recreate each table and its associated indexes using information stored in the external files.

And, when run in LOAD mode, db2move invokes the Load utility to populate tables that already exist with data stored in PC Integrated Exchange Format (IXF) formatted files. (LOAD mode should never be used to populate a database that does not already contain table definitions.) Again, the file db2move.lst is used to establish a link between the external files used and the tables into which their data is to be loaded.

Unfortunately, the db2move utility can only be used to move table and index objects. And if the database to be migrated contains other objects such as aliases, views, triggers, user-defined data types (UDTs), user-defined functions (UDFs), and stored procedures, you must duplicate those objects in the target database as well. That’s where the db2look utility comes in handy. When invoked, db2look can reverse-engineer an existing database and produce a set of Data Definition Language (DDL) SQL statements that can then be used to recreate all of the data objects found in the database that was analyzed. The db2look utility can also collect environment registry variable settings, configuration parameter settings, and statistical (RUNSTATS) information, which can be used to duplicate a DB2 environment on another system.
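Putting db2look and db2move together, a cross-platform migration might look something like the sketch below (the database name, file names, and the particular db2look options are illustrative only; always check the options against your environment):

On the source server:
db2look -d sample -e -l -x -o sample_ddl.sql
db2move sample EXPORT

On the target server, after transferring the generated files:
db2 "CREATE DATABASE sample"
db2 "CONNECT TO sample"
db2 -tvf sample_ddl.sql
db2move sample LOAD

The db2look script recreates the table spaces, tables, indexes, and other objects it captured; db2move in LOAD mode then repopulates the tables from the IXF files listed in db2move.lst.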

 

DB2 Direct: A new way of consuming your Database


by Phillip Downey, WW Program Director, IBM Analytics Platform Hybrid Cloud Strategy

 

In DB2 11.1, we introduced two new and easy-to-consume DB2 Direct editions: DB2 Direct Advanced and DB2 Direct Standard. Both editions bring a new dimension to the database offerings for small and large enterprise clients that are looking for the flexibility and scalability of the hybrid cloud. They can be acquired directly online via Passport Advantage and offer a simplified licensing metric and a monthly subscription pricing model that are ideal for private, public, and hybrid cloud deployments.

Packaging

  • DB2 Direct Advanced Edition

The DB2 Direct Advanced Edition has all DB2 Server and Client features from DB2 Advanced Server Edition including encryption, multitenant deployments, adaptive compression, BLU Acceleration, SQL compatibility with PL/SQL, Data Server Manager, pureScale and database partitioning feature options. It also includes federation capabilities providing access to non-DB2 database sources like Oracle, MS SQL, Teradata, Hadoop, Netezza, Spark and other solutions.

Advanced Federation Capabilities

It also includes access to 10 user licenses of InfoSphere Data Architect per installation for designing and deploying database implementations.

  • DB2 Direct Standard Edition

DB2 Direct Standard Edition is modelled on DB2 Workgroup Server Edition. It provides encryption, pureScale for continuously available HA deployments, multitenant deployments, SQL compatibility with PL/SQL, Data Server Manager Base Edition, table partitioning, multidimensional clustering, parallel query, and concurrent connection pooling. It is limited to 16 cores and 128 GB of RAM and is ideal for small to mid-sized database applications, providing enterprise-level availability, query performance, and security, as well as unlimited database size.

You can take advantage of the new subscription model to lower costs and enjoy licensing flexibility for on-premises and cloud deployments:

Licensing Metrics:

Virtual Processor Core (VPC) charge metric

  • Virtual processor core licensing gives you flexible and simplified sub-capacity licensing options that enable you to optimize your licensing to meet your business requirements.
  • There are two licensing scenarios you can apply:
    • License the sum of all available virtual processor cores on all virtual servers the Direct edition is installed on, or
    • When you can identify the physical server and it is more cost-effective to do so, license all available processor cores on the physical server, regardless of the number of virtual machines on the system.
  • Benefits: this makes licensing simple for private and public cloud deployments alike and enables you to optimize your licensing.

Pricing Structure:

Subscription-based pricing

      • DB2 Direct Advanced Edition: $354 USD per month per VPC
      • DB2 Direct Standard Edition: $135 USD per month per VPC

(Prices as of May 10th, 2016 in the United States.)

Each Deployment requires a minimum of 2 VPCs except in the case of Warm standby, which requires only one VPC.

These editions are ideal for customers who want to move to a subscription-based model on their private cloud or with a third-party vendor (host) and pay as their applications grow in size. They are also ideal for ISVs that charge customers for their applications on a subscription model and want an easy-to-order database at competitive subscription pricing.

Understanding the Virtual Processor Core Metric

Virtual Processor Cores are defined to simplify licensing in private or public cloud deployment environments. You can deploy DB2 licenses with confidence even when you are not fully aware of the underlying infrastructure, and you can easily analyze your licensing requirements, including in sub-capacity situations.

A Virtual Processor Core is a Processor Core in an unpartitioned Physical Server, or a virtual core assigned to a Virtual Server.  The Licensee must obtain entitlement for each Virtual Processor Core made available to the Program.

For each Physical Server, the Licensee must have sufficient entitlements for the lesser of

  1. the sum of all available Virtual Processor Cores on all Virtual Servers made available to the Program or
  2. all available Processor Cores on the Physical Server.

Other key Virtual Processor Core considerations for you to understand

    • If the number of VPCs is greater than the physical cores, then you only need to license the number of physical cores on the machine
    • Minimum of 2 VPCs per deployment (1 VPC for idle/warm standby)

You can determine the VPC requirement through DB2 itself by executing the following command on each physical or logical server DB2 is installed on, then taking the online CPU count and dividing it by the HMTDegree value (threading degree) to get the number of virtual CPUs present.

db2pd -osinfo

An example of this in a cloud deployment:

  • A customer buys a virtual cloud server as a service on an internal private cloud or from an MSP such as SoftLayer, Azure, Amazon, or Rackspace.
  • They purchase an 8-core virtual CPU environment.
  • The customer runs db2pd -osinfo on the machine, which shows an HMTDegree of 1 and an OnlineCPU count of 8.

The customer must license 8 VPCs for this environment.

An example of a private cloud deployment using VMware:

  • A customer creates multiple VMware hosts on a server to run DB2. The server is a 2-socket server with 8 cores per processor (16 physical cores) and hyper-threading turned on to a degree of 2. Each of the 11 virtual machines deployed reports 6 virtual processors.
  • The customer runs db2pd -osinfo across all VMware hosts, which reports a total OnlineCPU count of 64 across the 11 virtual machines (HMTDegree of 1 for all VMs).

Because the hardware can be physically identified as a 16-core server, the customer only has to license 16 VPCs, not 64 (as some competitor programs would require), since it is the lesser of the two numbers.

Stay tuned for more information about the enhancements in DB2 11.1. You may also want to attend the upcoming webinar on June 14th to learn how to maximize your data infrastructure investments. Register here: http://bit.ly/v11launchwebcast

 

IBM Insight 2015 – A guide to the DB2 sessions


By Sajan Kuttappa, Marketing Manager, Analytics Platform Services

In just a few weeks, thousands of people will converge on Mandalay Bay in Las Vegas for the much talked-about IBM Insight 2015 conference.

If you are a DB2 professional, an information architect, or a database professional interested in the latest in in-memory technology, DB2 for SAP workloads, and database administration tools, an excellent lineup of sessions by subject matter experts has been planned for you at the Insight conference. This article highlights the topics that will be covered so that you can create your agenda in advance.

IBM DB2 continues to be the best database option for SAP environments. Experts will share DB2 BLU Best Practices for SAP systems and the latest features of DB2 that enable in-memory, high-availability and scalability for SAP. For those interested in new deployment options like Cloud, we recommend sessions covering IBM’s portfolio of Cloud solutions for SAP on DB2 customers. The Hands-on-Labs at the conference will showcase how to best leverage DB2 BLU for SAP Business Warehouse.

Don't miss the many client stories: clients will share how they benefited from DB2's in-memory technology (BLU Acceleration) to enable speed-of-thought analytics for their business users, share their lessons learned and best practices, and talk about enhancements and tips for DB2 LUW and DB2 BLU. If you are planning for increased workloads, look out for the session on scaling up BLU Acceleration in a high-concurrency environment.
Learn about upgrading to Data Server Manager for DB2 to simplify database administration, optimize performance with expert advice, and reduce costs across the enterprise. You can also hear how our clients achieved cost savings and reduced time to market by migrating to DB2 LUW. Also on the menu is a database administration crash course for DB2 LUW that will be conducted by top IBM Champions in the field.

There is a lot that will take place in Las Vegas. A week of high-quality educational sessions, hands-on labs, and panel discussions awaits, so attendees can walk away with better insights into how DB2 integrates into big data analysis, how it delivers in the cloud, and more. We look forward to meeting you in Las Vegas for Insight 2015; and whatever happens in Vegas (at Insight) should definitely not stay in Vegas!

A list of all the sessions can be found at the links below:

DB2 for SAP:   http://bit.ly/db2sapatinsight
Core DB2 for the enterprise: http://bit.ly/db2coreatinsight
DB2 with BLU Acceleration: http://bit.ly/db2bluatinsight
DB2 LUW tools / Administration: http://bit.ly/db2toolsatinsight

So start planning your agenda for Insight 2015.

Follow us on Twitter (@IBM_DB2) and Facebook (IBM DB2) for regular updates about the conference and key sessions.

Continuous availability benefits of pureScale now available in a new low cost DB2 offering

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Today, IBM has announced a set of new add-on offerings for DB2, which includes the IBM DB2 Performance Management Offering, IBM DB2 BLU Acceleration In-Memory Offering, IBM DB2 Encryption Offering, and the IBM DB2 Business Application Continuity Offering. More details on these offerings can be found here. Generally speaking, the intention of these offerings is to make some of the significant capabilities and features of DB2 available as low cost options for those not using the advanced editions of DB2, which already include these capabilities.

If you’ve read any of my past posts you know that I’m a big proponent of DB2’s pureScale technology. And staying true to form, the focus of my post here is on the IBM DB2 Business Application Continuity (BAC) offering, which is a new deployment and licensing model for pureScale. This applies to DB2 10.5 starting with fix pack 5 (the current fix pack level released in December 2014).

For more information on DB2 pureScale itself, I suggest taking a look here and here. But to boil it down to a few major points, it’s an active/active, shared data, clustering solution that provides continuous availability in the event of both planned and unplanned outages. pureScale is available in the DB2 Advanced Workgroup Server Edition (AWSE) and Advanced Enterprise Server Edition (AESE). Its architecture consists of the Cluster Caching Facilities (CF), which provide centralized locking and data page management for the cluster, and DB2 members, which service the database transaction requests from applications. This multi-member architecture allows workloads to scale-out and workload balance across up to 128 members.

While that scale-out capability is attractive to many people, some have told me that they love the availability that pureScale provides but that they don’t have the scalability needs for it. And in this case they can’t justify the cost of the additional software licenses to have this active/active type of environment – or to even move from their current DB2 Workgroup Server Edition (WSE) or Enterprise Server Edition (ESE) licensing up to the corresponding advanced edition that contains pureScale.

This is where BAC comes in. With BAC – which is a purchasable option on top of WSE and ESE – you can create a two member pureScale cluster. The difference, and what makes this offering interesting and attractive for some, is that the cluster can be used in an active/active way, but it’s licensed as an active/passive cluster. Specifically, one member of the cluster is used to run your application workloads and the other member is available as a standby in case that primary member fails or has to be brought down for maintenance. But isn’t that passive? No… and the reason is that this secondary member doesn’t just sit idle waiting for that to happen. Under the BAC offering terms, you are also allowed to run administrative operations on this secondary “admin” member. In fact, you are allowed to do all of the following types of work on this member:

  • Backup, Restore
  • Runstats
  • Reorg
  • Monitoring (including DB2 Explain and any diagnostic or problem determination activities)
  • Execution of DDL
  • Database Manager and database configuration updates
  • Log based capture utilities for the purpose of data capture
  • Security administration and setup

By offloading this administrative work from the primary member, you leave it with more capacity to run your application workloads. And with BAC, you are only fully licensing the one primary member where your applications are running (for either WSE or ESE plus BAC). The licensing of the secondary member, on the other hand, falls under DB2's warm/idle standby licensing, which means a much reduced cost for it (e.g., for PVU pricing the secondary member would only be 100 PVUs of WSE or ESE plus 100 PVUs of BAC). For more details on actual software costs, please talk to your friendly neighborhood IBM rep.

And because this is still pureScale at work here, if there's a failure of the primary member, the application workloads will automatically fail over to the secondary member. Likewise, the database will stay up and remain accessible to applications on the secondary member when the primary member undergoes maintenance, such as during a DB2 fix pack update. In both of these cases the workload is allowed to run on the secondary member, and when the primary member is brought back up, the workloads will fail back to it. All of the great availability characteristics of pureScale at a lower cost!

If you contrast this with something like Oracle RAC One Node, which has some similar characteristics to IBM DB2 BAC, only the primary node (instance) in Oracle RAC One Node is active and the standby node is not. In fact, it’s not even started until the work has to failover, so there’s a period of time where the cluster is completely unavailable. So a longer outage, slower recovery times, and no ability to run administrative work on this idle node like you can do with BAC.

Sounds great, right?

And for those of you that do want the additional scale-out capability, but like the idea of having that standby admin member at a reduced cost, IBM has thought of you too. Using AWSE or AESE (the BAC offering isn’t involved here), you can implement a pureScale cluster with multiple primary members with a single standby admin member. The multiple primary members are each fully licensed for AWSE or AESE, but the single standby admin member is only licensed as a passive server in the cluster (again, using the PVU example that would only be 100 PVUs of either AWSE or AESE). In this case, you can do any of that administrative work previously described on the standby member, and it’s also available for workloads to failover to if there are outages for one or more of the primary members in the cluster.

Happy clustering!

Learn about DB2 at the Kolkata, India DB2 user event

There is nothing more exciting than hearing how to revolutionize your business with DB2 for Linux, UNIX and Windows, so here is your chance to unlock the best practices and learn from the experts. I encourage you to weave this event into your busy schedule this week. I promise you won’t be disappointed!
Join technical experts from TCS, Capgemini, and IBM to learn how to maximize your IT opportunities and master DB2 LUW locking.
During this half-day agenda, you will learn how to make the right decisions for your current and future architecture.

There is NO REGISTRATION FEE to attend this non-IBM event, and LUNCH on the event day will be SPONSORED by IBM.
When: 15 Nov 2014 (Saturday), 9:30 AM to 3:30 PM
Venue: Techno India Campus, Salt Lake, Sector V, Kolkata, India
Who can join: Anyone who is interested in DB2 or working on DB2
How to book your seat: Send an email from your official email ID to kidug.india@gmail.com with the subject line “I will attend”

Leading speakers from: Capgemini, MJunction, TCS, IBM

Mastering the DB2 10.1 Certification Exam – Part 2: Security

It’s hard to argue against the benefits of becoming a DB2 Certified Professional. Aside from gaining a better understanding of DB2, it helps keep you up to date with the latest versions of the product.  It also gives you professional credentials that you can put on your resume to show that you know what you say you know.

But many people are reluctant to put in the time and effort it takes to prepare for the exams. Some just don’t like taking tests, others don’t feel they have the time or money to prepare. That’s where we come in – the DB2 team has put together a great list of resources to help you conquer the certification exams.

We caught up with Anas Mosaad and Mohamed El-Bishbeashy, who are part of the DB2 team that developed the DB2 10.1 Fundamentals Certification Exam 610 Prep, a six-part tutorial series aimed at helping DBAs prepare for the certification exam.

What products are focused on in this tutorial?

In this tutorial we've focused completely on DB2 10.1 LUW.

Tell us a little about what students can hope to learn about in this tutorial?

It is the second in a series of six tutorials designed to help you prepare for the DB2 Fundamentals Exam (610). It puts in your hands all the details needed to successfully answer security-related questions in the exam. It introduces the concepts of authentication, authorization, privileges, and roles as they relate to DB2 10.1. It also introduces granular access control and trusted contexts.

Why should a DBA be interested in this certification?

IBM professional certifications are recognized worldwide, so you will get recognized! In addition, this one is the first milestone in the advanced DB2 certification paths (development, DBA, and advanced DBA). It acknowledges that you are knowledgeable about the fundamental concepts of DB2 10.1. It shows that you have an in-depth knowledge of the basic to intermediate tasks required in day-to-day administration, know basic SQL (Structured Query Language), understand which additional products are available with DB2 10.1, understand how to create databases and database objects, and have a basic knowledge of database security and transaction isolation.

Do you have any special tips?

Absolutely, here are a few of our favorite tips for preparing for the certification exam:

  • Practice with DB2
  • If you don’t have access to DB2, download the fully functional DB2 Express-C for free
  • Read the whole tutorial before taking the exam
  • Be a friend of DB2 Knowledge Center (formerly infocenter)
  • When in doubt, don’t hesitate, post and collaborate in the forums.

For more information:

DB2 10.1 fundamentals certification exam 610 prep, Part 2: DB2 security

The entire series of tutorials for Exam 610 DB2 Fundamentals includes the following:
Part 1: Planning
Part 2: DB2 security
Part 3: Working with databases and database objects
Part 4: Working with DB2 Data using SQL
Part 5: Working with tables, views, and indexes
Part 6: Data concurrency

About the authors:
Anas Mosaad, a DB2 solutions migration consultant with IBM Egypt, has more than eight years of experience in the software development industry. He is a member of IBM's Information Management Technology Ecosystem Team focusing on enabling and porting customer, business partner, and ISV solutions to the IBM Information Management portfolio, which includes DB2, Netezza, and BigInsights. Anas' expertise includes portal and J2EE, database design, tuning, and database application development.

Mohamed El-Bishbeashy is an IM specialist for IBM Cairo Technology Development Center (C-TDC), Software Group. He has 12+ years of experience in the software development industry (8 of those are with IBM). His technical experience includes application and product development, DB2 administration, and persistence layer design and development. Mohamed is an IBM Certified Advanced DBA and IBM Certified Application Developer. He also has experience in other IM areas including PureData Systems for Analytics (Netezza), BigInsights, and InfoSphere Information Server.

Balluff loves BLU Acceleration too

By Cassandra Desens
IBM Software Group, Information Management  

BLU Acceleration is a pretty darn exciting advancement in database technology. As a marketing professional, I can tell you why it's cool:
BLU provides instant insight from real-time operational data,
BLU provides breakthrough performance without the constraints of other in-memory solutions,
BLU provides simplicity with a load-and-go setup,
etcetera, etcetera… you get the point.

You can read our brochures and watch our videos to hear how DB2 with BLU Acceleration will transform your business. We think it's the best thing since sliced bread because we invented it. But is it all it's cracked up to be? The answer is YES.

Clients all over the world are sharing how BLU Acceleration made a huge, positive difference to their business. Hearing customer stories puts our product claims into perspective. Success stories give us the ultimate answer to the elusive question, "How does this relate to me and my business?" That is why I want to share with you one of our most recent stories: Balluff.

Balluff is a worldwide company with headquarters in Germany. They have over 50 years of sensor experience and are considered a world leader and one of the most efficient manufacturers of sensor technology.  Balluff relies on SAP solutions to manage their business, including SAP Business Warehouse for their data analysis and reporting.

Over the last few years, Balluff experienced significant growth, which resulted in slowed data delivery. As Bernhard Herzog, Team Manager Information Technology SAP at Balluff, put it: "Without timely, accurate information we risked making poor investment decisions, and were unable to deliver the best possible service to our customers."

The company sought a solution that would transform the speed and reliability of their information management system. They chose DB2 with BLU Acceleration to accelerate access to their enormous amount of data. With BLU Acceleration Balluff achieved:

  • Reduced reporting time for individual reports by up to 98%
  • Reduced backup data volumes by 30%
  • Improved batch-mode data processing by 25%
  • A swift transition with no customization needed; Balluff transferred 1.5 terabytes of data within 17 hours with no downtime

These improvements have a direct impact on their business. As Bernhard Herzog put it, “Today, sales staff have immediate access to real-time information about customer turnover and other important indicators. With faster access to key business data, sales managers at Balluff can gain a better overview, sales reps can improve customer service and the company can increase sales”.

Impressive, right? While you could argue it's not sliced bread, it certainly is a technology that is revolutionizing reporting and analytics, and it is worth a try. Click here for more information about DB2 with BLU Acceleration and to take it for a test drive.

_________________________________________________________________

For the full success story, click here to read the Balluff IBM Case Study
You can also click here to read Balluff’s success as told by ComputerWoche (Computer World Germany). Open in Google Chrome for a translation option.

Exclusive Opportunity to Influence IBM Product Usability: Looking for Participants for Usability Test Sessions – Data Warehousing and Analytics

By Arno C. Huang, CPE
Designer, IBM Information Management Design
Designer, IBM Information Management Design
The IBM Design team is seeking people with a variety of database, data warehousing, and analytics backgrounds to participate in usability test sessions. We are currently looking for people who work in one of the following roles: DBA, architect, data scientist, business analyst, or developer. As a test participant, you will provide your feedback about current or future designs we are considering, thus making an impact on the design of an IBM product and letting us know what is important to you.

Participating in a study typically consists of a web conference or on-site meeting scheduled around your availability. IBM will provide you with an honorarium for your participation. There are several upcoming sessions, so if you’re interested, we’ll help you find a session that best suits your schedule. If you are interested, please contact Arno C. Huang at achuang@us.ibm.com

Troubles Are Out of Reach With Instant Insights

By Radha Gowda
Technical Marketing, IBM Analytics

Bet you have been hearing a lot about shadow tables in the DB2 "Cancun Release" lately. Umm… do shadow and Cancun remind you of On the Beach by Cliff Richard and the Shadows? Seriously, DB2 shadow tables can make you dance rock 'n' roll on the beach, because you will be trouble free with real-time insights into your operations and, of course, lots of free time.

What is a shadow table?

Shadow tables have been around since the beginning of modern computing, primarily for improving performance. So what does the DB2 shadow table offer? The best of both the OLTP and OLAP worlds! You can now run your analytic reports directly in the OLTP environment with better performance.

Typically, organizations have separate OLTP and OLAP environments, either due to resource constraints or to ensure the best OLTP performance. The front-end OLTP workload is characterized by very small but high-volume transactions, and indexes are created to improve their performance. In contrast, the back-end OLAP workload has long-running, complex transactions that are relatively small in number; indexes are created, but they may be different from the OLTP indexes. Of course, an ETL operation must transfer data from the OLTP database to the OLAP data mart or warehouse at intervals that may vary from minutes to days.

DB2 can help you simplify your infrastructure and operations with shadow tables. A shadow table is a column-organized copy of a row-organized table within the OLTP environment, and it may include all or a subset of the source table's columns. Because the table is column organized, you get the enhanced performance that BLU Acceleration provides for analytic queries.

How do shadow tables work?


A shadow table is implemented as a materialized query table (MQT) that is maintained by replication. IBM InfoSphere Change Data Capture for DB2, available in the advanced editions, maintains shadow tables through automatic and incremental synchronization of the row-organized tables.

While all applications access the row-organized table by default, the DB2 optimizer performs latency-based routing to determine whether a query should be routed to the shadow table or to the row-organized source.
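To make this concrete, here is a minimal sketch of what a shadow table definition generally looks like (the SALES table and its columns are hypothetical; the InfoSphere CDC subscription that actually keeps the shadow table synchronized is configured separately):

CREATE TABLE sales_shadow AS
   (SELECT sale_id, region, amount FROM sales)
   DATA INITIALLY DEFERRED REFRESH DEFERRED
   ENABLE QUERY OPTIMIZATION
   MAINTAINED BY REPLICATION
   ORGANIZE BY COLUMN;

SET INTEGRITY FOR sales_shadow ALL IMMEDIATE UNCHECKED;

Whether the optimizer actually routes a given query to the shadow table also depends on session settings such as the CURRENT REFRESH AGE special register, which defines how much replication latency is acceptable.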

A truly flexible and trouble-free OLTP world

Shadow tables offer the incredible speed you have come to expect from BLU Acceleration while the source tables remain row-organized to best suit OLTP operations.  In fact, with shadow tables, the performance of analytical queries can improve by 10x or more, with equal or greater transactional performance*.

With instant insight into "as it happens" data for all your questions, and all the free time you'll have with no more indexing and tuning, what's not to like? Try DB2 today.

* Based on internal IBM testing of sample transactional and analytic workloads by replacing 4 secondary analytical indexes in the transactional environment with BLU Shadow Tables. Performance improvement figures are cumulative of all queries in the workload. Individual results will vary depending on individual workloads, configurations and conditions.
