IBM DB2 sessions at IBM Insight at World of Watson conference

As organizations develop next-generation applications for the digital era, many are using cognitive computing ushered in by IBM Watson technology. To make the most of these next-generation applications, you need a next-generation database that can handle a massive volume of data while delivering high performance to support real-time analytics. At the same time, it must provide data availability for demanding applications, scalability for growth, and flexibility for responding to change.

IBM DB2 enables you to meet these challenges by providing enterprise-class scalability while also leveraging adaptive in-memory BLU Acceleration technology to support the analytics needs of your business. DB2 also handles structured and semi-structured data from a variety of sources to provide deep insight. With the ability to support thousands of terabytes, you can use historical and current data to identify trends and make sound decisions. DB2 11.1, the new release announced earlier this year, comes packed with enhancements for BLU Acceleration, OLTP, pureScale, security, SQL, and more!

Whether you are interested in an overview of the improvements available with the new release or an in-depth understanding of the new enhancements, IBM World of Watson is the place to be. The IBM Insight conference is now part of IBM World of Watson 2016 on October 24-27 and continues to be the premier industry event for data and analytics professionals, delivering unmatched value and exciting onsite opportunities to connect with peers, hear from thought leaders, experience engaging content, and receive training and certification. This article highlights the key DB2 sessions at the IBM World of Watson conference.

We will start with Session #3483 by Matt Huras, IBM DB2 Architect, who will provide a technical overview of the new release and the value the new features provide for your installations. We also have the following sessions that provide deeper coverage of the enhancements available with the new release:

  • DB2 11.1 includes significant enhancements in the area of availability, particularly around the pureScale feature. You can attend Session #1433, “The Latest and Greatest on Availability and pureScale in DB2 11.1,” to learn about these enhancements, including simplification of deployment, new operating system and virtualization options, HADR updates, and improvements in the areas of management and multitenancy.
  • DB2 11.1 packs several enhancements to protect your data, whether it is on premises or in the cloud. Do look out for Session #1038, “DB2 Security: From the Data Center to the Cloud,” which provides an overview of the various security mechanisms available with the latest version of DB2 for Linux, UNIX, and Windows, and introduces several considerations to keep in mind if you plan on moving your DB2 database environment from the data center to the cloud.
  • There is a lot of talk about in-memory computing and columnar multi-partitioned databases to improve analytic query performance. DB2 11.1 brings MPP scale to BLU! If you need a detailed step-by-step approach to implementing the newest version of DB2, come learn about often overlooked but very important best practices to understand before and after upgrading by attending Session #1290, “Upgrading to DB2 with the Latest Version of BLU Acceleration.”
  • DB2 11.1 is the foundation for hybrid cloud database deployments. In addition to being available to install on cloud-based infrastructure, it is also the foundation of the DB2 on Cloud and dashDB cloud data service offerings. Attend Session #1444, “Hybrid Cloud Data Management with DB2 and dashDB,” to learn more about these different options and when you’d want to choose one over another.
  • If you are deploying DB2 for SAP applications, we have lined up Session #2629 by SAP and IBM experts, “IBM DB2 on SAP – V11.1 Update and Recent Developments.” In this session, we will give an overview of recent SAP on DB2 extensions and which DB2 V11.1 features are most important for SAP applications. One of our clients, BCBS of TN, will also share their experiences with DB2 V11.1 around analytics and the benefits that they’ve seen.

Our clients Nordea Group and Argonne National Laboratory will also share their experiences with deploying IBM Data Server Manager. The hands-on lab HOL 1766B, “DB2 High Availability and Disaster Recovery with Single or Multiple Standby Databases,” lets you configure and manage a production database with single or multiple standby databases using DB2 HADR facilities.

If you are a new user of DB2, you can also read this guide to the introductory DB2 sessions. Whether you are determining your next move or optimizing your existing investments in data and analytics capabilities, the IBM World of Watson 2016 conference is the place for you. This is your opportunity to get the training, answers, certifications, and insights you need to be at the top of your game. If you have not yet registered for the conference, we suggest you visit this link and register.

IBM DB2 – the database for the cognitive era at IBM World of Watson 2016

IBM Insight, the premier data, analytics, and cognitive IBM conference, is now part of IBM World of Watson 2016, to be held in Las Vegas from October 24-27. This year attendees will be able to experience first-hand a world of cognitive capabilities that IBM has been at the forefront of. World of Watson incorporates the kind of information you gained from IBM Insight, the tools and best practices to manage your data, and raises the game. You’ll also see how Watson’s capabilities give you a broad view of your business, its competitive landscape, and what it takes to make your customers act. Our CEO, Ginni Rometty, will deliver a keynote at this year’s conference. And on the evening of October 26th, our special event will feature Grammy winner Imagine Dragons.

Whether you’re a beginner or a seasoned DB2 professional, there is a treasure trove of information that you could walk away with. IBM experts and your peer speakers will share information about migration guidelines, new features of recent releases, implementation experiences, and much more. Likewise, our hands-on-labs (HOL) complement these topics to further enrich the experience.

For users new to DB2, we recommend attending session 3585 on “DB2 v11.1 Fundamentals” by Roger Sanders. This presentation will provide a great overview of DB2 for Linux, UNIX and Windows. It will take attendees through the concepts covered on the DB2 11.1 Fundamentals certification exam: planning, security, working with databases and data objects, using SQL, and data concurrency. It will also provide a brief introduction to other DB2-based offerings like DB2 on Cloud and dashDB.

IBM provides a number of database options for organizations that would like to deploy applications on the cloud, whether in a fully managed or a hosted environment. IBM dashDB for Transactions provides a fully managed database service in the cloud that is optimized for online transaction processing workloads. DB2 on Cloud is a hosted service that offers the agility of cloud deployment and the management control you enjoy with on-premises software.

  • If you would like to understand the capabilities of the dashDB for Transactions offering, consider attending session 3471 on “dashDB for Transactions: Fully Managed and Truly Awesome,” where we will discuss key features of this enterprise-class service and its design and implementation for availability and performance.
  • The DB2 on Cloud offering gives you everything you know and love about DB2 for Linux, UNIX and Windows software in a cloud environment hosted by IBM. You still have full DBA control to customize the database. You can rapidly provision it for instant productivity. And the monthly subscription-based licensing makes it easier to predict and control costs. As with any OLTP database supporting your critical applications, high availability and disaster recovery concerns are top of mind. We have lined up a session (Session #1439) that will help you understand how to “Implement High Availability and Disaster Recovery for DB2 on the Cloud.”

You can learn how to further optimize DB2 performance with management tools like IBM Data Server Manager. The hands-on lab 3141A, “Secrets of the Pros: Using Data Server Manager to Monitor, Manage, and Mitigate Performance Problems,” will teach you how to use the latest version of IBM Data Server Manager to diagnose and resolve performance problems.

We hope that you can take advantage of these sessions by attending the World of Watson conference. Stay tuned for our next article on sessions for “Intermediate” skill sets and “Advanced” users.

We look forward to seeing you in Vegas. If you have not yet registered, please visit this link for more details.

The value of common database tools and linked processes for Db2, DevOps, and Cloud


by Michael Connor, Analytics Offering Management

Today we released DB2 V11 for Linux, UNIX and Windows. The release includes updates to Data Server Manager (DSM) V2.1, Data Server Driver connectivity V11, and the Advanced Recovery Feature (ARF) V11. As many of you may be aware, two years ago we embarked on a strategy to completely rethink our tooling. The market was telling us we needed to focus on a simplified user experience, a web console addressing both the power and casual user roles, and deep database support for production applications. In March 2015, we delivered our first iteration of Data Server Manager as part of DB2 10.5. This year we have again extended the capabilities of this valuable platform and, in addition, extended support across a number of IBM data stores, including DB2, dashDB, DB2 on Cloud, and BigInsights.

First, let’s talk about some of the drivers we hear related to database delivery.

  1. The line-of-business (LOB) and LOB developer communities want access to mission-critical data and want to extend that data through new customer-facing OLTP applications.
  2. Business analysts are using more data than ever in generating and enhancing customer value through analytic applications.
  3. These new roles need on-demand access to data across all aspects of the delivery lifecycle, from idea inception to production delivery and support.
  4. While timelines are shortened, data volumes expanded, and the lifecycle sped up, quality cannot suffer.

Therefore, the DBA, development, testing, and production support roles are now participating in activities known as Continuous Delivery, Continuous Testing, and DevOps, with the goal of improving customer service and decreasing cycle and delivery times without decreasing quality.

Some areas that are addressed by our broader solutions for Continuous Delivery, Continuous Testing, and DevOps include:

  • High Performance Unload of production data and selective test data environment restore with DB2 Recovery Expert.
  • Simplified test data management addressing discovery, subsetting, masking, and refresh with Test Data Management.
  • Automated driving of application test and performance-based workloads with Rational Functional Tester and Rational Performance Tester.
  • Release management and deployment automation with IBM UrbanCode.

And finally, areas improved with our latest DB2 releases:

  • SQL Development and execution with Data Server Manager
  • Test and Deployment Data Server Monitoring with Data Server Manager
  • SQL capture and analysis with Data Server Manager
  • Client and application Data Access, Workload and Failover management with Data Server Drivers

The benefits of adopting a continuous delivery solution include reduced cycle times, lower risk of failure, improved application performance, and reduced risk of downtime.

With the V11 releases, we have delivered enhancements including:

  • DSM: DB2 LUW V11 support and monitoring improvements for pureScale applications, extended query history analysis
  • ARF: DB2 LUW V11 support and improvements for Analytics usage with BLU Acceleration
  • DS Driver (also DB2 Connect): manageability improvements, performance enhancements, and extended driver support, now including Mac applications.

Many of the improvements noted above are also available for dashDB Local, our private cloud offering currently in preview, which leverages DSM as an integral component of its dashboard, and for our public cloud offering, DB2 on Cloud.

Read the announcement for further information:

Also check out the DB2 LUW Landing Page:


Blogger: Michael Connor, with Analytics Offering Management, joined IBM in 2001 and focused early in his IBM career on launching the z/OS development tooling business centered on Rational Developer for z. Since moving to Analytics in 2013, Michael has led the team responsible for Core Database Tooling.

Migrating a DB2 database from a Big Endian environment to a Little Endian environment


By Roger Sanders, DB2 for LUW Offering Manager, IBM

What Is Big-Endian and Little-Endian?

Big-endian and little-endian are terms that are used to describe the order in which a sequence of bytes are stored in computer memory, and if desired, are written to disk. (Interestingly, the terms come from Jonathan Swift’s Gulliver’s Travels where the Big Endians were a political faction who broke their boiled eggs on the larger end, defying the Emperor’s edict that all eggs be broken on the smaller end; the Little Endians were the Lilliputians who complied with the Emperor’s law.)

Specifically, big-endian refers to the order where the most significant byte (MSB) in a sequence (i.e., the “big end”) is stored at the lowest memory address and the remaining bytes follow in decreasing order of significance. Figure 1 illustrates how a 32-bit integer would be stored if the big-endian byte order is used.

Figure 1. Big-endian byte order

For people who are accustomed to reading from left to right, big-endian seems like a natural way to store a string of characters or numbers; since data is stored in the order in which it would normally be presented, programmers can easily read and translate octal or hexadecimal data dumps. Another advantage of using big-endian storage is that the size of a number can be more easily estimated because the most significant digit comes first. It is also easy to tell whether a number is positive or negative: this information can be obtained by examining the bit at offset 0 of the byte stored at the lowest memory address.

Little-endian, on the other hand, refers to the order where the least significant byte (LSB) in a sequence (i.e., the “little end”) is stored at the lowest memory address and the remaining bytes follow in increasing order of significance. Figure 2 illustrates how the same 32-bit integer presented earlier would be stored if the little-endian byte order were used.

Figure 2. Little-endian byte order

One argument for using the little-endian byte order is that the same value can be read from memory, at different lengths, without having to change addresses—in other words, the address of a value in memory remains the same, regardless of whether a 32-bit, 16-bit, or 8-bit value is read. For instance, the number 12 could be read as a 32-bit integer or an 8-bit character, simply by changing the fetch instruction used. Consequently, mathematical functions involving multiple precisions are much easier to write.

Little-endian byte ordering also aids in the addition and subtraction of multi-byte numbers. When performing such operations, the computer must start with the least significant byte to see if there is a carry to a more significant byte—much like an individual will start with the rightmost digit when doing longhand addition to allow for any carryovers that may take place. By fetching bytes sequentially from memory, starting with the least significant byte, the computer can start doing the necessary arithmetic while the remaining bytes are read. This parallelism results in better performance; if the system had to wait until all bytes were fetched from memory, or fetch them in reverse order (which would be the case with big-endian), the operation would take longer.

IBM mainframes and most RISC-based computers (such as IBM Power Systems, Hewlett-Packard PA-RISC servers, and Oracle SPARC servers) utilize big-endian byte ordering. Computers with Intel and AMD processors (CPUs) use little-endian byte ordering instead.

It is important to note that regardless of whether big-endian or little-endian byte ordering is used, the bits within each byte are usually stored as big-endian. That is, there is no attempt to reverse the order of the bit stream that is represented by a single byte. So, whether the hexadecimal value ‘CD’ for example, is stored at the lowest memory address or the highest memory address, the bit order for the byte will always be: 1100 1101
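If you are unsure which byte order a particular server uses, the operating system can usually tell you directly. For example, on a Linux system (a quick sketch; the command applies to Linux only, and the output shown is illustrative):

lscpu | grep "Byte Order"
Byte Order:            Little Endian

On a big-endian platform, the same command reports Big Endian instead.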

Moving a DB2 Database To a System With a Different Endian Format

One of the easiest ways to move a DB2 database from one platform to another is by creating a full, offline backup image of the database to be moved and restoring that image onto the new platform. However, this process can only be used if the endianness of the source and target platform is the same. A change in endian format requires a complete unload and reload of the database, which can be done using the DB2 data movement utilities. Replication-based technologies like SQL Replication, Q Replication, and Change Data Capture (CDC), which transform log records into SQL statements that can be applied to a target database, can be used for these types of migrations as well. On the other hand, DB2 High Availability Disaster Recovery (HADR) cannot be used because HADR replicates the internal format of the data thereby maintaining the underlying endian format.
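For the same-endian case, the backup-and-restore approach can be as simple as the following sketch, where the database name (sample) and the backup path are placeholders; the backup image must be copied to the target server before the restore is performed there:

db2 backup database sample to /backups
db2 restore database sample from /backups

If the endianness of the two platforms differs, the restore operation will fail, and one of the unload/reload or replication approaches described above must be used instead.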

The DB2 Data Movement Utilities (and the File Formats They Support)

DB2 comes equipped with several utilities that can be used to transfer data between databases and external files (a brief command-line sketch follows the list below). This set of utilities consists of:

  • The Export utility: Extracts data from a database using an SQL query or an XQuery statement, and copies that information to an external file.
  • The Import utility: Copies data from an external file to a table, hierarchy, view, or nickname using INSERT SQL statements. If the object receiving the data is already populated, the input data can either replace or be appended to the existing data.
  • The Load utility: Efficiently moves large quantities of data from an external file, named pipe, device, or cursor into a target table. The Load utility is faster than the Import utility because it writes formatted pages directly into the database, instead of performing multiple INSERT operations.
  • The Ingest utility: A high-speed, client-side utility that streams data from files and named pipes into target tables.
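As a rough sketch of how the Export, Import, and Load utilities are typically invoked from the DB2 command line (the table name org and the file name org.ixf are placeholders, and a connection to the database is assumed):

db2 "export to org.ixf of ixf select * from org"
db2 "import from org.ixf of ixf create into org"
db2 "load from org.ixf of ixf insert into org"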

Along with these built-in utilities, IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows, an add-on tool that must be purchased separately, can be used to rapidly unload, extract, and repartition data in a DB2 database. Designed to improve data availability, mitigate risk, and accelerate database migrations, this tool helps DBAs work with very large quantities of data with less effort and faster results.

Regardless of which utility is used, data can only be written to or read from files that utilize one of the following formats:

  • Delimited ASCII
  • Non-delimited or fixed-length ASCII
  • PC Integrated Exchange Format
  • Extensible Markup Language (IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows only.)

Delimited ASCII (DEL)

The delimited ASCII file format is used by a wide variety of software applications to exchange data. With this format, data values typically vary in length, and a delimiter, which is a unique character not found in the data values themselves, is used to separate individual values and rows. Actually, delimited ASCII format files typically use three distinct delimiters:

  • Column delimiters. Characters that are used to mark the beginning or end of a data value. Commas (,) are typically used as column delimiter characters.
  • Row delimiters. Characters that are used to mark the end of a single record or row. On UNIX systems, the new line character (0x0A) is typically used as the row delimiter; on Windows systems, the carriage return/linefeed characters (0x0D 0x0A) are normally used instead.
  • Character delimiters. Characters that are used to mark the beginning and end of character data values. Single quotes (‘) and double quotes (“) are typically used as character delimiter characters.

Typically, when data is written to a delimited ASCII file, rows are streamed into the file, one after another. The appropriate column delimiter is used to separate each column’s data values, the appropriate row delimiter is used to separate each individual record (row), and all character and character string values are enclosed with the appropriate character delimiters. Numeric values are represented by their ASCII equivalent—the period character (.) is used to denote the decimal point (if appropriate); real values are represented with scientific notation (E); negative values are preceded by the minus character (-); and positive values may or may not be preceded by the plus character (+).

For instance, if the comma character is used as the column delimiter, the carriage return/line feed character is used as the row delimiter, and the double quote character is used as the character delimiter, the contents of a delimited ASCII file might look something like this:

10,"Headquarters",860,"Corporate","New York"

38,"Support Center 1",80,"Eastern","Atlanta"

51,"Training Center",34,"Midwest","Dallas"

66,"Support Center 2",112,"Western","San Francisco"

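The Export utility produces files in this format when the DEL file type is specified, and the default delimiters can be overridden with file type modifiers. For example, to use a vertical bar rather than a comma as the column delimiter (a sketch; the table and file names are placeholders):

db2 "export to org.del of del modified by coldel| select * from org"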

Non-Delimited ASCII (ASC)

With the non-delimited ASCII file format, data values have a fixed length, and the position of each value in the file determines which column and row a particular value belongs to.

When data is written to a non-delimited ASCII file, rows are streamed into the file, one after another, and each column’s data value is written using a fixed number of bytes. (If a data value is smaller than the fixed length allotted for a particular column, it is padded with blanks.) As with delimited ASCII files, a row delimiter is used to separate each individual record (row): on UNIX systems the new line character (0x0A) is typically used; on Windows systems, the carriage return/linefeed characters (0x0D 0x0A) are used instead. Numeric values are treated the same as when they are stored in delimited ASCII format files.

Thus, a simple non-delimited ASCII file might look something like this:

10Headquarters     860Corporate New York

15Research         150Eastern   Boston

20Legal             40Eastern   Washington

38Support Center   180Eastern   Atlanta

42Manufacturing    100Midwest   Chicago

51Training Center   34Midwest   Dallas

66Support Center   211Western   San Francisco

84Distribution     290Western   Denver

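When a non-delimited ASCII file is imported or loaded, the fixed column positions are supplied with the METHOD L clause. The following sketch matches the sample layout shown above; the positions, table name, and file name are illustrative only:

db2 "import from org.asc of asc method l (1 2, 3 19, 20 22, 23 32, 33 45) insert into org"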

PC Integrated Exchange Format (IXF)

The PC Integrated Exchange Format file format is a special file format that is used almost exclusively to move data between different DB2 databases. Typically, when data is written to a PC Integrated Exchange Format file, rows are streamed into the file, one after another, as an unbroken sequence of variable-length records. Character data values are stored in their original ASCII representation (without additional padding), and numeric values are stored as either packed decimal values or as binary values, depending upon the data type used to store them in the database. Along with data, table definitions and associated index definitions are also stored in PC Integrated Exchange Format files. Thus, tables and any corresponding indexes can be both defined and populated when this file format is used.

Extensible Markup Language (XML)

Extensible Markup Language (XML) is a simple, yet flexible text format that provides a neutral way to exchange data between different devices, systems, and applications. Originally designed to meet the challenges of large-scale electronic publishing, XML is playing an increasingly important role in the exchange of data on the web and throughout companies. XML data is maintained in a self-describing format that is hierarchical in nature. Thus, a very simple XML file might look something like this:

<?xml version="1.0" encoding="UTF-8" ?>
<customerinfo>
   <name>John Doe</name>
   <addr country="United States">
      <street>25 East Creek Drive</street>
      <state-prov>North Carolina</state-prov>
   </addr>
   <phone type="work">919-555-1212</phone>
</customerinfo>
As noted earlier, only IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows can work with XML files.

db2move and db2look

As you might imagine, the Export utility, together with the Import utility or the Load utility, can be used to copy a table from one database to another. These same tools can also be used to move an entire database from one platform to another, one table at a time. But a more efficient way to move an entire DB2 database is by using the db2move utility. This utility queries the system catalog of a specified database and compiles a list of all user tables found. Then it exports the contents and definition of each table found to individual PC Integrated Exchange Format (IXF) formatted files. The set of files produced can then be imported or loaded into another DB2 database on the same system, or they can be transferred to another server and be imported or loaded to a DB2 database residing there.

The db2move utility can be run in one of four different modes: EXPORT, IMPORT, LOAD, or COPY. When run in EXPORT mode, db2move utilizes the Export utility to extract data from a database’s tables and externalize it to a set of files. It also generates a file named db2move.lst that contains the names of all of the tables that were processed, along with the names of the files that each table’s data was written to. The db2move utility may also produce one or more message files containing warning or error messages that were generated as a result of the Export operation.

When run in IMPORT mode, db2move uses the file db2move.lst to establish a link between the PC Integrated Exchange Format (IXF) formatted files needed and the tables into which data is to be populated. It then invokes the Import utility to recreate each table and its associated indexes using information stored in the external files.

And, when run in LOAD mode, db2move invokes the Load utility to populate tables that already exist with data stored in PC Integrated Exchange Format (IXF) formatted files. (LOAD mode should never be used to populate a database that does not already contain table definitions.) Again, the file db2move.lst is used to establish a link between the external files used and the tables into which their data is to be loaded.

Unfortunately, the db2move utility can only be used to move table and index objects. And if the database to be migrated contains other objects such as aliases, views, triggers, user-defined data types (UDTs), user-defined functions (UDFs), and stored procedures, you must duplicate those objects in the target database as well. That’s where the db2look utility comes in handy. When invoked, db2look can reverse-engineer an existing database and produce a set of Data Definition Language (DDL) SQL statements that can then be used to recreate all of the data objects found in the database that was analyzed. The db2look utility can also collect environment registry variable settings, configuration parameter settings, and statistical (RUNSTATS) information, which can be used to duplicate a DB2 environment on another system.
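Putting db2look and db2move together, a cross-endian migration might look roughly like the following sketch; the database name (sample) and file names are placeholders, and the target database is assumed to already exist. On the source server:

db2look -d sample -e -o sample_ddl.sql
db2move sample EXPORT

Then, after copying sample_ddl.sql, db2move.lst, and the exported IXF files to the target server:

db2 -tvf sample_ddl.sql
db2move sample LOAD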


DB2 Direct: A new way of consuming your Database


by Phillip Downey, WW Program Director, IBM Analytics Platform Hybrid Cloud Strategy


In DB2 11.1, we introduced two new, easy-to-consume DB2 Direct editions: DB2 Direct Advanced and DB2 Direct Standard. Both editions bring a new dimension to the database offerings for small and large enterprise clients that are looking for the flexibility and scalability of the hybrid cloud. They can be acquired directly online via Passport Advantage and offer a simplified licensing metric and a monthly subscription pricing model that is ideal for private, public, and hybrid cloud deployments.


DB2 Direct Advanced Edition

The DB2 Direct Advanced Edition has all DB2 Server and Client features from DB2 Advanced Server Edition including encryption, multitenant deployments, adaptive compression, BLU Acceleration, SQL compatibility with PL/SQL, Data Server Manager, pureScale and database partitioning feature options. It also includes federation capabilities providing access to non-DB2 database sources like Oracle, MS SQL, Teradata, Hadoop, Netezza, Spark and other solutions.

Advanced Federation Capabilities







It also includes access to 10 user licenses of InfoSphere Data Architect per installation for designing and deploying database implementations.

DB2 Direct Standard Edition

DB2 Direct Standard Edition is modelled on DB2 Workgroup Edition. It provides encryption, pureScale for continuously available HA deployments, multitenant deployments, SQL compatibility with PL/SQL, Data Server Manager Base Edition, table partitioning, multidimensional clustering, parallel query, and concurrent connection pooling. It is limited to 16 cores and 128 GB of RAM, and it is ideal for small to mid-sized database applications, providing enterprise-level availability, query performance, and security, as well as unlimited database size.

You can take advantage of the new subscription model to lower costs and enjoy licensing flexibility for on-premises and cloud deployments:

Licensing Metrics:

Virtual Processor Core (VPC) charge metric

  • Virtual processor core licensing gives you flexible and simplified sub-capacity licensing options that enable you to optimize your licensing to meet your business requirements.
  • There are two licensing scenarios you can apply:
    • License the sum of all available virtual processor cores on all virtual servers on which the Direct edition is installed, or
    • When you can identify the physical server and it is more cost-effective to do so, license all available processor cores on the physical server, regardless of the number of virtual machines on the system.
  • Benefits: This makes licensing simple for private and public cloud deployments alike and enables you to optimize your licensing.

Pricing Structure:

Subscription-based pricing

      • DB2 Direct Advanced: $354 USD per month per VPC
      • DB2 Direct Standard: $135 USD per month per VPC

(Prices as of May 10th, 2016 in the United States.)

Each deployment requires a minimum of 2 VPCs, except in the case of warm standby, which requires only one VPC.

These editions are ideal for customers who want to move to a subscription-based model on their private cloud or with a third-party vendor (host) and pay as their applications grow in size. They are also ideal for ISVs who offer their applications to customers on a subscription model and want an easy-to-order database at competitive subscription pricing.

Understanding the Virtual Processor Core Metric

Virtual Processor Cores are defined to simplify licensing in private or public cloud deployment environments. You can deploy DB2 licenses with confidence even when you are not fully aware of the underlying infrastructure, and customers can easily analyze their licensing requirements, including in sub-capacity situations.

A Virtual Processor Core is a Processor Core in an unpartitioned Physical Server, or a virtual core assigned to a Virtual Server.  The Licensee must obtain entitlement for each Virtual Processor Core made available to the Program.

For each Physical Server, the Licensee must have sufficient entitlements for the lesser of

  1. the sum of all available Virtual Processor Cores on all Virtual Servers made available to the Program or
  2. all available Processor Cores on the Physical Server.

Other key Virtual Processor Core considerations for you to understand:

    • If the number of VPCs is greater than the number of physical cores, then you only need to license the number of physical cores on the machine
    • Minimum of 2 VPCs per deployment (1 VPC for idle/warm standby)

You can determine the VPC requirement through DB2 itself by executing the following command on each physical or virtual server where DB2 is installed, then taking the OnlineCPU count and dividing it by the HMTDegree value (threading degree) to get the number of virtual CPUs present.

db2pd -osinfo
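For illustration only, the relevant portion of the db2pd -osinfo output might look something like this (the figures are invented, and the exact layout varies by platform and DB2 version):

CPU Information:
TotalCPU    OnlineCPU    ConfigCPU    Speed(MHz)    HMTDegree    Cores/Socket
16          16           16           2400          2            8

Here the virtual CPU count is OnlineCPU divided by HMTDegree: 16 / 2 = 8, so this server would require 8 VPC entitlements.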

An example of this in a cloud deployment:

  • A customer buys a virtual cloud server as a service on an internal private cloud or from an MSP like SoftLayer, Azure, Amazon, or Rackspace.
  • They purchase an 8-core virtual CPU environment.
  • The customer runs db2pd -osinfo on the machine, which shows an HMTDegree of 1 and an OnlineCPU count of 8.

The customer must license 8 VPCs for this environment.

An example of a private cloud deployment using VMware:

  • A customer creates multiple VMware virtual machines on a server to run DB2. The server is a 2-socket server with 8 cores per processor (16 physical cores) and hyper-threading turned on to a degree of 2. Each of the 11 VMs deployed reports 6 virtual processors.
  • The customer runs db2pd -osinfo across all the VMware hosts, which report a total OnlineCPU count of 64 across the 11 virtual machines (HMTDegree of 1 for all VMs).

Because the hardware can be physically identified as a 16-core server, the customer only has to license 16 VPCs, not 64 (as some competing programs would require), since the lesser of the two numbers applies.

Stay tuned for more information about the enhancements that DB2 V11.1 comes with. You may also want to attend the upcoming webinar on June 14th to learn how to maximize your data infrastructure investments. Register here.


Auditing Informix database connections



By Inge Halilovic, IBM Analytics Platform


Preserving the integrity of information and managing compliance controls across heterogeneous environments is becoming increasingly critical. IBM Security Guardium has worked with Informix for many years now, and with Informix 12.10.xC6 you have increased capabilities when you audit user actions on your Informix database server with IBM Security Guardium, version 10.0. Guardium prevents leaks from databases, ensures the integrity of information, and automates compliance controls across heterogeneous environments.

Guardium can now:

  • Mask sensitive data in Informix databases.
  • Audit, and if necessary close, any Informix connection, regardless of the connection protocol. Previously, Guardium audited and closed only TCP connections.

On the Informix side, you use the new ifxguard utility to monitor connections that are audited by Guardium. Every time a user session attempts an action that is auditable, an ifxguard agent contacts the Guardium server. The Guardium server audits the connection and takes any appropriate action. You can customize the behavior of the ifxguard utility:

  • Set the logging mode
  • Set the number of ifxguard worker threads to prevent heavy locking

You can enable auditing, and set the actions the database server takes if the Guardium server does not respond within the timeout period, by setting the new IFXGUARD configuration parameter in the onconfig file. For example, if the timeout period is exceeded, the Informix server can allow the client connection without auditing, trigger an alarm, disable auditing altogether, or shut down.

The 2016 IIUG conference will include sessions by experts that cover this topic in greater detail. If you are interested in learning more, you can register for the conference and attend the session on auditing with IBM Security Guardium to find out how to configure auditing for your Informix database server.

The conference will be held from May 4th – 8th at the Sawgrass Marriott Golf Resort & Spa, Ponte Vedra Beach, Florida, USA. The good news is that IIUG members get a flat $100 off the registration fee. You can register here.

Golf, Beaches and Informix – Welcome to Florida!

By Sajan Kuttappa, Marketing Manager – Analytics Platform Services, IBM

IBM Informix has forged new frontiers with its ability to effectively manage large amounts of data from the Internet of Things. You can also seamlessly integrate non-standard data types with a rich set of APIs, including REST, that enhance development simplicity, flexibility, and time to market. All of this without compromising the availability, scalability, and security that make it the most powerful enterprise-class database in the market today.

While Informix has evolved over the years to stay in tune with rapid advances in technology, the annual IIUG conference has established itself as the premier data and analytics conference and the best place to learn about the latest updates from the technology world. The best brains are selected to share their expertise during three days of educational sessions that will help the audience develop key skills for career advancement, while also providing great networking opportunities with IBM executives, the Informix development team, and more.

In 2015 we celebrated 20 years of the IIUG, and I am sure the emotional moments from the conference held in San Diego last year will be etched in the memories of all those who attended. The 2016 conference will be held from May 4th – 8th at the Sawgrass Marriott Golf Resort & Spa, Ponte Vedra Beach, Florida, USA, and the lineup of speakers and sessions looks very promising.

At the home of the PGA Tour, you will be served an excellent platter of technical educational sessions covering everything from the benefits of hybrid databases to Spark analytics with Informix and tools and technologies for the world of the Internet of Things (IoT). You can further benefit from the optional tutorials on Sunday, May 8th, covering database administration, application development, and tools. Become IBM Informix certified by taking the IBM Informix Professional Certification exams or almost any other IBM Information Management exam (the first exam is usually free, a savings of about $150).

Visit the conference website for more details and register. Paid registration includes full access to IIUG 2016 from May 4 – 8, including continental breakfast and lunch each day, the Wednesday evening reception, and admission to the IIUG parties on May 5 and 6. IIUG members who register online get $100.00 off the registration fee.

So get ready to tee off with the best minds in the technology world. Welcome to Florida!

IBM Insight 2015 – A guide to the DB2 sessions


By Sajan Kuttappa, Marketing Manager, Analytics Platform Services

In just a few weeks, thousands of people will converge on Mandalay Bay in Las Vegas for the much talked-about IBM Insight 2015 conference.

If you are a DB2 professional, an information architect, or a database professional interested in the latest in in-memory technology, DB2 for SAP workloads, and database administration tools, there is an excellent lineup of sessions by subject matter experts planned for you at the Insight conference. This article highlights the topics that will be covered so that you can create your agenda in advance.

IBM DB2 continues to be the best database option for SAP environments. Experts will share DB2 BLU best practices for SAP systems and the latest features of DB2 that enable in-memory processing, high availability, and scalability for SAP. For those interested in new deployment options like cloud, we recommend the sessions covering IBM’s portfolio of cloud solutions for SAP on DB2 customers. The hands-on labs at the conference will showcase how to best leverage DB2 BLU for SAP Business Warehouse.

Don’t miss the many client stories about how they benefited from DB2’s in-memory technology (BLU Acceleration) to enable speed-of-thought analytics for their business users; clients will share their lessons learned and best practices, and talk about enhancements and tips for DB2 LUW and DB2 BLU. If you are planning for increased workloads, look out for the session on scaling up BLU Acceleration in a high-concurrency environment.
Learn more about upgrading to Data Server Manager for DB2 to simplify database administration, optimize performance with expert advice, and reduce costs across the enterprise. Apart from this, you can hear how our clients achieved cost savings and reduced time to market by migrating to DB2 LUW. Also on the menu is a database administration crash course for DB2 LUW that will be conducted by top IBM Champions in the field.

There is a lot that will take place in Las Vegas. A week of high-quality educational sessions, hands-on labs, and panel discussions awaits, so attendees can walk away with better insight into how DB2 integrates into big data analysis, how it delivers in the cloud, and more. We look forward to meeting you in Las Vegas for Insight 2015; and whatever happens in Vegas (at Insight) should definitely not stay in Vegas!

A list of all the sessions can be found at the links below:

DB2 for SAP:
Core DB2 for the enterprise:
DB2 with BLU Acceleration:
DB2 LUW tools / Administration:

So start planning your agenda for Insight 2015.

Follow us on Twitter (@IBM_DB2) and Facebook (IBM DB2) for regular updates around the conference and key sessions.

Connected devices and new business engagement models for industries


By Sajan Kuttappa, Marketing Manager, Analytics Platform Services

Business strategy expert Michael Porter recently referred to it as the “Third wave of IT innovation”. Not a day passes without articles being published about the multi-billion dollar value that it will help create.

If you are still wondering what this is about, here’s one more hint: while everyone seems to agree on the impact that this will create, not many understand the true shape that this could take in the future.

We are talking about the Internet of Things (IoT) and the promise of a utopian future where connected devices are the norm. As experts will tell you, the IoT, as it is popularly called, is not new to the world. In fact, it has been around in different forms for many years.

While a few industries like manufacturing are ahead of the curve in adopting IoT, there are new use cases in other industries where the concept of devices talking to each other over the internet is helping with designing innovative products, tackling inefficiencies within supply chains, improving customer service, and more. Businesses of all types, including manufacturers, servicing organizations, public utilities, industrial and telecommunications companies, and healthcare providers, are adopting sensor technology to lower operating costs and increase business value. This is driving the need for instant access to information for speed-of-thought insight and the ability to deliver new services in record time.

Join IBM and Intel for a special event at the IBM Briefing Center in San Jose on August 17th. This half-day event will feature real world solutions using the latest IoT, cloud and analytics technologies. Come and learn how leading edge solutions can be implemented now and take the opportunity to meet the IBM and Intel teams supporting IoT industry solutions.

You can register for the event here.

Top ten reasons to love IBM Data Server Manager


By Radha Gowda

Technical Marketing, IBM Analytics

Today many in-house or multi-vendor data management tools lack end-to-end visibility, usability and scalability. Database administrators often struggle to keep workflows and analytics operating at optimum efficiency and are unable to provide the continuous availability needed for ever-increasing transaction volumes. By effectively integrating database management functions, organizations can streamline routine maintenance and free up resources for more strategic, business-enhancing projects.

IBM presents IBM® Data Server Manager, an integrated database management tool to administer, monitor, manage, and optimize the performance of hundreds of IBM DB2® for Linux, UNIX and Windows databases across the enterprise. It is easy to install, cloud ready, and can be accessed remotely from any supported browser.

Find out more reasons to love Data Server Manager from this infographic and try the base edition today.

Infographic: IBM Data Server Manager Top 10