Tech Talks – DB2 for Linux, UNIX and Windows

by Sajan Kuttappa, Content Marketing Manager

IBM DB2 for Linux, UNIX and Windows database software is the foundation that powers many IBM Analytics offerings. In conjunction with the International DB2 Users Group (IDUG®), the DB2 product team hosts a series of monthly webinars highlighting key capabilities, usage scenarios, and various aspects of data management. Below you will find a listing of past webinars and upcoming topics. If there are topics you would like us to cover, please email us at ibmdatamgmnt@gmail.com.

2017

  • Extending SQL: Exploring the hidden JSON capabilities in DB2 – George Baklarz
  • Jump Start 2017 with a new DB2 11.1 – Matt Huras, Roger Sanders

2016

  • dashDB for Transactions – Fully managed – Andrew Hilden
  • DB2 on the Cloud – Moving to the cloud with full control – Jon Lind, Regina
  • IBM DB2 on SAP – V11.1 Update and Recent Developments – Karl Fleckenstein
  • DB2 Security: From the Data Center to the Cloud – Roger Sanders
  • DB2 Tech Talk: Data Server Manager and DB2 Connect – Mike Connor, Anson Kokkat, Shilu Mathai
  • DB2 Tech Talk: DB2 V11 performance update – Peter Kokosielis
  • DB2 V11.1 Deep Dive on BLU & Analytics Enhancements – John Hornibrook, David Kalmuk
  • Breaking scalability barriers: A DB2 V11.1 Technology Review – Matt Huras, George Baklarz
  • DBaaS for Developers on IBM Cloud – Andrew Buckler
  • Can you use your SQL skills for big data? – Paul Yip
  • What’s New in IBM Data Server Manager V1.1.2 – Anson Kokkat

Will you join me at the Informix Users Group Conference 2017?


by Rajesh Govindan, Portfolio Marketing Manager – Informix

Are you interested in improving your Informix skills, learning about new features, and networking with others who have encountered – and resolved – the same challenges that you face? Do you want to become professionally certified on Informix or other IBM Analytics products? Would you like to attend seminars and tutorials that help you develop skills that will increase your value to your organization? Of course you do!

To give you a heads-up, next year’s International Informix Users Group conference will be held April 23 to April 27, 2017 in Raleigh, North Carolina, US. There, at the Marriott City Center, you’ll enjoy three full days of educational sessions for Informix DBAs, developers and managers. We’ll have several tracks dedicated to specific areas of learning, so you can select which one is best for you.

The IIUG conference is the world’s largest gathering of Informix users. Last year’s event attracted over 400 professionals from throughout the world, and garnered enthusiastic reviews from many who participated.

Hari Ammundi, Senior DBA at Action Net, attended for the first time in 2016, and told us, “The workshops are a wealth of information that I’m going to take back, and I think I’ll be visiting year after year for these functions.” Hari, we look forward to seeing you again!

If you’d like to learn more about this year’s IIUG event, visit the website. Those who register before January 31st will save their company money with an Early Bird Special that offers $375 off the regular fee.

Also, if you’ve got something to teach or talk about, we’re seeking presenters. So drop a note to Bruce Simms at bruce@iiug.org. (Presentations to global audiences look great on performance reviews and resumes!)

This is the premier world event for Informix DBAs, developers and managers. Please join us, and together we can ensure IIUG members remain recognized for their professionalism, expertise, and commitment to learning new Informix skills.


(Did you attend the IIUG event last year? If so, leave a comment to let me know what you enjoyed most at the conference.)


IBM DB2 sessions at IBM Insight at World of Watson conference

by Sajan Kuttappa, Marketing Manager – IBM Analytics Platform

As organizations develop next-generation applications for the digital era, many are using cognitive computing ushered in by IBM Watson technology. To make the most of these next-generation applications, you need a next-generation database that can handle a massive volume of data while delivering high performance to support real-time analytics. At the same time, it must provide data availability for demanding applications, scalability for growth, and flexibility for responding to changes.

IBM DB2 enables you to meet these challenges by providing enterprise-class scalability while also leveraging adaptive in-memory BLU Acceleration technology to support the analytics needs of your business. DB2 also handles structured and semi-structured data from a variety of sources to provide deep insight. With the ability to support thousands of terabytes, you can use historic and current data to identify trends and make sound decisions. The new release, DB2 11.1, announced earlier this year, comes packed with enhancements for BLU, OLTP, pureScale, security, SQL, and more!

Whether you are interested in an overview of the improvements available with the new release or an in-depth understanding of the new enhancements, IBM World of Watson is the place to be. The IBM Insight conference is now part of IBM World of Watson 2016 on October 24-27 and continues to be the premier industry event for data and analytics professionals, delivering unmatched value and exciting onsite opportunities to connect with peers, hear from thought leaders, experience engaging content, and receive training and certification. This article will highlight the key DB2 sessions at the IBM World of Watson conference.

We will start with Session #3483 by Matt Huras, IBM DB2 Architect, who will provide a technical overview of the new release and the value the new features bring to your installations. We also have the following sessions that provide deeper coverage of the new enhancements available with the new release:

  • DB2 11.1 includes significant enhancements in the area of availability, particularly around the pureScale feature. Attend Session #1433 – “The Latest and Greatest on Availability and pureScale in DB2 11.1” to learn about these enhancements, including simplified deployment, new operating system and virtualization options, HADR updates, and improvements in management and multitenancy.
  • DB2 11.1 packs several enhancements to protect your data, whether on premises or in the cloud. Look out for Session #1038 – “DB2 Security: From the Data Center to the Cloud” for an overview of the security mechanisms available in the latest version of DB2 for Linux, UNIX, and Windows, along with several things to consider if you plan to move your DB2 database environment from the data center to the cloud.
  • There is a lot of talk about in-memory computing and columnar, multi-partitioned databases improving analytic query performance; DB2 11.1 brings MPP scale to BLU! For a detailed, step-by-step approach to implementing the newest version of DB2, come learn about often overlooked but very important best practices to apply before and after upgrading by attending Session #1290 – “Upgrading to DB2 with the Latest Version of BLU Acceleration”.
  • DB2 11.1 is the foundation for hybrid cloud database deployments. In addition to being available to install on cloud-based infrastructure, it is also the foundation of the DB2 on Cloud and dashDB cloud data service offerings. Attend Session #1444 – “Hybrid Cloud Data Management with DB2 and dashDB” to learn more about these options and when you’d want to choose one over another.
  • If you are deploying DB2 for SAP applications, we have lined up Session #2629 by SAP and IBM experts – “IBM DB2 on SAP – V11.1 Update and Recent Developments”. In this session, we will give an overview of recent SAP on DB2 extensions and the DB2 V11.1 features that matter most for SAP applications. One of our clients, BCBS of TN, will also share their experiences with DB2 V11.1 around analytics and the benefits that they’ve seen.

Our clients Nordea Group and Argonne National Laboratory will also share their experiences with deploying IBM Data Server Manager. The hands-on lab HOL 1766B – “DB2 High Availability and Disaster Recovery with Single or Multiple Standby Databases” lets you configure and manage a production database with single or multiple standby databases using DB2 HA/DR facilities.

If you are a new user of DB2, you can also read this guide to the introductory DB2 sessions. Whether you are determining your next move or optimizing your existing investments in data and analytics capabilities, the IBM World of Watson 2016 conference is the place for you. This is your opportunity to get the training, answers, certifications and insights you need to be at the top of your game. If you have not yet registered for the conference, we suggest you visit this link and register – bit.ly/WorldofWatson

Migrating a DB2 database from a Big Endian environment to a Little Endian environment


By Roger Sanders, DB2 for LUW Offering Manager, IBM

What Is Big-Endian and Little-Endian?

Big-endian and little-endian are terms used to describe the order in which a sequence of bytes is stored in computer memory and, if desired, written to disk. (Interestingly, the terms come from Jonathan Swift’s Gulliver’s Travels, where the Big-Endians were a political faction who broke their boiled eggs on the larger end, defying the Emperor’s edict that all eggs be broken on the smaller end; the Little-Endians were the Lilliputians who complied with the Emperor’s law.)

Specifically, big-endian refers to the order where the most significant byte (MSB) in a sequence (i.e., the “big end”) is stored at the lowest memory address and the remaining bytes follow in decreasing order of significance. Figure 1 illustrates how a 32-bit integer would be stored if the big-endian byte order is used.

Figure 1. Big-endian byte order

For people who are accustomed to reading from left to right, big-endian seems like a natural way to store a string of characters or numbers; since data is stored in the order in which it would normally be presented, programmers can easily read and translate octal or hexadecimal data dumps. Another advantage of big-endian storage is that the size of a number can be estimated more easily because the most significant digit comes first. It is also easy to tell whether a number is positive or negative: this information can be obtained by examining the high-order bit of the byte at the lowest memory address (the most significant byte).

Little-endian, on the other hand, refers to the order where the least significant byte (LSB) in a sequence (i.e., the “little end”) is stored at the lowest memory address and the remaining bytes follow in increasing order of significance. Figure 2 illustrates how the same 32-bit integer presented earlier would be stored if the little-endian byte order were used.

Figure 2. Little-endian byte order

One argument for using the little-endian byte order is that the same value can be read from memory, at different lengths, without having to change addresses—in other words, the address of a value in memory remains the same, regardless of whether a 32-bit, 16-bit, or 8-bit value is read. For instance, the number 12 could be read as a 32-bit integer or an 8-bit character, simply by changing the fetch instruction used. Consequently, mathematical functions involving multiple precisions are much easier to write.

Little-endian byte ordering also aids in the addition and subtraction of multi-byte numbers. When performing such operations, the computer must start with the least significant byte to see if there is a carry to a more significant byte—much like an individual will start with the rightmost digit when doing longhand addition to allow for any carryovers that may take place. By fetching bytes sequentially from memory, starting with the least significant byte, the computer can start doing the necessary arithmetic while the remaining bytes are read. This parallelism results in better performance; if the system had to wait until all bytes were fetched from memory, or fetch them in reverse order (which would be the case with big-endian), the operation would take longer.

IBM mainframes and most RISC-based computers (such as IBM Power Systems, Hewlett-Packard PA-RISC servers, and Oracle SPARC servers) utilize big-endian byte ordering. Computers with Intel and AMD processors (CPUs) use little-endian byte ordering instead.

It is important to note that regardless of whether big-endian or little-endian byte ordering is used, the bits within each byte are usually stored as big-endian. That is, there is no attempt to reverse the order of the bit stream that is represented by a single byte. So, whether the hexadecimal value ‘CD’ for example, is stored at the lowest memory address or the highest memory address, the bit order for the byte will always be: 1100 1101
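For readers who want to see the two byte orders side by side, here is a small Python sketch (not part of the original article) using the standard struct module; the value 0x11223344 is just an illustrative constant.

```python
import struct

value = 0x11223344  # a 32-bit integer

big = struct.pack(">I", value)     # big-endian: most significant byte first
little = struct.pack("<I", value)  # little-endian: least significant byte first

print(big.hex())     # 11223344 -- bytes appear in "reading" order
print(little.hex())  # 44332211 -- least significant byte at the lowest address

# In little-endian, the same address yields the same value when read at a
# shorter length: the first byte stored is already the low-order byte.
assert little[0] == value & 0xFF
```

This also illustrates the point about reading a value at different lengths: reinterpreting just the first little-endian byte still yields the low-order part of the number without changing the starting address.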

Moving a DB2 Database To a System With a Different Endian Format

One of the easiest ways to move a DB2 database from one platform to another is by creating a full, offline backup image of the database to be moved and restoring that image onto the new platform. However, this process can only be used if the endianness of the source and target platform is the same. A change in endian format requires a complete unload and reload of the database, which can be done using the DB2 data movement utilities. Replication-based technologies like SQL Replication, Q Replication, and Change Data Capture (CDC), which transform log records into SQL statements that can be applied to a target database, can be used for these types of migrations as well. On the other hand, DB2 High Availability Disaster Recovery (HADR) cannot be used because HADR replicates the internal format of the data thereby maintaining the underlying endian format.

The DB2 Data Movement Utilities (and the File Formats They Support)

DB2 comes equipped with several utilities that can be used to transfer data between databases and external files. This set of utilities consists of:

  • The Export utility: Extracts data from a database using an SQL query or an XQuery statement, and copies that information to an external file.
  • The Import utility: Copies data from an external file to a table, hierarchy, view, or nickname using INSERT SQL statements. If the object receiving the data is already populated, the input data can either replace or be appended to the existing data.
  • The Load utility: Efficiently moves large quantities of data from an external file, named pipe, device, or cursor into a target table. The Load utility is faster than the Import utility because it writes formatted pages directly into the database instead of performing multiple INSERT operations.
  • The Ingest utility: A high-speed, client-side utility that streams data from files and named pipes into target tables.

Along with these built-in utilities, IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows, an add-on tool that must be purchased separately, can be used to rapidly unload, extract, and repartition data in a DB2 database. Designed to improve data availability, mitigate risk, and accelerate database migrations, this tool helps DBAs work with very large quantities of data with less effort and faster results.

Regardless of which utility is used, data can only be written to or read from files that utilize one of the following formats:

  • Delimited ASCII
  • Non-delimited or fixed-length ASCII
  • PC Integrated Exchange Format
  • Extensible Markup Language (IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows only.)

Delimited ASCII (DEL)

The delimited ASCII file format is used by a wide variety of software applications to exchange data. With this format, data values typically vary in length, and a delimiter, which is a unique character not found in the data values themselves, is used to separate individual values and rows. Actually, delimited ASCII format files typically use three distinct delimiters:

  • Column delimiters. Characters that are used to mark the beginning or end of a data value. Commas (,) are typically used as column delimiter characters.
  • Row delimiters. Characters that are used to mark the end of a single record or row. On UNIX systems, the new line character (0x0A) is typically used as the row delimiter; on Windows systems, the carriage return/linefeed characters (0x0D 0x0A) are normally used instead.
  • Character delimiters. Characters that are used to mark the beginning and end of character data values. Single quotes (') and double quotes (") are typically used as character delimiter characters.

Typically, when data is written to a delimited ASCII file, rows are streamed into the file, one after another. The appropriate column delimiter is used to separate each column’s data values, the appropriate row delimiter is used to separate each individual record (row), and all character and character string values are enclosed with the appropriate character delimiters. Numeric values are represented by their ASCII equivalent—the period character (.) is used to denote the decimal point (if appropriate); real values are represented with scientific notation (E); negative values are preceded by the minus character (-); and positive values may or may not be preceded by the plus character (+).

For instance, if the comma character is used as the column delimiter, the carriage return/line feed character is used as the row delimiter, and the double quote character is used as the character delimiter, the contents of a delimited ASCII file might look something like this:

10,"Headquarters",860,"Corporate","New York"
15,"Research",150,"Eastern","Boston"
20,"Legal",40,"Eastern","Washington"
38,"Support Center 1",80,"Eastern","Atlanta"
42,"Manufacturing",100,"Midwest","Chicago"
51,"Training Center",34,"Midwest","Dallas"
66,"Support Center 2",112,"Western","San Francisco"
84,"Distribution",290,"Western","Denver"
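Because delimited ASCII is such a common interchange format, most languages can read it directly. As a sketch (not from the original article), here is how a few rows of the sample above could be parsed with Python's standard csv module, which understands comma column delimiters and double-quote character delimiters:

```python
import csv
import io

# A few rows of the delimited ASCII (DEL) sample: comma column delimiters,
# double-quote character delimiters, one record per row.
del_data = (
    '10,"Headquarters",860,"Corporate","New York"\n'
    '15,"Research",150,"Eastern","Boston"\n'
    '20,"Legal",40,"Eastern","Washington"\n'
)

rows = list(csv.reader(io.StringIO(del_data)))
for deptno, name, staff, region, city in rows:
    print(deptno, name, region, city)
```

Note that every parsed value comes back as a string; numeric values keep their ASCII representation, exactly as the format stores them.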

Non-Delimited ASCII (ASC)

With the non-delimited ASCII file format, data values have a fixed length, and the position of each value in the file determines which column and row a particular value belongs to.

When data is written to a non-delimited ASCII file, rows are streamed into the file, one after another, and each column’s data value is written using a fixed number of bytes. (If a data value is smaller than the fixed length allotted for a particular column, it is padded with blanks.) As with delimited ASCII files, a row delimiter is used to separate each individual record (row); on UNIX systems the new line character (0x0A) is typically used, while on Windows systems the carriage return/linefeed characters (0x0D 0x0A) are used instead. Numeric values are treated the same as when they are stored in delimited ASCII format files.

Thus, a simple non-delimited ASCII file might look something like this:

10Headquarters     860Corporate New York
15Research         150Eastern   Boston
20Legal             40Eastern   Washington
38Support Center 1  80Eastern   Atlanta
42Manufacturing    100Midwest   Chicago
51Training Center   34Midwest   Dallas
66Support Center 2 112Western   San Francisco
84Distribution     290Western   Denver
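Since position alone determines which column a value belongs to, parsing fixed-length records is just a matter of slicing at known offsets. The following Python sketch uses hypothetical column widths chosen to match the sample above (department number 2 bytes, name 17, staff count 3, region 10, city the remainder); real ASC files declare their widths elsewhere.

```python
# Hypothetical fixed column widths for the sample above: department number (2),
# department name (17), staff count (3), region (10), city (the remainder).
FIELDS = [(0, 2), (2, 19), (19, 22), (22, 32), (32, None)]

def parse_row(line):
    """Split one fixed-length record into stripped field values."""
    return tuple(line[start:end].strip() for start, end in FIELDS)

record = parse_row("38Support Center 1  80Eastern   Atlanta")
print(record)  # ('38', 'Support Center 1', '80', 'Eastern', 'Atlanta')
```

The blank padding is stripped during parsing, which is why trailing blanks in character columns are harmless in this format.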


PC Integrated Exchange Format (IXF)

The PC Integrated Exchange Format file format is a special file format that is used almost exclusively to move data between different DB2 databases. Typically, when data is written to a PC Integrated Exchange Format file, rows are streamed into the file, one after another, as an unbroken sequence of variable-length records. Character data values are stored in their original ASCII representation (without additional padding), and numeric values are stored as either packed decimal values or as binary values, depending upon the data type used to store them in the database. Along with data, table definitions and associated index definitions are also stored in PC Integrated Exchange Format files. Thus, tables and any corresponding indexes can be both defined and populated when this file format is used.

Extensible Markup Language (XML)

Extensible Markup Language (XML) is a simple, yet flexible text format that provides a neutral way to exchange data between different devices, systems, and applications. Originally designed to meet the challenges of large-scale electronic publishing, XML is playing an increasingly important role in the exchange of data on the web and throughout companies. XML data is maintained in a self-describing format that is hierarchical in nature. Thus, a very simple XML file might look something like this:

<?xml version="1.0" encoding="UTF-8" ?>
<customerinfo>
  <name>John Doe</name>
  <addr country="United States">
    <street>25 East Creek Drive</street>
    <city>Raleigh</city>
    <state-prov>North Carolina</state-prov>
    <zip-pcode>27603</zip-pcode>
  </addr>
  <phone type="work">919-555-1212</phone>
  <email>john.doe@xyz.com</email>
</customerinfo>

As noted earlier, only IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows can work with XML files.

db2move and db2look

As you might imagine, the Export utility, together with the Import utility or the Load utility, can be used to copy a table from one database to another. These same tools can also be used to move an entire database from one platform to another, one table at a time. But a more efficient way to move an entire DB2 database is by using the db2move utility. This utility queries the system catalog of a specified database and compiles a list of all user tables found. Then it exports the contents and definition of each table found to individual PC Integrated Exchange Format (IXF) formatted files. The set of files produced can then be imported or loaded into another DB2 database on the same system, or they can be transferred to another server and be imported or loaded to a DB2 database residing there.

The db2move utility can be run in one of four different modes: EXPORT, IMPORT, LOAD, or COPY. When run in EXPORT mode, db2move utilizes the Export utility to extract data from a database’s tables and externalize it to a set of files. It also generates a file named db2move.lst that contains the names of all of the tables that were processed, along with the names of the files that each table’s data was written to. The db2move utility may also produce one or more message files containing warning or error messages that were generated as a result of the Export operation.

When run in IMPORT mode, db2move uses the file db2move.lst to establish a link between the PC Integrated Exchange Format (IXF) files needed and the tables to be populated. It then invokes the Import utility to recreate each table and its associated indexes using information stored in the external files.

And, when run in LOAD mode, db2move invokes the Load utility to populate tables that already exist with data stored in PC Integrated Exchange Format (IXF) formatted files. (LOAD mode should never be used to populate a database that does not already contain table definitions.) Again, the file db2move.lst is used to establish a link between the external files used and the tables into which their data is to be loaded.

Unfortunately, the db2move utility can only be used to move table and index objects. And if the database to be migrated contains other objects such as aliases, views, triggers, user-defined data types (UDTs), user-defined functions (UDFs), and stored procedures, you must duplicate those objects in the target database as well. That’s where the db2look utility comes in handy. When invoked, db2look can reverse-engineer an existing database and produce a set of Data Definition Language (DDL) SQL statements that can then be used to recreate all of the data objects found in the database that was analyzed. The db2look utility can also collect environment registry variable settings, configuration parameter settings, and statistical (RUNSTATS) information, which can be used to duplicate a DB2 environment on another system.

 

Auditing Informix database connections

By Inge Halilovic, IBM Analytics Platform

Preserving the integrity of information and managing compliance controls across heterogeneous environments is becoming increasingly critical. IBM Security Guardium has worked with Informix for many years now, and with Informix 12.10.xC6 you have increased capabilities when you audit the user actions for your Informix database server with IBM Security Guardium, version 10.0. Guardium prevents leaks from databases, ensures the integrity of information, and automates compliance controls across heterogeneous environments.

Guardium can now:

  • Mask sensitive data in Informix databases.
  • Audit and close any Informix connection, if necessary, regardless of the connection protocol. Previously, Guardium audited and closed only TCP connections.

On the Informix side, you use the new ifxguard utility to monitor connections that are audited by Guardium. Every time a user session attempts an action that is auditable, an ifxguard agent contacts the Guardium server. The Guardium server audits the connection and takes any appropriate action. You can customize the behavior of the ifxguard utility:

  • Set the logging mode
  • Set the number of ifxguard worker threads to prevent heavy locking

You can enable auditing, and control how the database server behaves if the Guardium server does not respond within the timeout period, by setting the new IFXGUARD configuration parameter in the onconfig file. For example, if the timeout period is exceeded, the Informix server can allow the client connection without auditing, trigger an alarm, disable auditing altogether, or shut down.

The 2016 IIUG conference will include sessions by experts that cover this topic in greater detail. If you are interested in learning more, register for the conference and attend the session on auditing with IBM Security Guardium to find out how to configure auditing for your Informix database server.

The conference will be held from May 4th – 8th at the Sawgrass Marriott Golf Resort & Spa, Ponte Vedra Beach, Florida, USA. The good news is that IIUG members get a flat $100 off the registration fee. You can register here – http://bit.ly/iiug2016reg.

Golf, Beaches and Informix – Welcome to Florida!

By Sajan Kuttappa, Marketing Manager – Analytics Platform Services, IBM

IBM Informix has forged new frontiers with its ability to effectively manage large amounts of data from the Internet of Things. You can also seamlessly integrate non-standard data types with a rich set of APIs, including REST, that enhance development simplicity, flexibility, and time to market. All of this comes without compromising the availability, scalability, and security that make it the most powerful enterprise-class database in the market today.

While Informix has evolved over the years to stay in tune with rapid advances in technology, the annual IIUG conference has established itself as the premier data and analytics conference and the best place to learn about the latest updates from the technology world. The best minds are selected to share their expertise during three days of educational sessions that will help the audience develop key skills for career advancement, while also providing great networking opportunities with IBM executives, the Informix development team, and more.

In 2015 we celebrated 20 years of the IIUG, and I am sure the emotional moments from the conference held in San Diego last year will be etched in the memories of all those who attended. The 2016 conference will be held from May 4th – 8th at the Sawgrass Marriott Golf Resort & Spa, Ponte Vedra Beach, Florida, USA, and the lineup of speakers and sessions looks very promising.

At the home of the PGA Tour, you will be served an excellent platter of technical educational sessions covering everything from the benefits of hybrid databases and Spark analytics with Informix to tools and technologies for the world of the Internet of Things (IoT). You can further benefit from the optional tutorials on Sunday, May 8th, around database administration, application development, and tools. Become IBM Informix certified by taking the IBM Informix Professional Certification exams or almost any other IBM Information Management exam (the first exam is usually free, a savings of about $150).

Visit www.iiug2016.org for more details and register yourself. Paid registration includes full access to IIUG 2016 from May 4 – 8, including continental breakfast and lunch each day, the Wednesday evening reception, and admission to the IIUG party on May 5 and 6. IIUG members who register online get $100.00 off the registration fee.

So get ready to tee off with the best minds in the technology world. Welcome to Florida!

Bringing Informix Technology to the World’s Fastest Growing Database Market

By Sajan Kuttappa
Social Media Marketing & Communications Manager, IBM

If someone wanted more proof that Informix is technologically a step ahead of its competitors, IBM provided evidence in China! Even the hardest cynic would find it difficult to deny that the adoption of a locally innovated version of the Informix database for the burgeoning database market in China is a masterstroke.

Why Informix over other competitors?

When one of the fastest growing markets in the world embraces Informix over other open source projects, it merits taking a second look (or maybe more) at what differentiates Informix from the rest of the competitors.

Before I begin extolling the virtues of the Informix database you can find details of the deal that was signed between IBM and GBASE here:  GBASE and IBM to collaborate on locally innovated database in China.

IBM Informix is a high-performance, enterprise-class database that gives you a competitive advantage with its low cost, minimal administrative overhead, and powerful innovative features such as combined OLTP and OLAP capabilities. Informix offers high availability in a significantly less complex, less expensive manner than the competition for both distributed and centralized deployments.

Analysts agree that where database infrastructures must meet the challenges of the future, Informix is an obvious candidate. You can read the comparative analysis here.  IBM Informix – The database for high availability and data replication

The Proof of the pudding is in the eating.

A growing database market like China adopting Informix technology to create its locally innovated database is a strong endorsement of the technological superiority of Informix. While a lot has been said about the superior capabilities of the Informix product, this deal will go a long way toward reinforcing Informix as the number one reliable and highest-performing database for most applications, including the “Internet of Things”.

If you would like to stay updated, follow @IBM_Informix on Twitter for the latest Informix announcements, and updates!

Follow me on Twitter, LinkedIn, Google+

Nothing Endures But Change – Face it With Confidence.

By Radha Gowda
Technical Marketing, IBM Analytics

When faced with change, do you share Dilbert’s frustration (take a look at this Dilbert comic and you’ll see what we mean)? Wait… don’t lose hope yet! We understand that keeping up with competition and customer expectations in a constantly changing global economy requires you to continuously enhance products and services. While change can bring a wealth of new business opportunities, we also realize that implementing those changes may cause a lot of grief, including production delays and deployment-day disasters.

To put this in perspective, according to a survey from the Ponemon Institute that is sponsored by Emerson Network Power, the average cost of data center downtime across industries is $7,908 per minute (Survey-Infographic).

From a data management perspective, we have a proposal for managing change: IBM InfoSphere Optim Workload Replay.  This tool lets you capture an actual production workload, including workload concurrency, the order of SQL execution, all the input variables, and the workload characteristics needed to replay the workload later.  It even records how long each statement ran, what SQL codes were returned, and so on.  You can then replay the captured workload in your pre-production environment and record the outcome.

This comprehensive set of inputs and outputs for both the original and the replayed runs lets you compare and verify whether your pre-production environment delivers the same performance you saw in production.  You can capture millions of SQL statements that run over a period of time in production and analyze how well they fare when replayed in a pre-production environment.
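Conceptually, that compare-and-verify step works like the sketch below. This is an illustrative Python sketch of the idea only, not Optim Workload Replay’s actual API; the data structures, statement IDs, and the 20% regression threshold are all assumptions for the example.

```python
# Illustrative only: flag statements whose replayed runtime regressed
# versus the captured production runtime. Not the product's real API.

def find_regressions(captured_ms, replayed_ms, threshold=1.2):
    """Return statement IDs that ran more than `threshold` times slower
    in the replay than in the original production capture."""
    regressions = []
    for stmt_id, prod_time in captured_ms.items():
        replay_time = replayed_ms.get(stmt_id)
        if replay_time is not None and replay_time > prod_time * threshold:
            regressions.append(stmt_id)
    return regressions

# Hypothetical timings (milliseconds) for two captured statements
captured = {"stmt-001": 40.0, "stmt-002": 100.0}
replayed = {"stmt-001": 42.0, "stmt-002": 180.0}  # stmt-002 is 80% slower

print(find_regressions(captured, replayed))  # ['stmt-002']
```

The real tool does this across millions of statements, with full concurrency and input variables preserved, but the verification principle is the same: compare each statement’s replayed behavior against its production baseline.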

Some of the use cases where you may benefit from Optim Workload Replay are performance and stress testing, database upgrades/migration, on-going database maintenance, capacity planning, introducing new applications, platform consolidation, and periodic disaster recovery validation.

We invite you to check out the IBM InfoSphere Optim Workload Replay page and browse through the solution brief, white paper, and more.

Change can be scary, but you now have a reason to smile.

When Your Database Can’t Cut It, Your Business Suffers

By Larry Heathcote
Program Director, IBM Data Management

 

Your database is critical to your business. Applications depend on it. Business users depend on it. And when your database is not working well, your business suffers.

IBM DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

We’ve all heard the phrase “garbage in, garbage out,” and this is so true in today’s big data world. But it’s not just about good data; it’s also about the infrastructure that captures and delivers data to business applications and provides timely and actionable insights to those who need to understand, to make decisions, to act, to move the business forward.

 

It’s one thing to pull together a sandbox to examine new sources of data and write sophisticated algorithms that draw out useful insights. But it’s another matter to roll this out into production, where line-of-business users depend on good data, reliable applications and insightful analytics. This is truly where the rubber meets the road: the production environment… and your database had better be up to it.

Lenny Liebmann, InformationWeek Contributing Editor, and I recorded a webinar recently titled “Is Your Database Really Ready for Big Data.” And Lenny posted a blog talking about the role of DataOps in the modern data infrastructure. I’d like to extend this one more step and talk about the importance of your database in production. The best way I can do that is through some examples.

 

1: Speed of Deployment

ERP systems are vital to many companies for effective inventory management and efficient operations. It is important to make sure that these systems are well tuned, efficient and highly available, and that when a change is needed, it is made quickly. Friedrich ran the SAP environment for a manufacturing company, and he was asked to improve the performance of the applications used for inventory management and supply chain operations. More specifically, he needed to replace the production database with one that improved application performance while keeping storage growth to a minimum. Knowing that time is money, his mission was to deploy the solution quickly, which he did: up and running in a production environment in 3 hours, with more than 80 percent data compression and a 50x performance improvement. The business impact: inventory levels were optimized, operating costs were reduced and the supply chain became far more efficient.

 

2: Performance

Rajesh’s team needed to improve the performance of an online sales portal that let his company’s reps run sales and ERP reports from their tablets and mobile phones out in the field. Queries were taking 4-5 minutes to execute, and this simply was not acceptable; by the way, impatience is a virtue for a sales rep. Rajesh found that the existing database was the bottleneck, so he replaced it. With less than 20 hours of work, it was up and running in production with a 96.5 percent reduction in query times. Can you guess the impact this had? Yep, sales volumes increased significantly, Rajesh’s team became heroes and the execs were happy. And, since reps were more productive, they were also more satisfied and rep turnover was reduced.
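As a quick sanity check on those numbers, a 96.5 percent reduction applied to a 4-5 minute query lands in the 8 to 10.5 second range:

```python
# Back-of-the-envelope check of the 96.5% query-time reduction cited above.
REDUCTION = 0.965

for minutes in (4, 5):
    before_s = minutes * 60
    after_s = before_s * (1 - REDUCTION)
    print(f"{minutes}-minute query -> about {after_s:.1f} s after the change")
```

In other words, reports that once took long enough for a rep to give up now finish in seconds.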

 

3: Reliability, Availability and Scalability

In today’s 24x7x365 world, transaction system downtime is just not an option. An insurance company was struggling with the performance, availability, reliability and scalability needed to support its rapidly growing insurance applications. Replacing its database not only increased application availability from 80 to 95 percent, but also delivered a dramatic improvement in data processing times even after a 4x growth in the number of concurrent jobs, and decreased total cost of ownership by 50 percent. The company also saw customer satisfaction and stickiness improve.

These significant results happened because these clients upgraded their core database to IBM DB2.

To learn more, watch our webinar.

Follow Larry on Twitter at @larryheathcote

 

Join Larry and Lenny for a Tweet Chat on June 26 at 11 a.m. ET.  Join the conversation using #bigdatamgmt.  For the questions and more details, see: http://bit.ly/Jun26TweetChat

A Dollar Saved Is Two Dollars Earned. Over A Million Dollars Saved Is?

By Radha Gowda, Technical Marketing, IBM Analytics

A refreshing new feeling, because DB2 can offer your business a 57% improvement in compression, a 60% improvement in processing times, and a 30-60% reduction in transaction completion time.

Coca-Cola Bottling Co. Consolidated (CCBCC) faced severe business challenges: the rising cost of commodities and sharply higher fuel prices could not be allowed to impact consumers of its world-famous sodas.  At the time of an SAP software refresh, the CCBCC IT team reviewed the company’s database strategy and discovered that migrating to IBM DB2 offered significant cost savings.  DB2 has delivered total operating cost reductions of more than $1 million over four years. And DB2 10 has continued to be a compression workhorse, delivering another 20% improvement in compression rate.

Staying competitive in a tough market

Andrew Juarez, Lead SAP Basis and DBA at CCBCC, notes: “We happen to be in a market where we are considered an expendable item. In other words, it is not something that is mandatory. So we cannot push the price off to our customers to offset any losses that we may have, which means that we need to be very competitive on how we price our product.”

Making the move to IBM DB2

Tom DeJuneas, IT Manager at CCBCC, states:   “We did a cost projection, looking at the cost of Oracle licenses and maintenance fees, and calculated that we could produce around $750,000 worth of savings over five years by switching to IBM DB2. We also undertook a proof-of-concept phase, which showed that IBM DB2 was able to offer the same, and potentially more, functionality as an Oracle system.”

Moving from Oracle has brought about a significant change in the IT organization’s strategy, as Andrew Juarez explains: “When we were on Oracle, our philosophy was that we did not upgrade unless we were doing a major SAP upgrade. If the version was stable, then we stayed on it. Now, with IBM DB2 our strategy has completely changed, because with every new release our performance keeps getting better and better, and the value of the solution continues to grow.”

Fast, accurate data

IBM DB2 manages key data from SAP® ERP modules such as financials, warehouse management, materials management and customer data.  Tom DeJuneas states, “Many of our background jobs and online dialog response times have improved considerably. For example, on the first night after we performed the switchover, one of our plant managers reported that jobs that normally took 90 minutes to run were running in just 30 minutes. This was simply by changing the database. So we had a massive performance increase in supply chain batch runs right from the get-go.”

Impressive cost savings

IBM DB2 has helped CCBCC to make better use of its existing resources, delaying costly investment in new hardware and freeing up more money for investment in other projects.

“Originally, when we did our business case for moving to IBM DB2, it was built around the savings on our Oracle licenses and maintenance, and that was it,” notes Andrew Juarez. “We did not factor in disk savings, so the fact that we are seeing additional savings around storage is icing on the cake. We had originally projected about $750,000 in savings over five years; to date, at four years, we have seen just over a million dollars in savings after migrating to IBM DB2. So we have bettered our original estimate by more than 25 percent.”
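The arithmetic behind that “more than 25 percent” claim holds up when you compare the actual savings to the original projection:

```python
# Checking the savings math quoted above: $750,000 projected over five
# years versus "just over a million dollars" actually saved in four years.
projected = 750_000
actual = 1_000_000   # conservatively, "just over a million dollars"

improvement_pct = (actual / projected - 1) * 100
print(f"Actual savings beat the projection by about {improvement_pct:.0f}%")
```

Taking $1 million as a conservative figure, the savings exceed the projection by roughly a third, comfortably clearing the 25 percent Juarez cites, and that is before accounting for reaching the total a year early.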

Tom DeJuneas concludes, “At CCBCC it is very important for us to stay on the frontline of innovation, and technology like IBM DB2 helps us to do that. Based on our experience, I do not see why anyone running SAP would use anything other than IBM DB2 as its database engine.”

Download the case study “CCBCC migrates to IBM DB2, saves more than $1 million” for complete details.

For new insights to take your business to the next level and of course, cost savings, we invite you to try the DB2 with BLU Acceleration side of life.