Join fellow DB2 Professionals at the 2017 IDUG DB2 Tech Conference in Anaheim

by Michael Roecken, DB2 Linux, UNIX and Windows Development

Join fellow DB2 professionals at the 2017 IDUG DB2 Tech Conference, April 30 – May 4 in Anaheim for a comprehensive 5-day event, featuring a wide variety of user-focused technical education and an exceptional mix of DB2 thought leaders.

At the IDUG DB2 Tech Conference, you can:

  • Learn about the latest products and services in DB2 firsthand from more than 12 exhibitors
  • Access more than 100 one-hour technical education sessions and hear from powerful keynote speakers
  • Join expert panels on DB2 for z/OS and DB2 for Linux, UNIX and Windows
  • Hear from keynote presenter Mike Gualtieri, Vice President at Forrester, on Forrester’s view of Big Data, Analytics & Open Source
  • Hear from Daniel Hernandez, Vice President, IBM Analytics, on Winning with Machine Learning: Monetize the Data Behind Your Firewall
  • Attend multiple networking events and connect with vendors and fellow attendees in the Exhibit Hall
  • Participate in the Certification Preparation Courses before taking your complimentary exam

Register now and get 25% off using promo code IBM17IDUGNA!

Visit http://www.idug.org/p/cm/ld/fid=990 to register and find out more details.

Bonus: Attend the IDUG Data Tech Summit

Designed for data architects and data scientists, the 2017 IDUG Data Tech Summit, May 1-2 in Anaheim, California, offers technical sessions that take a deep dive into emerging data technologies and trends, such as:

  • Big Data and analytics in a cognitive era
  • Data without governance is a liability: Data lake best practices
  • Machine learning with Spark
  • Performance enterprise architectures for analytic design patterns
  • R as a weapon of choice for data science

Visit http://www.idug.org/p/cm/ld/fid=1174 to register and find out more details.

Tech Talks – DB2 for Linux, UNIX and Windows

by Sajan Kuttappa, Content Marketing Manager

IBM DB2 for Linux, UNIX and Windows database software is the foundation that powers many IBM Analytics offerings. In conjunction with the International DB2 Users Group (IDUG®), the DB2 product team hosts a series of monthly webinars highlighting key capabilities, usage scenarios, and various aspects of data management. Below you will find a listing of past webinars and upcoming topics. If there are topics you would like us to cover, please email us at ibmdatamgmnt@gmail.com.

2017

  • Extending SQL: Exploring the hidden JSON capabilities in DB2 (George Baklarz)
  • Jump Start 2017 with a new DB2 11.1 (Matt Huras, Roger Sanders)

2016

  • dashDB for Transactions – Fully managed (Andrew Hilden)
  • DB2 on the Cloud – Moving to the cloud with full control (Jon Lind, Regina)
  • IBM DB2 on SAP – V11.1 Update and Recent Developments (Karl Fleckenstein)
  • DB2 Security: From the Data Center to the Cloud (Roger Sanders)
  • DB2 Tech Talk: Data Server Manager and DB2 Connect (Mike Connor, Anson Kokkat, Shilu Mathai)
  • DB2 Tech Talk: DB2 V11 performance update (Peter Kokosielis)
  • DB2 V11.1 Deep Dive on BLU & Analytics Enhancements (John Hornibrook, David Kalmuk)
  • Breaking scalability barriers: A DB2 V11.1 Technology Review (Matt Huras, George Baklarz)
  • DBaaS for Developers on IBM Cloud (Andrew Buckler)
  • Can you use your SQL skills for big data? (Paul Yip)
  • What’s New in IBM Data Server Manager V1.1.2 (Anson Kokkat)

DB2 for SAP – Poised for growth


Like most organizations, your company is pressing ahead to close out projects and prepare new ones for the coming year. Inevitably, these annual evaluations and reviews call into question the preservation of your existing IT investment while exploiting new areas for growth. Some vendors strongly encourage vertically integrated solutions with a promise of seamless operation, but reality dictates a closer look at technology choices.

If you’re currently running an SAP environment with DB2 for Linux, UNIX and Windows software, there are quite a few options available to help you leverage existing DB2 infrastructure and be well positioned for new projects. You are cordially invited to attend a half-day seminar to learn key DB2 insights and considerations for SAP environments.

Our IBM SAP experts will:

  • Outline issues and factors affecting database use with SAP
  • Go over key considerations on selecting solutions
  • Provide a DB2 for Linux, UNIX and Windows Roadmap
  • Showcase breakthrough in-memory technologies
  • Illustrate how your continued use of DB2 for Linux, UNIX and Windows in SAP environment is not just safe but the optimal one for growth

The seminars will be held in two cities in North America. Please find the details below and register for the location nearest you.

  1. Cincinnati – Monday, December 12, 2016, 9:30 AM – 2:30 PM EST
     Register: http://ibm.biz/BdsqQv

  2. New York City – Tuesday, December 13, 2016, 9:30 AM – 2:30 PM EST
     Register: http://bit.ly/2fRPn1B

After all, wouldn’t you want to know how to make existing investment work smarter without the risks of rip and replace?


IBM DB2 sessions at IBM Insight at World of Watson conference

by Sajan Kuttappa, Marketing Manager, IBM Analytics Platform

As organizations develop next-generation applications for the digital era, many are using cognitive computing ushered in by IBM Watson technology. To make the most of these next-generation applications, you need a next-generation database that can handle a massive volume of data while delivering high performance to support real-time analytics. At the same time, it must provide data availability for demanding applications, scalability for growth, and flexibility for responding to change.

IBM DB2 enables you to meet these challenges by providing enterprise-class scalability while also leveraging adaptive in-memory BLU Acceleration technology to support the analytics needs of your business. DB2 also handles structured and semi-structured data from a variety of sources to provide deep insight. With the ability to support thousands of terabytes, you can use historical and current data to identify trends and make sound decisions. The new release, DB2 11.1, announced earlier this year, comes packed with enhancements for BLU, OLTP, pureScale, security, SQL, and more!

Whether you are interested in an overview of the improvements available with the new release or an in-depth understanding of the new enhancements, IBM World of Watson is the place to be. The IBM Insight conference is now part of IBM World of Watson 2016, held October 24-27, and continues to be the premier industry event for data and analytics professionals, delivering unmatched value and exciting onsite opportunities to connect with peers, hear from thought leaders, experience engaging content, and receive training and certification. This article highlights the key DB2 sessions at the IBM World of Watson conference.

We will start with Session #3483 by Matt Huras, IBM DB2 architect, who will provide a technical overview of the new release and the value the new features provide for your installations. We also have the following sessions that provide deeper coverage of the new enhancements available with the new release:

  • DB2 11.1 includes significant enhancements in the area of availability, particularly around the pureScale feature. Attend Session #1433 – “The Latest and Greatest on Availability and pureScale in DB2 11.1” to learn about these enhancements, including simplified deployment, new operating system and virtualization options, HADR updates, and improvements in the areas of management and multitenancy.
  • DB2 11.1 packs several enhancements to protect your data, whether on premises or in the cloud. Look out for Session #1038 – “DB2 Security: From the Data Center to the Cloud” for an overview of the various security mechanisms available with the latest version of DB2 for Linux, UNIX, and Windows, as well as an introduction to several things that must be taken into consideration if you plan on moving your DB2 database environment from the data center to the cloud.
  • There is a lot of talk about in-memory computing and columnar multi-partitioned databases to improve analytic query performance, and DB2 11.1 brings MPP scale to BLU! If you need a detailed step-by-step approach to implementing the newest version of DB2, come learn about often overlooked but very important best practices to understand before and after upgrading by attending Session #1290 – “Upgrading to DB2 with the Latest Version of BLU Acceleration”.
  • DB2 11.1 is the foundation for hybrid cloud database deployments. In addition to being available to install on cloud-based infrastructure, it is also the foundation of the DB2 on Cloud and dashDB cloud data service offerings. Attend Session #1444 – “Hybrid Cloud Data Management with DB2 and dashDB” to learn more about these different options and when you’d want to choose one over another.
  • If you are deploying DB2 for SAP applications, we have lined up Session #2629 by SAP and IBM experts – “IBM DB2 on SAP – V11.1 Update and Recent Developments”. In this session, we will give an overview of recent SAP on DB2 extensions and cover which DB2 V11.1 features are most important for SAP applications. One of our clients, BCBS of TN, will also share their experiences with DB2 V11.1 around analytics and the benefits that they’ve seen.

Our clients Nordea Group and Argonne National Laboratory will also share their experiences deploying IBM Data Server Manager. The hands-on lab HOL 1766B – “DB2 High Availability and Disaster Recovery with Single or Multiple Standby Databases” lets you configure and manage a production database with single or multiple standby databases using DB2 HADR facilities.

If you are a new user of DB2, you can also read this guide to the introductory DB2 sessions. Whether you are determining your next move or optimizing your existing investments in data and analytics capabilities, the IBM World of Watson 2016 conference is the place for you. This is your opportunity to get the training, answers, certifications, and insights you need to be at the top of your game. If you have not yet registered for the conference, we suggest you visit this link and register: bit.ly/WorldofWatson

IBM DB2 – the database for the cognitive era at IBM World of Watson 2016


by Sajan Kuttappa,  Marketing Manager- IBM Analytics Platform

IBM Insight, the premier data, analytics, and cognitive IBM conference, is now part of IBM World of Watson 2016, to be held in Las Vegas from October 24-27. This year attendees will be able to experience first-hand the world of cognitive capabilities that IBM has been at the forefront of. World of Watson incorporates the kind of information you gained from IBM Insight – the tools and best practices to manage your data – and raises the game. You’ll also see how Watson’s capabilities give you a broad view of your business, its competitive landscape, and what it takes to make your customers act. Our CEO, Ginni Rometty, will deliver a keynote at this year’s conference. And on the evening of October 26th, our special event will feature Grammy winner Imagine Dragons.

Whether you’re a beginner or a seasoned DB2 professional, there is a treasure trove of information that you could walk away with. IBM experts and your peer speakers will share information about migration guidelines, new features of recent releases, implementation experiences, and much more. Likewise, our hands-on-labs (HOL) complement these topics to further enrich the experience.

For users new to DB2, we recommend attending session 3585 on “DB2 v11.1 Fundamentals” by Roger Sanders. This presentation will provide a great overview of DB2 for Linux, UNIX and Windows. It will take attendees through the concepts covered on the DB2 11.1 Fundamentals certification exam: planning, security, working with databases and data objects, using SQL, and data concurrency. It will also provide a brief introduction to other DB2-based offerings like DB2 on Cloud and dashDB.

IBM provides a number of database options for organizations that would like to deploy applications on the cloud, be it a fully managed or a hosted environment. IBM dashDB for transactions provides a fully managed database service in the cloud that is optimized for online transaction processing workloads. DB2 on Cloud is a hosted service that offers the agility of cloud deployment and the management control you enjoy with the on-premises software.

  • If you would like to understand the capabilities of the dashDB for Transactions offering, consider attending session 3471 on “dashDB for Transactions: Fully Managed and Truly Awesome,” where we will discuss key features of this enterprise-class service and its design and implementation for availability and performance.
  • The DB2 on Cloud offering gives you everything you know and love about DB2 for Linux, UNIX and Windows software in a cloud environment hosted by IBM. You still have full DBA control to customize the database. You can rapidly provision it for instant productivity. And the monthly subscription-based licensing makes it easier to predict and control costs. As with any OLTP database supporting your critical applications, high availability and disaster recovery concerns are top of mind. We have lined up Session #1439 to help you understand how to “Implement High Availability and Disaster Recovery for DB2 on the Cloud.”

You can learn how to further optimize DB2 performance with management tools like IBM Data Server Manager. The hands-on lab 3141A – “Secrets of the Pros: Using Data Server Manager to Monitor, Manage and Mitigate Performance Problems” will teach you how to use the latest version of IBM Data Server Manager to diagnose and resolve performance problems.

We hope that you can take advantage of these sessions by attending the World of Watson conference. Stay tuned for our next article on sessions for “Intermediate” skill sets and “Advanced” users.

We look forward to seeing you in Vegas. If you have not yet registered, please visit this link for more details – http://bit.ly/WorldofWatson

The value of common database tools and linked processes for DB2, DevOps, and Cloud


by Michael Connor, Analytics Offering Management

Today we released DB2 V11 for Linux, UNIX and Windows. The release includes updates to Data Server Manager (DSM) V2.1, Data Server Driver connectivity V11, and the Advanced Recovery Feature (ARF) V11. As many of you may be aware, two years ago we embarked on a strategy to completely rethink our tooling: the market was telling us we needed to focus on a simplified user experience, a web console addressing both the power and casual user roles, and deep database support for production applications. In March 2015, we delivered our first iteration of Data Server Manager as part of DB2 10.5. This year we have again extended the capabilities of this valuable platform and, in addition, extended support across a number of IBM data stores including DB2, dashDB, DB2 on Cloud, and BigInsights.

First, let’s talk about some of the drivers we hear related to database delivery.

  1. The LOB and LOB developer communities want to access mission-critical data and extend that data through new customer-facing OLTP applications.
  2. Business analysts are using more data than ever in generating and enhancing customer value through analytic applications.
  3. These new roles need on-demand access to data across the entire delivery lifecycle, from idea inception to production delivery and support.
  4. While timelines shrink, data volumes expand, and the lifecycle speeds up, quality cannot suffer.

Therefore, the DBA, development, testing, and production support roles are now participating in activities known as Continuous Delivery, Continuous Testing, and DevOps, with the goal of improving customer service and decreasing cycle and delivery times without decreasing quality.

Some areas that are addressed by our broader solutions for Continuous Delivery, Continuous Testing, and DevOps include:

  • High-performance unload of production data and selective test data environment restore with DB2 Recovery Expert
  • Simplified test data management addressing discovery, subsetting, masking, and refresh with Test Data Management
  • Automated driving of application test and performance workloads with Rational Functional Tester and Rational Performance Tester
  • Release management and deployment automation with Rational UrbanCode

And finally, areas improved with our latest DB2 releases:

  • SQL Development and execution with Data Server Manager
  • Test and Deployment Data Server Monitoring with Data Server Manager
  • SQL capture and analysis with Data Server Manager
  • Client and application Data Access, Workload and Failover management with Data Server Drivers

The benefits of adopting a continuous delivery solution include reduced cycle times, lower risk of failure, improved application performance, and reduced risk of downtime.

With the V11 Releases we have delivered enhancements including:

  • DSM: DB2 LUW V11 support, monitoring improvements for pureScale applications, and extended query history analysis
  • ARF: DB2 LUW V11 support and improvements for analytics usage with BLU Acceleration
  • DS Driver (also DB2 Connect): manageability improvements, performance enhancements, and extended driver support, now including Mac applications

Many of the improvements noted above are also available for dashDB Local, our private cloud offering currently in preview, which leverages DSM as an integral component of its dashboard, and for our public cloud offering, DB2 on Cloud.

Read the announcement for further information: http://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/9/872/ENUSAP16-0139/index.html&lang=en&request_locale=en

Also check out the DB2 LUW Landing Page:  http://www.ibm.com/analytics/us/en/technology/db2/db2-linux-unix-windows.html


Blogger: Michael Connor, Analytics Offering Management, joined IBM in 2001 and focused early in his IBM career on launching the z/OS development tooling business centered on Rational Developer for System z. Since moving to Analytics in 2013, Michael has led the team responsible for core database tooling.

Migrating a DB2 database from a Big Endian environment to a Little Endian environment


By Roger Sanders, DB2 for LUW Offering Manager, IBM

What Is Big-Endian and Little-Endian?

Big-endian and little-endian are terms used to describe the order in which a sequence of bytes is stored in computer memory and, if desired, written to disk. (Interestingly, the terms come from Jonathan Swift’s Gulliver’s Travels, where the Big-Endians were a political faction who broke their boiled eggs on the larger end, defying the Emperor’s edict that all eggs be broken on the smaller end; the Little-Endians were the Lilliputians who complied with the Emperor’s law.)

Specifically, big-endian refers to the order where the most significant byte (MSB) in a sequence (i.e., the “big end”) is stored at the lowest memory address and the remaining bytes follow in decreasing order of significance. Figure 1 illustrates how a 32-bit integer would be stored if the big-endian byte order is used.

Figure 1. Big-endian byte order

For people who are accustomed to reading from left to right, big-endian seems like a natural way to store a string of characters or numbers; since data is stored in the order in which it would normally be presented, programmers can easily read and translate octal or hexadecimal data dumps. Another advantage of using big-endian storage is that the size of a number can be more easily estimated because the most significant digit comes first. It is also easy to tell whether a number is positive or negative—this information can be obtained by examining the bit at offset 0 in the lowest order byte.

Little-endian, on the other hand, refers to the order where the least significant byte (LSB) in a sequence (i.e., the “little end”) is stored at the lowest memory address and the remaining bytes follow in increasing order of significance. Figure 2 illustrates how the same 32-bit integer presented earlier would be stored if the little-endian byte order were used.

Figure 2. Little-endian byte order

One argument for using the little-endian byte order is that the same value can be read from memory, at different lengths, without having to change addresses—in other words, the address of a value in memory remains the same, regardless of whether a 32-bit, 16-bit, or 8-bit value is read. For instance, the number 12 could be read as a 32-bit integer or an 8-bit character, simply by changing the fetch instruction used. Consequently, mathematical functions involving multiple precisions are much easier to write.

Little-endian byte ordering also aids in the addition and subtraction of multi-byte numbers. When performing such operations, the computer must start with the least significant byte to see if there is a carry to a more significant byte—much like an individual will start with the rightmost digit when doing longhand addition to allow for any carryovers that may take place. By fetching bytes sequentially from memory, starting with the least significant byte, the computer can start doing the necessary arithmetic while the remaining bytes are read. This parallelism results in better performance; if the system had to wait until all bytes were fetched from memory, or fetch them in reverse order (which would be the case with big-endian), the operation would take longer.

IBM mainframes and most RISC-based computers (such as IBM Power Systems, Hewlett-Packard PA-RISC servers, and Oracle SPARC servers) utilize big-endian byte ordering. Computers with Intel and AMD x86 processors (CPUs) use little-endian byte ordering instead.

It is important to note that regardless of whether big-endian or little-endian byte ordering is used, the bits within each byte are usually stored as big-endian. That is, there is no attempt to reverse the order of the bit stream that is represented by a single byte. So, whether the hexadecimal value ‘CD’, for example, is stored at the lowest memory address or the highest memory address, the bit order for the byte will always be: 1100 1101

Moving a DB2 Database To a System With a Different Endian Format

One of the easiest ways to move a DB2 database from one platform to another is by creating a full, offline backup image of the database to be moved and restoring that image onto the new platform. However, this process can only be used if the source and target platforms have the same endianness. A change in endian format requires a complete unload and reload of the database, which can be done using the DB2 data movement utilities. Replication-based technologies like SQL Replication, Q Replication, and Change Data Capture (CDC), which transform log records into SQL statements that can be applied to a target database, can be used for these types of migrations as well. On the other hand, DB2 High Availability Disaster Recovery (HADR) cannot be used, because HADR replicates the internal format of the data, thereby maintaining the underlying endian format.
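To make the distinction concrete, here is a minimal sketch of the same-endian case, assuming a database named SAMPLE and a /backups directory (names are illustrative):

# On the source system: take a full offline backup
db2 backup database sample to /backups

# Transfer the image, then on the target system (same endian format only):
db2 restore database sample from /backups

If the endian formats differ, the restore is rejected, and you must unload and reload the data instead, using the utilities described below.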

The DB2 Data Movement Utilities (and the File Formats They Support)

DB2 comes equipped with several utilities that can be used to transfer data between databases and external files. This set of utilities consists of:

  • The Export utility: Extracts data from a database using an SQL query or an XQuery statement, and copies that information to an external file.
  • The Import utility: Copies data from an external file to a table, hierarchy, view, or nickname using INSERT SQL statements. If the object receiving the data is already populated, the input data can either replace or be appended to the existing data.
  • The Load utility: Efficiently moves large quantities of data from an external file, named pipe, device, or cursor into a target table. The Load utility is faster than the Import utility because it writes formatted pages directly into the database, instead of performing multiple INSERT operations.
  • The Ingest utility: A high-speed, client-side utility that streams data from files and named pipes into target tables.
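As a quick illustration, here is how each utility might be invoked from the DB2 command line processor. This is a sketch that assumes the DEPARTMENT table of the DB2 SAMPLE database:

db2 connect to sample

# Export: extract rows from a table to a delimited ASCII file
db2 "export to dept.del of del select * from department"

# Import: insert the file's rows into a table using SQL INSERTs
db2 "import from dept.del of del insert into department"

# Load: write formatted pages directly into the table, replacing its contents
db2 "load from dept.del of del replace into department"

# Ingest: stream the file into the table from the client side
db2 "ingest from file dept.del format delimited insert into department"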

Along with these built-in utilities, IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows, an add-on tool that must be purchased separately, can be used to rapidly unload, extract, and repartition data in a DB2 database. Designed to improve data availability, mitigate risk, and accelerate database migrations, this tool helps DBAs work with very large quantities of data with less effort and faster results.

Regardless of which utility is used, data can only be written to or read from files that utilize one of the following formats:

  • Delimited ASCII
  • Non-delimited or fixed-length ASCII
  • PC Integrated Exchange Format
  • Extensible Markup Language (IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows only.)

Delimited ASCII (DEL)

The delimited ASCII file format is used by a wide variety of software applications to exchange data. With this format, data values typically vary in length, and a delimiter, which is a unique character not found in the data values themselves, is used to separate individual values and rows. Actually, delimited ASCII format files typically use three distinct delimiters:

  • Column delimiters. Characters that are used to mark the beginning or end of a data value. Commas (,) are typically used as column delimiter characters.
  • Row delimiters. Characters that are used to mark the end of a single record or row. On UNIX systems, the new line character (0x0A) is typically used as the row delimiter; on Windows systems, the carriage return/linefeed characters (0x0D 0x0A) are normally used instead.
  • Character delimiters. Characters that are used to mark the beginning and end of character data values. Single quotes (') and double quotes (") are typically used as character delimiter characters.

Typically, when data is written to a delimited ASCII file, rows are streamed into the file, one after another. The appropriate column delimiter is used to separate each column’s data values, the appropriate row delimiter is used to separate each individual record (row), and all character and character string values are enclosed with the appropriate character delimiters. Numeric values are represented by their ASCII equivalent—the period character (.) is used to denote the decimal point (if appropriate); real values are represented with scientific notation (E); negative values are preceded by the minus character (-); and positive values may or may not be preceded by the plus character (+).

For instance, if the comma character is used as the column delimiter, the carriage return/line feed character is used as the row delimiter, and the double quote character is used as the character delimiter, the contents of a delimited ASCII file might look something like this:

10,"Headquarters",860,"Corporate","New York"
15,"Research",150,"Eastern","Boston"
20,"Legal",40,"Eastern","Washington"
38,"Support Center 1",80,"Eastern","Atlanta"
42,"Manufacturing",100,"Midwest","Chicago"
51,"Training Center",34,"Midwest","Dallas"
66,"Support Center 2",112,"Western","San Francisco"
84,"Distribution",290,"Western","Denver"
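Files like this are typically produced by the Export utility, and the default delimiters can be overridden with the MODIFIED BY clause. A brief sketch (table and file names are illustrative):

# Default delimiters: comma, newline, and double quote
db2 "export to org.del of del select * from org"

# Use a semicolon as the column delimiter instead
db2 "export to org2.del of del modified by coldel; select * from org"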

Non-Delimited ASCII (ASC)

With the non-delimited ASCII file format, data values have a fixed length, and the position of each value in the file determines which column and row a particular value belongs to.

When data is written to a non-delimited ASCII file, rows are streamed into the file, one after another, and each column’s data value is written using a fixed number of bytes. (If a data value is smaller than the fixed length allotted for a particular column, it is padded with blanks.) As with delimited ASCII files, a row delimiter is used to separate each individual record (row) — on UNIX systems the new line character (0x0A) is typically used; on Windows systems, the carriage return/linefeed characters (0x0D 0x0A) are used instead. Numeric values are treated the same as when they are stored in delimited ASCII format files.

Thus, a simple non-delimited ASCII file might look something like this:

10Headquarters      860Corporate New York
15Research          150Eastern   Boston
20Legal              40Eastern   Washington
38Support Center 1   80Eastern   Atlanta
42Manufacturing     100Midwest   Chicago
51Training Center    34Midwest   Dallas
66Support Center 2  112Western   San Francisco
84Distribution      290Western   Denver
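Because the file contains no delimiters, the Import and Load utilities must be told where each column begins and ends. A sketch using METHOD L with the column positions from the layout above (file and table names are illustrative):

# Columns: 1-2 dept number, 3-20 name, 21-23 count, 24-33 division, 34-46 city
db2 "import from org.asc of asc method l (1 2, 3 20, 21 23, 24 33, 34 46) insert into org"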


PC Integrated Exchange Format (IXF)

The PC Integrated Exchange Format file format is a special file format that is used almost exclusively to move data between different DB2 databases. Typically, when data is written to a PC Integrated Exchange Format file, rows are streamed into the file, one after another, as an unbroken sequence of variable-length records. Character data values are stored in their original ASCII representation (without additional padding), and numeric values are stored as either packed decimal values or as binary values, depending upon the data type used to store them in the database. Along with data, table definitions and associated index definitions are also stored in PC Integrated Exchange Format files. Thus, tables and any corresponding indexes can be both defined and populated when this file format is used.
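Because the definition travels with the data, a table can be recreated on a target database directly from the file. A minimal sketch (names are illustrative):

# Export the table's data along with its definition
db2 "export to org.ixf of ixf select * from org"

# On the target database, create the table and populate it in one step
db2 "import from org.ixf of ixf create into org"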

Extensible Markup Language (XML)

Extensible Markup Language (XML) is a simple, yet flexible text format that provides a neutral way to exchange data between different devices, systems, and applications. Originally designed to meet the challenges of large-scale electronic publishing, XML is playing an increasingly important role in the exchange of data on the web and throughout companies. XML data is maintained in a self-describing format that is hierarchical in nature. Thus, a very simple XML file might look something like this:

<?xml version="1.0" encoding="UTF-8" ?>
<customerinfo>
  <name>John Doe</name>
  <addr country="United States">
    <street>25 East Creek Drive</street>
    <city>Raleigh</city>
    <state-prov>North Carolina</state-prov>
    <zip-pcode>27603</zip-pcode>
  </addr>
  <phone type="work">919-555-1212</phone>
  <email>john.doe@xyz.com</email>
</customerinfo>

As noted earlier, only IBM InfoSphere Optim High Performance Unload for DB2 for Linux, UNIX and Windows can work with XML files.

db2move and db2look

As you might imagine, the Export utility, together with the Import utility or the Load utility, can be used to copy a table from one database to another. These same tools can also be used to move an entire database from one platform to another, one table at a time. But a more efficient way to move an entire DB2 database is by using the db2move utility. This utility queries the system catalog of a specified database and compiles a list of all user tables found. Then it exports the contents and definition of each table found to individual PC Integrated Exchange Format (IXF) formatted files. The set of files produced can then be imported or loaded into another DB2 database on the same system, or they can be transferred to another server and be imported or loaded to a DB2 database residing there.

The db2move utility can be run in one of four different modes: EXPORT, IMPORT, LOAD, or COPY. When run in EXPORT mode, db2move utilizes the Export utility to extract data from a database’s tables and externalize it to a set of files. It also generates a file named db2move.lst that contains the names of all of the tables that were processed, along with the names of the files that each table’s data was written to. The db2move utility may also produce one or more message files containing warning or error messages that were generated as a result of the Export operation.

When run in IMPORT mode, db2move uses the file db2move.lst to establish a link between the PC Integrated Exchange Format (IXF) formatted files needed and the tables into which data is to be populated. It then invokes the Import utility to recreate each table and its associated indexes using information stored in the external files.

And, when run in LOAD mode, db2move invokes the Load utility to populate tables that already exist with data stored in PC Integrated Exchange Format (IXF) formatted files. (LOAD mode should never be used to populate a database that does not already contain table definitions.) Again, the file db2move.lst is used to establish a link between the external files used and the tables into which their data is to be loaded.

Unfortunately, the db2move utility can only be used to move table and index objects. And if the database to be migrated contains other objects such as aliases, views, triggers, user-defined data types (UDTs), user-defined functions (UDFs), and stored procedures, you must duplicate those objects in the target database as well. That’s where the db2look utility comes in handy. When invoked, db2look can reverse-engineer an existing database and produce a set of Data Definition Language (DDL) SQL statements that can then be used to recreate all of the data objects found in the database that was analyzed. The db2look utility can also collect environment registry variable settings, configuration parameter settings, and statistical (RUNSTATS) information, which can be used to duplicate a DB2 environment on another system.
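Putting the two utilities together, a cross-endian migration might look like the following sketch, assuming a database named SAMPLE (paths and db2look options will vary by environment):

# On the big-endian source system
db2look -d sample -e -l -x -o sample.ddl   # capture DDL, table spaces, and grants
db2move sample export                      # unload all user tables to IXF files

# Copy sample.ddl, db2move.lst, and the exported files to the target system

# On the little-endian target system
db2 create database sample
db2 -tvf sample.ddl                        # recreate the objects from the DDL
db2move sample load                        # repopulate the tables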


IBM Insight 2015 – A guide to the DB2 sessions


By Sajan Kuttappa, Marketing Manager, Analytics Platform Services

In just a few weeks, thousands of people will converge on Mandalay Bay in Las Vegas for the much-talked-about IBM Insight 2015 conference.

If you are a DB2 professional, an information architect, or a database professional interested in the latest in in-memory technology, DB2 for SAP workloads, and database administration tools, there is an excellent lineup of sessions by subject matter experts planned for you at the Insight conference. This article highlights the topics that will be covered so that you can create your agenda in advance.

IBM DB2 continues to be the best database option for SAP environments. Experts will share DB2 BLU best practices for SAP systems and the latest features of DB2 that enable in-memory processing, high availability, and scalability for SAP. For those interested in new deployment options like cloud, we recommend sessions covering IBM’s portfolio of cloud solutions for SAP on DB2 customers. The hands-on labs at the conference will showcase how to best leverage DB2 BLU for SAP Business Warehouse.

Don’t miss the many client stories about how organizations benefited from DB2’s in-memory technology (BLU Acceleration) to enable speed-of-thought analytics for their business users, share lessons learned and best practices, and talk about enhancements and tips for DB2 LUW and DB2 BLU. If you are planning for increased workloads, look out for the session on scaling up BLU Acceleration in a high-concurrency environment.

Learn how upgrading to Data Server Manager for DB2 can simplify database administration, optimize performance with expert advice, and reduce costs across the enterprise. You can also hear how our clients achieved cost savings and reduced time-to-market by migrating to DB2 LUW. Also on the menu is a database administration crash course for DB2 LUW conducted by top IBM Champions in the field.

There is a lot that will take place in Las Vegas. A week of high-quality educational sessions, hands-on labs, and panel discussions awaits, so attendees can walk away with better insight into how DB2 integrates into big data analysis, how it delivers in the cloud, and more. We look forward to meeting you in Las Vegas for Insight 2015; and whatever happens in Vegas (at Insight) should definitely not stay in Vegas!

A list of all the sessions can be found at the links below:

DB2 for SAP:   http://bit.ly/db2sapatinsight
Core DB2 for the enterprise: http://bit.ly/db2coreatinsight
DB2 with BLU Acceleration: http://bit.ly/db2bluatinsight
DB2 LUW tools / Administration: http://bit.ly/db2toolsatinsight

So start planning your agenda for Insight 2015.

Follow us on Twitter (@IBM_DB2) and Facebook (IBM DB2) for regular updates about the conference and key sessions.

Top ten reasons to love IBM Data Server Manager


By Radha Gowda, Technical Marketing, IBM Analytics

Today many in-house or multi-vendor data management tools lack end-to-end visibility, usability and scalability. Database administrators often struggle to keep workflows and analytics operating at optimum efficiency and are unable to provide the continuous availability needed for ever-increasing transaction volumes. By effectively integrating database management functions, organizations can streamline routine maintenance and free up resources for more strategic, business-enhancing projects.

IBM® Data Server Manager is an integrated database management tool that can administer, monitor, manage, and optimize the performance of hundreds of IBM DB2® for Linux, UNIX and Windows databases across the enterprise. It is easy to install, cloud ready, and can be accessed remotely from any supported browser.

Find out more reasons to love Data Server Manager from the infographic below, and try the base edition today: ibm.biz/dsmdownload.

[Infographic: Top 10 reasons to love IBM Data Server Manager]

Continuous availability benefits of pureScale now available in a new low cost DB2 offering

by Kelly Schlamb, DB2 pureScale and PureData Systems Specialist, IBM

Today, IBM has announced a set of new add-on offerings for DB2, which includes the IBM DB2 Performance Management Offering, IBM DB2 BLU Acceleration In-Memory Offering, IBM DB2 Encryption Offering, and the IBM DB2 Business Application Continuity Offering. More details on these offerings can be found here. Generally speaking, the intention of these offerings is to make some of the significant capabilities and features of DB2 available as low cost options for those not using the advanced editions of DB2, which already include these capabilities.

If you’ve read any of my past posts you know that I’m a big proponent of DB2’s pureScale technology. And staying true to form, the focus of my post here is on the IBM DB2 Business Application Continuity (BAC) offering, which is a new deployment and licensing model for pureScale. This applies to DB2 10.5 starting with Fix Pack 5 (the current fix pack level, released in December 2014).

For more information on DB2 pureScale itself, I suggest taking a look here and here. But to boil it down to a few major points, it’s an active/active, shared data, clustering solution that provides continuous availability in the event of both planned and unplanned outages. pureScale is available in the DB2 Advanced Workgroup Server Edition (AWSE) and Advanced Enterprise Server Edition (AESE). Its architecture consists of the Cluster Caching Facilities (CF), which provide centralized locking and data page management for the cluster, and DB2 members, which service the database transaction requests from applications. This multi-member architecture allows workloads to scale-out and workload balance across up to 128 members.

While that scale-out capability is attractive to many people, some have told me that they love the availability that pureScale provides but that they don’t have the scalability needs for it. And in this case they can’t justify the cost of the additional software licenses to have this active/active type of environment – or to even move from their current DB2 Workgroup Server Edition (WSE) or Enterprise Server Edition (ESE) licensing up to the corresponding advanced edition that contains pureScale.

This is where BAC comes in. With BAC – which is a purchasable option on top of WSE and ESE – you can create a two-member pureScale cluster. The difference, and what makes this offering interesting and attractive for some, is that the cluster can be used in an active/active way, but it’s licensed as an active/passive cluster. Specifically, one member of the cluster is used to run your application workloads and the other member is available as a standby in case that primary member fails or has to be brought down for maintenance. But isn’t that passive? No… and the reason is that this secondary member doesn’t just sit idle waiting for that to happen. Under the BAC offering terms, you are also allowed to run administrative operations on this secondary “admin” member. In fact, you are allowed to do all of the following types of work on this member:

  • Backup, Restore
  • Runstats
  • Reorg
  • Monitoring (including DB2 Explain and any diagnostic or problem determination activities)
  • Execution of DDL
  • Database Manager and database configuration updates
  • Log based capture utilities for the purpose of data capture
  • Security administration and setup
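For instance, a typical maintenance window on the secondary member might look like this sketch (database and table names are illustrative):

# Run on the standby "admin" member while the primary services applications
db2 backup database proddb online to /backups
db2 "runstats on table appschema.orders with distribution and detailed indexes all"
db2 "reorg table appschema.orders"
db2 get db cfg for proddb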

By offloading this administrative work off of the primary member, you leave it with more capacity to run your application workloads. And with BAC, you are only fully licensing the one primary member where your applications are running (for either WSE or ESE plus BAC). The licensing of the secondary member, on the other hand, falls under DB2’s warm/idle standby licensing which means a much reduced cost for it (e.g. for PVU pricing the secondary member would only be 100 PVUs of WSE or ESE plus 100 PVUs of BAC). For more details on actual software costs, please talk to your friendly neighborhood IBM rep.

And because this is still pureScale at work here, if there’s a failure of the primary member, the application workloads will automatically fail over to the secondary member. Likewise, the database will stay up and remain accessible to applications on the secondary member when the primary member undergoes maintenance – like during a DB2 fix pack update. In both of these cases the workload is allowed to run on the secondary member, and when the primary member is brought back up, the workloads will fail back to it. All of the great availability characteristics of pureScale at a lower cost!

If you contrast this with something like Oracle RAC One Node, which has some similar characteristics to IBM DB2 BAC, only the primary node (instance) in Oracle RAC One Node is active; the standby node is not. In fact, it’s not even started until the work has to fail over, so there’s a period of time where the cluster is completely unavailable. That means a longer outage, slower recovery times, and no ability to run administrative work on the idle node like you can with BAC.

Sounds great, right?

And for those of you that do want the additional scale-out capability, but like the idea of having that standby admin member at a reduced cost, IBM has thought of you too. Using AWSE or AESE (the BAC offering isn’t involved here), you can implement a pureScale cluster with multiple primary members with a single standby admin member. The multiple primary members are each fully licensed for AWSE or AESE, but the single standby admin member is only licensed as a passive server in the cluster (again, using the PVU example that would only be 100 PVUs of either AWSE or AESE). In this case, you can do any of that administrative work previously described on the standby member, and it’s also available for workloads to failover to if there are outages for one or more of the primary members in the cluster.

Happy clustering!