IBM DB2 sessions at the IBM World of Watson conference

by Sajan Kuttappa, Marketing Manager, IBM Analytics Platform

As organizations develop next-generation applications for the digital era, many are using cognitive computing ushered in by IBM Watson technology. To make the most of these next-generation applications, you need a next-generation database that can handle a massive volume of data while delivering high performance to support real-time analytics. At the same time, it must provide data availability for demanding applications, scalability for growth, and flexibility for responding to change.

IBM DB2 enables you to meet these challenges by providing enterprise-class scalability while also leveraging adaptive in-memory BLU Acceleration technology to support the analytics needs of your business. DB2 also handles structured and semi-structured data from a variety of sources to provide deep insight. With the ability to support thousands of terabytes, you can use historic and current data to identify trends and make sound decisions. The new DB2 11.1 release, announced earlier this year, comes packed with enhancements for BLU Acceleration, OLTP, pureScale, security, SQL, and more!

Whether you are interested in an overview of the improvements available with the new release or an in-depth understanding of the new enhancements, IBM World of Watson is the place to be. The IBM Insight conference is now part of IBM World of Watson 2016, on October 24-27, and continues to be the premier industry event for data and analytics professionals, delivering unmatched value and exciting onsite opportunities to connect with peers, hear from thought leaders, experience engaging content, and receive training and certification. This article highlights the key DB2 sessions at the IBM World of Watson conference.

We will start with Session #3483 by Matt Huras, IBM DB2 Architect, who will provide a technical overview of the new release and the value the new features provide for your installations. We also have the following sessions that provide deeper coverage of the new enhancements available with the new release:

  • DB2 11.1 includes significant enhancements in the area of availability, particularly around the pureScale feature. Attend Session #1433 – “The Latest and Greatest on Availability and pureScale in DB2 11.1” to learn about these enhancements, including simplification of deployment, new operating system and virtualization options, HADR updates, and improvements in the areas of management and multitenancy.
  • DB2 11.1 packs several enhancements to protect your data, whether on-premises or on the cloud. Look out for Session #1038 – “DB2 Security: From the Data Center to the Cloud” for an overview of the various security mechanisms available with the latest version of DB2 for Linux, UNIX, and Windows; the session will also introduce several things to consider if you plan on moving your DB2 database environment from the data center to the cloud.
  • There is a lot of talk about in-memory computing and columnar multi-partitioned databases to improve analytic query performance. DB2 11.1 brings MPP scale to BLU! If you need a detailed step-by-step approach to implementing the newest version of DB2, come learn about often overlooked but very important best practices to understand before and after upgrading by attending Session #1290 – “Upgrading to DB2 with the Latest Version of BLU Acceleration.”
  • DB2 11.1 is the foundation for hybrid cloud database deployments. In addition to being available to install on cloud-based infrastructure, it is also the foundation of the DB2 on Cloud and dashDB cloud data service offerings. Attend Session #1444 – “Hybrid Cloud Data Management with DB2 and dashDB” to learn more about these different options and when you’d want to choose one over another.
  • If you are deploying DB2 for SAP applications, we have lined up Session #2629 by SAP and IBM experts – “IBM DB2 on SAP – V11.1 Update and Recent Developments”.  In this session, we will give an overview of recent SAP on DB2 extensions and which DB2 V11.1 features are most important for SAP applications.  One of our clients – BCBS of TN will also share their experiences with DB2 V11.1 around analytics and the benefits that they’ve seen.

Our clients Nordea Group and Argonne National Laboratory will also share their experience with deploying IBM Data Server Manager. The hands-on lab HOL 1766B – “DB2 High Availability and Disaster Recovery with Single or Multiple Standby Databases” lets you configure and manage a production database with single or multiple standby databases using DB2 HADR facilities.

If you are a new user of DB2, you can also read this guide to the introductory DB2 sessions. Whether you are determining your next move or optimizing your existing investments in data and analytics capabilities, the IBM World of Watson 2016 conference is the place for you. This is your opportunity to get the training, answers, certifications and insights you need to be at the top of your game. If you have not yet registered for the conference, visit bit.ly/WorldofWatson to register.

IBM DB2 – the database for the cognitive era at IBM World of Watson 2016


by Sajan Kuttappa, Marketing Manager, IBM Analytics Platform

IBM Insight, the premier IBM data, analytics and cognitive conference, is now part of IBM World of Watson 2016, to be held in Las Vegas October 24-27. This year, attendees will be able to experience first-hand the world of cognitive capabilities that IBM has been at the forefront of. World of Watson incorporates the kind of information you gained from IBM Insight – the tools and best practices to manage your data – and raises the game. You’ll also see how Watson’s capabilities give you a broad view of your business, its competitive landscape and what it takes to make your customers act. Our CEO, Ginni Rometty, will deliver a keynote at this year’s conference. And on the evening of October 26, our special event will feature Grammy winner Imagine Dragons.

Whether you’re a beginner or a seasoned DB2 professional, there is a treasure trove of information that you could walk away with. IBM experts and your peer speakers will share information about migration guidelines, new features of recent releases, implementation experiences, and much more. Likewise, our hands-on-labs (HOL) complement these topics to further enrich the experience.

For users new to DB2, we recommend attending session 3585, “DB2 v11.1 Fundamentals,” by Roger Sanders. This presentation will provide a great overview of DB2 for Linux, UNIX and Windows. It will take attendees through the concepts covered on the DB2 11.1 Fundamentals certification exam: planning, security, working with databases and data objects, using SQL, and data concurrency. It will also provide a brief introduction to other DB2-based offerings like DB2 on Cloud and dashDB.

IBM provides a number of database options for organizations that would like to deploy applications on the cloud, be it a fully managed or a hosted environment. IBM dashDB for transactions provides a fully managed database service in the cloud that is optimized for online transaction processing workloads. DB2 on Cloud is a hosted service that offers the agility of cloud deployment and the management control you enjoy with the on-premises software.

  • If you would like to understand the capabilities of the dashDB for Transactions offering, consider attending session 3471 on “dashDB for Transactions: Fully Managed and Truly Awesome,” where we will discuss key features of this enterprise class service and its design and implementation for availability and performance.
  • The DB2 on Cloud offering gives you everything you know and love about DB2 for Linux, UNIX and Windows software in a cloud environment hosted by IBM. You still have full DBA control to customize the database. You can rapidly provision it for instant productivity. And the monthly subscription-based licensing makes it easier to predict and control costs. As with any OLTP database supporting your critical applications, high availability and disaster recovery concerns are top of mind. We have lined up a session (Session #1439) that will help you understand how to “Implement High Availability and Disaster Recovery for DB2 on the Cloud.”

You can learn how to further optimize DB2 performance with management tools like IBM Data Server Manager. The hands-on lab 3141A – “Secrets of the Pros: Using Data Server Manager to Monitor, Manage and Mitigate Performance Problems” will teach you how to use the latest version of IBM Data Server Manager to diagnose and resolve performance problems.

We hope that you can take advantage of these sessions by attending the World of Watson conference. Stay tuned for our next article on sessions for “Intermediate” skill sets and “Advanced” users.

We look forward to seeing you in Vegas. If you have not yet registered, please visit this link for more details – http://bit.ly/WorldofWatson

DB2 Direct: A new way of consuming your database


by Phillip Downey, WW Program Director, IBM Analytics Platform Hybrid Cloud Strategy

 

In DB2 11.1, we introduced two new, easy-to-consume DB2 Direct editions: DB2 Direct Advanced and DB2 Direct Standard. Both editions bring a new dimension to the database offerings for small and large enterprise clients looking for the flexibility and scalability of the hybrid cloud. They can be acquired directly online via Passport Advantage and offer a simplified licensing metric and a monthly subscription pricing model that is ideal for private, public and hybrid cloud deployments.

Packaging

• DB2 Direct Advanced Edition

The DB2 Direct Advanced Edition has all DB2 server and client features from DB2 Advanced Server Edition, including encryption, multitenant deployments, adaptive compression, BLU Acceleration, SQL compatibility with PL/SQL, Data Server Manager, and the pureScale and database partitioning feature options. It also includes federation capabilities providing access to non-DB2 data sources like Oracle, MS SQL Server, Teradata, Hadoop, Netezza, Spark and other solutions.

Advanced Federation Capabilities

It also includes access to 10 user licenses of InfoSphere Data Architect per installation for designing and deploying database implementations.

• DB2 Direct Standard Edition

DB2 Direct Standard Edition is modeled on DB2 Workgroup Edition and provides encryption, pureScale for continuously available HA deployments, multitenant deployments, SQL compatibility with PL/SQL, Data Server Manager Base Edition, table partitioning, multidimensional clustering, parallel query and concurrent connection pooling. It is limited to 16 cores and 128 GB of RAM and is ideal for small to mid-sized database applications, providing enterprise-level availability, query performance and security as well as unlimited database size.

You can take advantage of the new subscription model to lower costs and enjoy licensing flexibility for on-premises and cloud deployments:

Licensing Metrics:

Virtual Processor Core (VPC) charge metric

  • Virtual processor core licensing gives you flexible, simplified sub-capacity licensing options that enable you to optimize your licensing to meet your business requirements.
  • There are two licensing scenarios you can apply:
    • License the sum of all available Virtual Processor Cores on all Virtual Servers the Direct edition is installed on, or
    • Where you can identify the Physical Server and it is more cost-effective to do so, license all available Processor Cores on the Physical Server, regardless of the number of virtual machines on the system.
  • Benefits: this approach keeps licensing simple for private and public cloud deployments alike and enables you to optimize your licensing.

Pricing Structure:

Subscription-based pricing

      • DB2 Direct Advanced Edition: $354 USD per month per VPC
      • DB2 Direct Standard Edition: $135 USD per month per VPC

(Prices as of May 10th, 2016 in the United States.)

Each deployment requires a minimum of 2 VPCs, except in the case of warm standby, which requires only one VPC.

These editions are ideal for customers who want to move to a subscription-based model on their private cloud or with a third-party host and pay as their applications grow in size. They are also ideal for ISVs who offer their applications to customers on a subscription model and want an easy-to-order database at competitive subscription pricing.

Understanding the Virtual Processor Core Metric

Virtual Processor Cores are defined to simplify licensing in private or public cloud deployment environments. You can deploy DB2 licenses with confidence even when you are not fully aware of the underlying infrastructure, and you can easily analyze your licensing requirements, including in sub-capacity situations.

A Virtual Processor Core is a Processor Core in an unpartitioned Physical Server, or a virtual core assigned to a Virtual Server.  The Licensee must obtain entitlement for each Virtual Processor Core made available to the Program.

For each Physical Server, the Licensee must have sufficient entitlements for the lesser of:

  1. the sum of all available Virtual Processor Cores on all Virtual Servers made available to the Program or
  2. all available Processor Cores on the Physical Server.

Other key Virtual Processor Core considerations to understand:

    • If the number of VPCs is greater than the number of physical cores, then you only need to license the number of physical cores on the machine
    • Minimum of 2 VPCs per deployment (1 VPC for idle/warm standby)

You can determine the VPC requirement through DB2 itself: execute the following command on each physical or logical server where DB2 is installed, then divide the OnlineCPU count by the HMTDegree result (threading degree) to get the number of virtual CPUs present.

“db2pd -osinfo”
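If you want to script that calculation, here is a rough Python sketch (not a supported utility). It shells out to db2pd and divides OnlineCPU by HMTDegree; the parsing assumes the CPU figures appear as a header row followed by a value row, which can vary by platform, so adjust it to your actual output.

```python
import subprocess

# Run db2pd -osinfo and capture its text output (assumes db2pd is on the PATH).
text = subprocess.run(["db2pd", "-osinfo"],
                      capture_output=True, text=True).stdout

# Assumed layout: a header row containing OnlineCPU and HMTDegree,
# followed by a row of values, e.g.
#   TotalCPU  OnlineCPU  ConfigCPU  Speed(MHz)  HMTDegree
#   16        16         16         2400        2
lines = text.splitlines()
online_cpu = hmt_degree = None
for i, line in enumerate(lines):
    cols = line.split()
    if "OnlineCPU" in cols and "HMTDegree" in cols and i + 1 < len(lines):
        values = lines[i + 1].split()
        online_cpu = int(values[cols.index("OnlineCPU")])
        hmt_degree = int(values[cols.index("HMTDegree")])
        break

if online_cpu is None or hmt_degree is None:
    raise SystemExit("Could not find OnlineCPU/HMTDegree in db2pd output")

# Virtual CPUs present = OnlineCPU divided by the threading degree.
print("VPCs on this server:", online_cpu // hmt_degree)
```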

An example of this in a cloud deployment:

  • A customer buys a virtual cloud server as a service on an internal private cloud or from an MSP like SoftLayer, Azure, Amazon or Rackspace
  • They purchase an 8-core virtual CPU environment
  • The customer runs “db2pd -osinfo” on the machine, which shows an HMTDegree of 1 and an OnlineCPU count of 8

The customer must license 8 VPCs for this environment.

An example of a private cloud deployment using VMware

  • A customer creates multiple VMware hosts on a server to run DB2. The server is a 2-socket server with 8 cores per processor (16 physical cores) and hyper-threading turned on to a degree of 2. Each of the 11 virtual VMs deployed reports 6 virtual processors.
  • The customer runs “db2pd -osinfo” across all VMware hosts, reporting a total OnlineCPU count of 64 across the 11 virtual machines (HMTDegree of 1 for all VMs)

Because the hardware can be physically identified as a 16-core server, the customer only has to pay for 16 VPCs, not 64 as some competitors’ programs would require, since it is the lesser of the two numbers.

Stay tuned for more information about the enhancements that DB2 v11.1 comes with. You may also want to attend the upcoming webinar on June 14 to learn how to maximize your data infrastructure investments. Register here: http://bit.ly/v11launchwebcast

 

IBM Insight 2015 – A guide to the DB2 sessions


By Sajan Kuttappa, Marketing Manager, Analytics Platform Services

In just a few weeks, thousands of people will converge on Mandalay Bay in Las Vegas for the much talked about IBM Insight 2015 conference.

If you are a DB2 professional, an information architect or a database professional interested in the latest in in-memory technology, DB2 for SAP workloads and database administration tools, there is an excellent lineup of sessions by subject matter experts planned for you at the Insight conference. This article highlights the topics that will be covered so that you can create your agenda in advance.

IBM DB2 continues to be the best database option for SAP environments. Experts will share DB2 BLU Best Practices for SAP systems and the latest features of DB2 that enable in-memory, high-availability and scalability for SAP. For those interested in new deployment options like Cloud, we recommend sessions covering IBM’s portfolio of Cloud solutions for SAP on DB2 customers. The Hands-on-Labs at the conference will showcase how to best leverage DB2 BLU for SAP Business Warehouse.

Don’t miss the many client stories about how they benefited from DB2’s in-memory technology (BLU Acceleration) to enable speed-of-thought analytics for their business users; clients will also share their lessons learned and best practices and talk about enhancements and tips for DB2 LUW and DB2 BLU. If you are planning for increased workloads, look out for the session on scaling up BLU Acceleration in a high-concurrency environment.
Learn how upgrading to Data Server Manager for DB2 can simplify database administration, optimize performance with expert advice and reduce costs across the enterprise. Apart from this, you can hear how our clients achieved cost savings and reduced time-to-market by migrating to DB2 LUW. Also on the menu is a database administration crash course for DB2 LUW conducted by top IBM Champions in the field.

There is a lot that will take place in Las Vegas. A week of high-quality educational sessions, hands-on-labs and panel discussions awaits, so attendees can walk away with better insights into how DB2 integrates into big data analysis, how it delivers in the cloud and more. We look forward to meeting you in Las Vegas for Insight 2015; and whatever happens in Vegas (at Insight) should definitely not stay in Vegas!

A list of all the sessions can be found at the links below:

DB2 for SAP:   http://bit.ly/db2sapatinsight
Core DB2 for the enterprise: http://bit.ly/db2coreatinsight
DB2 with BLU Acceleration: http://bit.ly/db2bluatinsight
DB2 LUW tools / Administration: http://bit.ly/db2toolsatinsight

So start planning your agenda for Insight 2015.

Follow us on Twitter (@IBM_DB2) and Facebook (IBM DB2) for regular updates on the conference and key sessions.

Inside IBM Insight 2014: See How Clients Are Revolutionizing Their Business With DB2

By Cassandra Desens
IBM Software Group, Information Management

This will be my second year at Insight and I must say, it is my favorite time of the work year. As frenzied and busy as the planning is, the return on intangible investments is worth it. Ask anyone – after just a few short days in Vegas you walk away with a renewed sense of excitement around IBM, its products, lots of free stuff, and usually a head cold  (I’m no doctor, but casinos are definitely germ incubators).

Last year I was amazed at the hundreds of events and opportunities the conference presented. Demos, a city-sized EXPO floor, labs, 1-on-1s, large general and breakout sessions, product-specific and networking events…the list goes on and on. There are so many different ways to spend your short week with us at Mandalay Bay. The question is – what will you choose to attend?
Allow me to suggest the client-led sessions.

As I said in my last blog entry, Balluff loves BLU Acceleration too: “Hearing customer stories puts our product claims into perspective. Success stories give us the ultimate answer to the elusive question ‘How does this relate to me and my business?'”

There is nothing that gets me more excited than hearing how a customer was able to revolutionize their business with DB2 for Linux, UNIX and Windows, so today I want to recommend five Insight 2014 sessions you can attend to hear how companies around the world are leveraging DB2. I encourage you to weave these into your busy schedule next week. I promise you won’t be disappointed!

1) IWA-4145A: Keeping Data Analysts in the Pink: IBM DB2 with BLU Acceleration on Power System at BlueCross BlueShield of TN
Date & Time: Monday October 27th, from 2:00-3:00pm
Speakers: Randy Wilson, Lead DB2 for LUW DBA at BlueCross BlueShield of Tennessee, and Danny Arnold, Software Client Leader at IBM
Abstract: When BlueCross BlueShield of Tennessee (BCBS of TN) data analysts ran certain queries, the queries did not run in a timely manner and took hours to complete. In need of a solution, the company decided to test the IBM DB2 10.5 with BLU Acceleration technology. The initial results were astounding, showing queries completing in just seconds, not hours. The company then installed DB2 10.5 with BLU Acceleration on a large test environment on the IBM POWER7 system. It worked with its data analyst team of IBM Cognos and SAS users to identify queries that could benefit from the BLU Acceleration technology. In this session, you learn about BCBS of TN’s experience with DB2 BLU and how it allows the company to make better business decisions.

2) IWS-5729A: IBM BLU Acceleration: Adoption Guidelines and Lessons Learned at Coca-Cola Bottling Company

Date & Time: Monday October 27th, from 3:30-4:30pm
Speakers: Andrew Juarez, Lead SAP Basis DBA at Coca-Cola Bottling Co., and Naresh Chainani, DB2 Development at IBM
Abstract: IBM DB2 with BLU Acceleration can provide performance improvements that range from 10 times to 25 times and beyond for analytical queries, with minimal tuning. If you are eager to understand how your business can take advantage of this technology for your warehouse or data marts, attend this session. You learn how quick and easy it is to get started with BLU. You see a specific real-world example based on the Coca-Cola Bottling Company’s experience with DB2 BLU, from installation to performance tests. You also see how DB2 BLU fits into the SAP BW environment.

3) IWS-5338B: Why the University of Toronto Loves BLU: Faster, Smaller, and Simpler in Only a Few Hours
Date & Time: Tuesday October 28th, 10:00-11:00am
Speakers: Brenda Boshoff, Senior Database Administrator at University of Toronto, and Adam Storm, Senior Technical Staff Member and Master Inventor at IBM
Abstract: It’s not every day that a new database technology promises revolutionary performance gains, improved compression and radically reduced time to value. It’s even less often that such aggressive claims can be validated by customers. This session takes you through the adoption cycle of IBM BLU Acceleration at the University of Toronto. You learn how an aggressive one-day proof of concept achieved dramatic performance gains in less than three hours. You gain insight into the process of taking BLU Acceleration through the QA cycle and into production. If you’re considering moving to BLU Acceleration, attend this session to see how the University of Toronto successfully adopted the technology in its environment and achieved dramatic results.

4) IWS-5787A: Simplify your SAP Landscape – Superior SAP BI Performance @ Yazaki with DB2 BLU
Date & Time: Wednesday October 29th from 11:15am-12:15pm
Speakers: Juergen Laudien, Infrastructure Manager at Yazaki Europe, Bernd Ertl, Competitive Sales Lead at IBM, Karsten Stoehr, Competitive Database Sales Specialist at IBM, Carsten Dieterle, SAP Infrastructure Solution Architects (SOLAR) – SAP Growth Program at IBM
Abstract: This session is about simplifying your SAP BW landscape by taking SAP BWA and HANA appliances out of the IT infrastructure and replacing them with DB2 BLU instead of SAP HANA; many DB2 clients have done this. Their experiences in proofs of concept as well as production environments show that DB2 BLU needs up to 85 percent fewer cores, less storage and less RAM compared to SAP BWA or HANA appliances. These hardware savings and the reduced software license costs allow clients to run their business faster and more efficiently on DB2 BLU.

5) IDB-4606A: Ameriprise Financial’s Deployment Experiences with IBM DB2 High Availability Disaster Recovery
Date & Time: Wednesday October 29th from 4:30-5:45pm
Speakers: Chip MicAllister, UDB Database Technical Lead at Ameriprise, and Dale McInnis, STSM and Chief Architect, IM Cloud Availability
Abstract:  In this session, you learn about the high availability disaster recovery (HADR) feature in IBM DB2. Then, you will hear about Ameriprise Financial’s experience when deploying DB2 HADR in its production environment.

Safe travels everyone and see you soon!

pureScale at the Beach – What’s New in the DB2 “Cancun Release”

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Today, I’m thinking about the beach. We’re heading into the last long weekend of the summer, the weather is supposed to be nice, and later today I’ll be going up to the lake with my family. But that’s not really why the beach is on my mind. Today, the DB2 “Cancun Release” was announced and made available, and as somebody who works extensively with DB2 and pureScale, it’s a pretty exciting day.

I can guarantee you that over the next little while, you’re going to be hearing a lot about the various new features and capabilities in the “Cancun Release” (also referred to as Cancun Release 10.5.0.4 or DB2 10.5 FP4). For instance, the new Shadow Tables feature, which exploits DB2 BLU Acceleration, allows for real-time analytics processing and reporting on your transactional database system. Game changing stuff. However, I’m going to leave those discussions up to others or for another time, and today I’m going to focus on what’s new for pureScale.

As with any major new release, some things are flashy and exciting, while other things don’t have that same flash but make a real difference in the everyday life of a DBA. Examples of the latter in Cancun include the ability to perform online table reorgs and incremental backups (along with support for DB2 Merge Backup) in a pureScale environment, additional Optim Performance Manager (OPM) monitoring metrics and alerts around the use of HADR with pureScale, and the ability to take GPFS snapshot backups. All of this leads to improved administration and availability.

There’s a large DB2 pureScale community out there, and over the last few years we’ve received a lot of great feedback on the up-and-running experience. Based on this, various enhancements have been made to provide faster time to value through improved ease of use and serviceability of installation, configuration, and updates. This includes improved installation documentation, enhanced prerequisite checking, beefed-up common error and warning messages, improved usability for online fix pack updates, and the ability to perform version upgrades of DB2 members and CFs in parallel.

In my opinion, the biggest news (and yes, the flashiest stuff) is the addition of new deployment options for pureScale. Previously, the implementation of a DB2 pureScale cluster required specialized network adapters — RDMA-capable InfiniBand or RoCE (RDMA over Converged Ethernet) adapter cards. RDMA stands for Remote Direct Memory Access and it allows for direct memory access from one computer into that of another without involving either one’s kernel, so there’s no interrupt handling and no context-switching that takes place as part of sending a message via RDMA (unlike with TCP/IP-based communication). This allows for very high-throughput, low-latency message passing, which DB2 pureScale uniquely exploits for very fast performance and scalability. Great upside, but a downside is the requirement on these adapters and an environment that supports them.

Starting in the DB2 Cancun Release, a regular, commodity TCP/IP-based interconnect can be used instead (often referred to as using “TCP/IP sockets”). What this gives you is an environment that has all of the high availability aspects of an RDMA-based pureScale cluster, but it isn’t necessarily going to perform or scale as well as an RDMA-based cluster will. However, this is going to be perfectly fine for many scenarios. Think about your daily drive to work. While you’d like to have a fast sports car for the drive in, it isn’t necessary for that particular need (maybe that’s a bad example — I’m still trying to convince my wife of that one). With pureScale, there are cases where availability is the predominant motivator for using it and there might not be a need to drive through massive amounts of transactions per second or scale up to tens of nodes. Your performance and scalability needs will dictate whether RDMA is required or not for your environment. By the way, you might see this feature referred to as pureScale “lite”. I’m still slowly warming up to that term, but the important thing is people know that “lite” doesn’t imply lower levels of availability.

With the ability to do this TCP/IP sockets-based communication between nodes, it also opens up more virtualization options. For example, DB2 pureScale can be implemented using TCP/IP sockets in both VMware (Linux) and KVM (Linux) on Intel, as well as in AIX LPARs on Power boxes. These virtualized environments provide a lower cost of entry and are perfect for development, production environments with moderate workloads, QA, or just getting yourself some hands-on experience with pureScale.

It’s also worth pointing out that DB2 pureScale now supports and is optimized for IBM’s new POWER8 platform.

Having all of these new deployment options changes the economics of continuous availability, allowing broad infrastructure choices at every price point.

One thing that all of this should show you is the continued focus and investment in the DB2 pureScale technology by IBM research and development. With all of the press and fanfare around BLU, people often ask me if this is at the expense of IBM’s other technologies such as pureScale. You can see that this is definitely not the case. In fact, if you happen to be at Insight 2014 (formerly known as IOD) in Las Vegas in October, or at IDUG EMEA in Prague in November, I’ll be giving a presentation on everything new for pureScale in DB2 10.5, up to and including the “Cancun Release”. It’s an impressive amount of features that’s hard to squeeze into an hour. 🙂

For more information on what’s new for pureScale and DB2 in general with this new release, check out the fix pack summary page in the DB2 Information Center.

Tweetchat on Fraud Prevention in Banking

By Radha Gowda
Technical Marketing, IBM Analytics

On August 7 ‘14, at 11 AM EDT, the IBM Data Management team is privileged to have Robert L. Palmer, James Kobielus, and Wilson Davis join us on a tweetchat to share their expertise on #FraudPrevention in banking. Some topics on which we shall be soliciting your opinions are:

  • Q1: Are fraudulent activities in banking increasing or decreasing? Why?
  • Q2: What are some key business impacts of fraud?
  • Q3: What measures can be taken to identify potential fraudulent transactions?
  • Q4: What analytics do you need to detect fraud?
  • Q5: What data sources can contribute to the analytics?
  • Q6: How can your systems analyze transactions as they occur?
  • Q7: How can new technologies such as in-memory analytics help in fraud detection?
  • Q8: Where can I learn more?

Here’s what you need to do to join our conversation to contribute or just listen:

  • Go to twubs.com or tweetdeck.com
  • Sign in with your twitter handle
  • Search on #FraudPrevention
  • A new window will open that makes it easy for you to follow and contribute.

If you plan to contribute to our tweetchat, please review the tips at slideshare since the chat can be very fast paced. Suggested resources relevant to the topic include:

  1. How to Mitigate Fraud and Cyber Threats with Big Data and Analytics
  2. IBM data management for banking
  3. Best practices to deploy IBM Banking Data Warehouse model to #IBMBLU for production
  4. Attract and retain customers with always-on digital mobile banking services
  5. Fight against fraud in real-time and save on operating expenses
  6. Customize offers to your clients with the data already at your fingertips
  7. World’s top 5 most secure bank is becoming more strategic and more profitable
  8. Regulatory reporting headaches? See how @Handelsbanken solved their reporting challenges

More about our panelists:

Robert L. Palmer (@bigdatabusiness), Global Banking Industry Marketing, Big Data, IBM

Bob’s expertise is applying B2B software to optimize key business processes.  He is a subject matter expert in financial services, and writes about business challenges, Big Data, analytics, CRM, cognitive computing, and information management.

James Kobielus (@jameskobielus), Senior Program Director, Big Data Analytics, IBM

James is a popular speaker and thought leader in big data, Hadoop, enterprise data warehousing, advanced analytics, business intelligence, data management and next best action technologies.

Wilson Davis (@wilsondavisibm), Executive Technical Consultant – Counter Fraud iCoC, IBM

Wilson’s specialties include financial and operational data analytics, counter-fraud and anti-money laundering, straight-through-processing, and game changing improvements in business processes and application systems for the financial services industry.

The data advantage: Creating value in today’s digital world

The IBM Institute for Business Value is looking to understand how organizations around the globe are creating business value from analytics. If you can spare a few minutes to participate in the survey, you’ll be among the first to receive a copy of the study when it is released in October 2014: 2014 Analytics Survey

Follow Radha on Twitter @radgo1

When Your Database Can’t Cut It, Your Business Suffers

By Larry Heathcote
Program Director, IBM Data Management

 

Your database is critical to your business. Applications depend on it. Business users depend on it. And when your database is not working well, your business suffers.

IBM DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

We’ve all heard the phrase “garbage in, garbage out,” and this is so true in today’s big data world. But it’s not just about good data; it’s also about the infrastructure that captures and delivers data to business applications and provides timely and actionable insights to those who need to understand, to make decisions, to act, to move the business forward.

 

It’s one thing to pull together a sandbox to examine new sources of data and write sophisticated algorithms that draw out useful insights. But it’s another matter to roll this out into production where Line of Business users depend on good data, reliable applications and insightful analytics. This is truly where the rubber meets the road – the production environment…and your database better be up to it.

Lenny Liebmann, InformationWeek Contributing Editor, and I recorded a webinar recently titled “Is Your Database Really Ready for Big Data.” And Lenny posted a blog talking about the role of DataOps in the modern data infrastructure. I’d like to extend this one more step and talk about the importance of your database in production. The best way I can do that is through some examples.

 

1: Speed of Deployment

ERP systems are vital to many companies for effective inventory management and efficient operations. It is important to make sure that these systems are well tuned, efficient and highly available, and when a change is needed that it be done quickly. Friedrich ran the SAP environment for a manufacturing company, and he was asked to improve the performance of applications that were used for inventory management and supply chain ops. More specifically, he needed to replace their production database with one that improved application performance but kept storage growth to a minimum. Knowing that time is money, his mission was to deploy the solution quickly, which he did… 3 hours up and running in a production environment with more than 80 percent data compression and 50x performance improvement. The business impact – inventory levels were optimized, operating costs were reduced and the supply chain became far more efficient.

 

2: Performance

Rajesh’s team needed to improve performance of an online sales portal that gave his company’s reps the ability to run sales and ERP reports from their tablets and mobile phones out in the field. Queries were taking 4-5 minutes to execute, and this simply was not acceptable – btw, impatience is a virtue for a sales rep. Rajesh found that the existing database was the bottleneck, so he replaced it. With less than 20 hours of work, it was up and running in production with a 96.5 percent reduction in query times. Can you guess the impact this had? Yep, sales volumes increased significantly, Rajesh’s team became heroes and the execs were happy. And, since reps were more productive, they were also more satisfied and rep turnover was reduced.

 

3: Reliability, Availability and Scalability

In today’s 24x7x365 world, transaction system downtime is just not an option. An insurance company was having issues with the performance, availability, reliability and scalability needed to support its rapid growth of insurance applications. Replacing its database not only increased application availability from 80 to 95 percent, but also delivered a dramatic improvement in data processing times even after a 4x growth in the number of concurrent jobs, and decreased total cost of ownership by 50 percent. The company also saw customer satisfaction and stickiness improve.

These significant results happened because these clients upgraded their core database to IBM DB2. DB2 offers high performance support for both transactional processing and speed-of-thought analytics, providing the right foundation for today’s and tomorrow’s needs.

To learn more, watch our webinar.

Follow Larry on Twitter at @larryheathcote

 

Join Larry and Lenny for a tweet chat on June 26 at 11 ET. Join the conversation using #bigdatamgmt. For the questions and more details, see: http://bit.ly/Jun26TweetChat

Simplifying Oracle Database Migrations

By Danny Arnold, Worldwide Competitive Enablement Team

As part of my role in IBM Information Management as a technical advocate for our DB2 for LUW (Linux, UNIX, Windows) product set, I often enter into discussions with clients that currently use Oracle Database.

With the unique technologies delivered in the DB2 10 releases (10.1 and 10.5), such as

  • temporal tables to allow queries against data at a specific point in time,
  • row and column access control (RCAC) to provide granular row- and column-level security that extends the traditional RDBMS table privileges for additional data security,
  • pureScale for near-continuously available database clusters,
  • the database partitioning feature (DPF) for parallel query processing against large data sets (hundreds of terabytes), and
  • the revolutionary new BLU Acceleration technology to allow analytic workloads to use column-organized tables that deliver performance orders of magnitude faster than conventional row-organized tables (a short sketch of this follows below),

many clients like the capabilities and technology that DB2 for LUW provides.
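To make that last bullet concrete, here is a minimal sketch of creating a column-organized table for BLU Acceleration using the ibm_db Python driver; the connection string, credentials and table are placeholders rather than anything from a real installation.

```python
import ibm_db  # IBM's DB2 driver for Python (pip install ibm_db)

# Placeholder connection details; adjust for your own instance.
conn = ibm_db.connect(
    "DATABASE=sample;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret", "", "")

# In DB2 10.5, ORGANIZE BY COLUMN creates a column-organized table,
# so analytic queries against it run through the BLU Acceleration runtime.
ibm_db.exec_immediate(conn, """
    CREATE TABLE sales_fact (
        sale_date DATE,
        store_id  INTEGER,
        amount    DECIMAL(12,2)
    ) ORGANIZE BY COLUMN
""")
ibm_db.close(conn)
```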

However, a key concern is the level of effort required to migrate an existing Oracle Database environment to DB2. Although DB2 provides Oracle compatibility and has had this capability built into the database engine since the DB2 9.7 release, there is still confusion among clients as to what this Oracle compatibility means in terms of migration effort. Today, DB2 provides a native Oracle PL/SQL procedural language compiler, support for Oracle-specific ANSI SQL language extensions, Oracle SQL functions, and Oracle-specific data types (such as NUMBER and VARCHAR2). This compatibility layer within DB2 allows many Oracle Database environments to be migrated to DB2 with minimal effort. Many stored procedures and application SQL statements used against Oracle Database can run unchanged against DB2, reducing both the migration effort and the migration risk; because the application does not have to be modified, the testing phase takes much less effort than it would for changed or heavily modified application SQL and stored procedures. Although the migration effort seems relatively straightforward, questions still come up with clients, and there is a need for a clear explanation of the Oracle Database to DB2 migration process.
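As a small sketch of what that compatibility looks like in practice, the Oracle-style DDL below, using NUMBER and VARCHAR2 types, runs unchanged on DB2 when the instance has Oracle compatibility enabled (the DB2_COMPATIBILITY_VECTOR=ORA registry setting, applied before the database is created). The driver usage and connection details are illustrative assumptions, not part of any specific migration.

```python
import ibm_db  # pip install ibm_db

# Assumes an instance prepared with: db2set DB2_COMPATIBILITY_VECTOR=ORA
# (before database creation); connection details are placeholders.
conn = ibm_db.connect(
    "DATABASE=orademo;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret", "", "")

# Oracle-style DDL: NUMBER and VARCHAR2 are accepted directly by DB2's
# compatibility layer, so a script like this needs no rewrite to migrate.
ibm_db.exec_immediate(conn, """
    CREATE TABLE accounts (
        acct_id  NUMBER(10)    NOT NULL PRIMARY KEY,
        owner    VARCHAR2(120) NOT NULL,
        balance  NUMBER(14,2)  DEFAULT 0
    )
""")
ibm_db.close(conn)
```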

Recently, a new solution brief entitled “Simplify your Oracle database migrations,” published by IBM Data Management, explains how DB2 and the PureData for Transactions appliance, built upon DB2 pureScale, can deliver a clustered database environment for migrating an Oracle database to DB2. The brief provides a clear and concise overview of what an Oracle to DB2 migration requires and of the assistance and tooling available from IBM to make a migration straightforward for a client’s environment, including a concise description of the IBM Database Conversion Workbench, which helps a client move their tables, stored procedures, and data from Oracle to DB2.

The fact that DB2 for LUW makes migrating from Oracle a task that takes minimal effort, due to the Oracle compatibility built into DB2, is complemented by the PureData for Transactions system. PureData for Transactions provides an integrated, pre-built DB2 pureScale environment that allows a pureScale instance and a DB2 clustered database to be ready for use in a matter of hours, simplifying the implementation and configuration experience for the client. Combining the ease of Oracle migration to DB2 with the rapid implementation and configuration possible with PureData for Transactions provides a winning combination for a client looking for a more cost-effective and available alternative to the Oracle Database.

Fraud detection? Not so elementary, my dear! (Part 2)


Radha Gowda, Technical Marketing, IBM Analytics

The first part of this blog gave an overview of the IBM Watson Foundations portfolio and DB2 solutions for financial fraud detection. In this part, we’ll go over the DB2 warehouse features that help detect fraud in near-real time.

Figure 1: DB2 warehouse for operational analytics


Data warehouses integrate data from one or more disparate sources to provide a single view of the business, and they make that single repository available to all levels of the business for analysis. To support today’s workloads, the data warehouse architecture must optimize both traditional deep analytic queries and shorter transactional-type queries. It must be able to scale out under data explosion without compromising either performance or storage. And it must have the capacity to load and update data in real time. DB2 for Linux, UNIX and Windows offers all these capabilities and more to help you build a scalable, high-performing warehouse for near-real-time fraud detection.

DB2 warehouse components are organized into six major categories, as shown in Figure 2. We shall discuss only the highlighted ones, which help make near-real-time fraud detection a reality.

Figure 2: Warehouse components available in DB2 advanced editions


As we discussed before, fraud detection is knowledge intensive. It involves sifting through vast amounts of data to identify and verify patterns, and constructing fraud models to help with real-time detection of fraudulent activity.

Embedded Analytics
DB2 offers embedded analytics in the form of OLAP and data mining.

Data Mining enables you to analyze patterns and make predictions. Unlike solutions that require end users to extract data from the warehouse, independently analyze it and then send the results back to the warehouse, DB2 provides embedded data mining, modeling, and scoring capabilities.

Modeling – the process starts with historical data being gathered and put through a series of mathematical functions to classify, cluster and segment the data. Modeling automatically finds associations and business rules in the data that may signify interesting patterns (imagine customers’ credit card purchasing patterns). The business rules are then collected together into a model, which can contain anywhere from a few rules to tens of thousands.

Visualization helps analysts evaluate the business rules to make sure that they are accurate.

Scoring involves applying the verified business rules to current data to help predict, in real time, which transactions are likely to be fraudulent.

For example, consider credit card spending patterns outside the norm. While outlier rules (detecting deviations in large data sets) can be applied to a banking transaction when it enters the system to help predict whether it is fraudulent, outlier handling is not usually automatic. An expert needs to take a closer look to decide whether to take action. This is where Cognos comes to help: it generates reports that visualize the outliers so a human expert can understand the nature of an outlier.
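To make the rule-scoring idea concrete, here is a purely conceptual Python illustration of rule-based outlier flagging; it is not DB2’s embedded mining interface, and the rules and thresholds are invented. A “model” can be thought of as a set of business rules applied to each incoming transaction, with anything flagged routed to an expert for review.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    card_id: str
    amount: float
    country: str

# Hypothetical rules of the kind a mining model might surface from
# historical purchasing patterns; the thresholds here are made up.
RULES = [
    ("amount far above this card's norm", lambda t: t.amount > 5000),
    ("purchase outside the home country", lambda t: t.country != "US"),
]

def score(txn: Transaction) -> list[str]:
    """Return the name of every rule the transaction trips."""
    return [name for name, rule in RULES if rule(txn)]

# A transaction tripping both rules would be surfaced for expert review.
print(score(Transaction(card_id="c-1001", amount=7200.0, country="BR")))
```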

DB2 supports standard data mining model algorithms such as clustering, associations, classification and prediction; additional models may be imported in the industry-standard Predictive Model Markup Language (PMML) format from other PMML-compliant data mining applications, including SAS and SPSS. This capability enables high-volume, high-speed, parallelized scoring of data in DB2 using third-party models.

Cubing Services provide decision makers with a multidimensional view of data stored in a relational database. It supports OLAP capabilities within the data warehouse and simplifies queries that run against large and complex data stores. The multidimensional view of data leads to easier discovery and understanding of the relationships in your data for better business decisions. In addition, Cubing Services cubes are first-class data providers to the Cognos Business Intelligence platform for incorporating predictive and analytic insights into Cognos reports.

Unstructured Data – up to 80 percent of the data within an organization is unstructured. DB2 can extract information from your unstructured business text and correlate it with your structured data to increase business insight into customer issues. DB2 also allows you to process unstructured data and create multidimensional reports using OLAP capabilities.  In addition, unstructured data can be integrated into data mining models to broaden predictive capabilities.

DB2 Spatial Extender allows you to store, manage, and analyze spatial data in DB2, which, along with business data in the data warehouse, helps with fraud analysis.

Temporal Data helps you implement time-based queries quickly and easily. Historical trend analysis and point-in-time queries can be constructed by using the history tables and SQL period specifications that are part of the database engine.
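For instance, a point-in-time query against a system-period temporal table uses the FOR SYSTEM_TIME clause and is answered from the base table plus its history table. The sketch below is hedged: the accounts table, its history table and the connection details are invented for illustration.

```python
import ibm_db  # pip install ibm_db

# Placeholder connection; assumes an existing system-period temporal
# table ACCOUNTS with an associated history table.
conn = ibm_db.connect(
    "DATABASE=bankdb;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret", "", "")

# What did account 1001 look like on January 1, 2014?
stmt = ibm_db.exec_immediate(conn, """
    SELECT acct_id, balance
    FROM accounts FOR SYSTEM_TIME AS OF '2014-01-01-00.00.00'
    WHERE acct_id = 1001
""")
print(ibm_db.fetch_assoc(stmt))
ibm_db.close(conn)
```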

Performance Optimization

Database Partitioning Feature (DPF) for row-based data stores – as data volume increases over time, the data can become skewed and fragmented, resulting in decreased performance. DPF distributes table data across multiple database partitions in a shared-nothing manner in which each database partition “owns” a subset of the data. It enables massively parallel processing by transparently splitting the database across multiple partitions and using the power of multiple servers to satisfy requests for large amounts of information. This architecture allows databases to grow very large to support true enterprise data warehouses.
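As a brief sketch (all names below are placeholders), a table in a DPF-enabled database declares its distribution key with DISTRIBUTE BY HASH, and DB2 then hashes that key to spread rows across the database partitions:

```python
import ibm_db  # pip install ibm_db

# Placeholder connection to a DPF-enabled warehouse database.
conn = ibm_db.connect(
    "DATABASE=edw;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret", "", "")

# Rows are hash-distributed on acct_id, so each partition "owns" a subset
# of accounts and large scans run in parallel across all partitions.
ibm_db.exec_immediate(conn, """
    CREATE TABLE txn_history (
        txn_id  BIGINT  NOT NULL,
        acct_id INTEGER NOT NULL,
        txn_ts  TIMESTAMP,
        amount  DECIMAL(12,2)
    ) DISTRIBUTE BY HASH (acct_id)
""")
ibm_db.close(conn)
```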

Data Movement and Transformation

Continuous Data Ingest (CDI) allows business-critical data to be continually loaded into the warehouse without the latency associated with periodic batch loading. It allows the warehouse to reflect the most up-to-date information and can help you make timely and accurate decisions. Consider, for example, receiving a lost credit card log (a potential credit card fraud alert) from the call center. Such an event is ingested into the warehouse immediately rather than waiting for a batch load on predefined intervals. Using such contextual information along with account transaction data can help in real-time fraud detection.

In fact, after experiencing just how beneficial the CDI feature is, some of our clients have renamed their Extract, Transform, and Load (ETL) processes to Extract, Transform, and Ingest (ETI).
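Operationally, continuous ingest is driven by the INGEST command-line utility, which keeps the target table fully available while rows stream in. The sketch below is an assumption-laden illustration: the database name, feed file and table are invented, and it presumes a configured DB2 command-line environment on the host.

```python
import subprocess

# Invoke the DB2 CLP to connect and run INGEST against a delimited feed
# file; paths and names below are placeholders.
cmd = (
    'db2 CONNECT TO bankdb && '
    'db2 "INGEST FROM FILE /feeds/card_events.del '
    'FORMAT DELIMITED INSERT INTO card_events"'
)
subprocess.run(["bash", "-lc", cmd], check=True)
```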

All these features are available in DB2 advanced editions and IBM PureData System for Operational Analytics to help you deliver near-real-time insights.

Now, are you meeting your service level agreements for performance while trying to prevent fraud in real time? Not sure? Why don’t you give DB2 with BLU Acceleration or other IBM Data Management solutions a try? Perhaps they can help you achieve your business objectives.

Yes, fraud detection is not so elementary. But with the right clues, I mean with the right software and tools, it could be made elementary.

Follow Radha on Twitter @radgo1

Read the IBM Data Management for Banking whitepaper for more information on how IBM can help banks gain a competitive edge!