DB2 Direct: A new way of consuming your Database


by Phillip Downey, WW Program Director, IBM Analytics Platform Hybrid Cloud Strategy

 

In DB2 11.1, we introduced two new, easy-to-consume DB2 Direct editions: DB2 Direct Advanced and DB2 Direct Standard. Both editions bring a new dimension to the database offerings for small and large enterprise clients looking for the flexibility and scalability of the hybrid cloud. They can be acquired directly online via Passport Advantage and offer a simplified licensing metric and a monthly subscription pricing model that are ideal for private, public and hybrid cloud deployments.

Packaging

  • DB2 Direct Advanced Edition

The DB2 Direct Advanced Edition has all the DB2 server and client features of DB2 Advanced Server Edition, including encryption, multitenant deployments, adaptive compression, BLU Acceleration, SQL compatibility with PL/SQL, Data Server Manager, and the pureScale and database partitioning feature options. It also includes federation capabilities providing access to non-DB2 data sources such as Oracle, Microsoft SQL Server, Teradata, Hadoop, Netezza, Spark and other solutions.

Advanced Federation Capabilities


It also includes 10 user licenses of InfoSphere Data Architect per installation for designing and deploying database implementations.

  • DB2 Direct Standard Edition

DB2 Direct Standard Edition is modeled on DB2 Workgroup Server Edition. It provides encryption, pureScale for continuously available HA deployments, multitenant deployments, SQL compatibility with PL/SQL, Data Server Manager Base Edition, table partitioning, multidimensional clustering, parallel query and concurrent connection pooling. It is limited to 16 cores and 128 GB of RAM, and is ideal for small to mid-sized database applications, providing enterprise-level availability, query performance and security as well as unlimited database size.

You can take advantage of the new subscription model to lower costs and enjoy licensing flexibility for on-premises and cloud deployments:

Licensing Metrics:

Virtual Processor Core (VPC) charge metric

  • Virtual processor core licensing gives you flexible, simplified sub-capacity licensing options that enable you to optimize your licensing to meet your business requirements.
  • There are two licensing scenarios you can apply:
    • License the sum of all available virtual processor cores on all virtual servers on which the Direct edition is installed.
    • OR, when you can identify the physical server and it is more cost-effective to do so, license all available processor cores on the physical server, regardless of the number of virtual machines on the system.
  • Benefits: this makes licensing simple for private and public cloud deployments alike and enables you to optimize your licensing.

Pricing Structure:

Subscription based pricing

      • DB2 Direct Advanced Edition: $354 USD per month per VPC
      • DB2 Direct Standard Edition: $135 USD per month per VPC

(Prices as of May 10th, 2016 in the United States.)

Each deployment requires a minimum of 2 VPCs, except in the case of warm standby, which requires only one VPC.
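As a quick illustration of how the subscription math works out (a sketch only, using the list prices and the minimum-VPC rule stated above):

```python
# Hypothetical sketch: estimate the monthly subscription cost for a
# DB2 Direct deployment, using the list prices quoted above.
# The 2-VPC minimum (1 VPC for warm standby) is applied automatically.

PRICE_PER_VPC = {"advanced": 354, "standard": 135}  # USD/month, May 2016 US prices

def monthly_cost(edition, vpcs, warm_standby=False):
    """Return the monthly USD cost for a deployment."""
    minimum = 1 if warm_standby else 2
    billable = max(vpcs, minimum)
    return billable * PRICE_PER_VPC[edition]

# An 8-VPC DB2 Direct Standard deployment:
print(monthly_cost("standard", 8))                   # 1080
# A warm-standby server needs only 1 VPC:
print(monthly_cost("advanced", 1, warm_standby=True))  # 354
```

Note that a 1-VPC active deployment would still be billed at the 2-VPC minimum.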

These editions are ideal for customers who want to move to a subscription-based model on their private cloud or with a third-party vendor (host) and pay as their applications grow in size. They are also ideal for ISVs who offer their applications to customers on a subscription model and want an easy-to-order database at competitive subscription pricing.

Understanding the Virtual Processor Core Metric

Virtual Processor Cores are defined to simplify licensing in private and public cloud deployment environments. You can deploy DB2 licenses with confidence even when you are not fully aware of the underlying infrastructure, and you can easily analyze your licensing requirements, including in sub-capacity situations.

A Virtual Processor Core is a processor core in an unpartitioned physical server, or a virtual core assigned to a virtual server. The licensee must obtain entitlement for each Virtual Processor Core made available to the Program.

For each physical server, the licensee must have sufficient entitlements for the lesser of:

  1. the sum of all available Virtual Processor Cores on all Virtual Servers made available to the Program or
  2. all available Processor Cores on the Physical Server.

Other key Virtual Processor Core considerations:

    • If the number of VPCs is greater than the physical cores, then you only need to license the number of physical cores on the machine
    • Minimum of 2 VPCs per deployment (1 VPC for idle/warm standby)
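The "lesser of" rule above can be sketched as a small calculation (an illustration only, not an official licensing tool):

```python
# Sketch of the entitlement rule described above: for each physical server,
# license the lesser of (a) the sum of VPCs across all virtual servers
# running the program, and (b) the physical cores on the server.
# The separate minimum of 2 VPCs per deployment still applies.

def required_vpcs(vm_vcpus, physical_cores):
    """vm_vcpus: list of vCPU counts, one per VM running a DB2 Direct edition."""
    by_virtual = sum(vm_vcpus)
    return min(by_virtual, physical_cores)

# 11 VMs of 6 vCPUs each on a 16-core host: capped at the 16 physical cores.
print(required_vpcs([6] * 11, 16))  # 16
# Two 4-vCPU VMs on the same host: the virtual sum (8) is the lesser number.
print(required_vpcs([4, 4], 16))    # 8
```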

You can determine the VPC requirement through DB2 itself. Run the following command on each physical or logical server where DB2 is installed, then take the OnlineCPU count and divide it by the HMTDegree value (threading degree) to get the count of virtual CPUs present.

db2pd -osinfo

An example of this in a cloud deployment:

  • A customer buys a virtual cloud server as a service on an internal private cloud or from an MSP such as SoftLayer, Azure, Amazon or Rackspace.
  • They purchase an 8-core virtual CPU environment.
  • The customer runs db2pd -osinfo on the machine, which shows an HMTDegree of 1 and an OnlineCPU count of 8.

The customer must license 8 VPCs for this environment.
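The arithmetic in this example is simply the OnlineCPU count divided by the HMTDegree, which can be sketched as:

```python
# Virtual CPUs present = OnlineCPU / HMTDegree, using the fields
# reported by db2pd -osinfo as described above.

def vcpus_from_osinfo(online_cpu, hmt_degree):
    """Count of virtual CPUs made available to DB2 on this server."""
    return online_cpu // hmt_degree

# Cloud example above: OnlineCPU of 8, HMTDegree of 1 -> 8 VPCs to license.
print(vcpus_from_osinfo(8, 1))   # 8
# A host reporting 16 online CPUs with a threading degree of 2 has 8 cores.
print(vcpus_from_osinfo(16, 2))  # 8
```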

An example of a private cloud deployment using VMware

  • A customer creates multiple VMware hosts on one server to run DB2. The server is a 2-socket server with 8 cores per processor (16 physical cores), with hyper-threading turned on to a degree of 2. Each of the 11 VMs deployed reports 6 virtual processors.
  • The customer runs db2pd -osinfo across all VMware hosts, reporting a total OnlineCPU count of 64 across the 11 virtual machines (HMTDegree of 1 for all VMs).

Because the hardware can be physically identified as a 16-core server, the customer only has to license 16 VPCs, not 64 as some competitor programs would require, since the rule applies the lesser of the two numbers.

 

IBM Insight 2015 – A guide to the DB2 sessions


By Sajan Kuttappa, Marketing Manager, Analytics Platform Services

In just a few weeks, thousands of people will converge on Las Vegas for the much talked-about IBM Insight 2015 conference at Mandalay Bay.

If you are a DB2 professional, an information architect or a database professional interested in the latest in in-memory technology, DB2 for SAP workloads and database administration tools, there is an excellent lineup of sessions by subject matter experts planned for you at the Insight conference. This article highlights the topics that will be covered so that you can create your agenda in advance.

IBM DB2 continues to be the best database option for SAP environments. Experts will share DB2 BLU Best Practices for SAP systems and the latest features of DB2 that enable in-memory, high-availability and scalability for SAP. For those interested in new deployment options like Cloud, we recommend sessions covering IBM’s portfolio of Cloud solutions for SAP on DB2 customers. The Hands-on-Labs at the conference will showcase how to best leverage DB2 BLU for SAP Business Warehouse.

Don’t miss the many client stories about how businesses benefited from DB2’s in-memory technology (BLU Acceleration) to enable speed-of-thought analytics for their business users; clients will share their lessons learned and best practices, and talk about enhancements and tips for DB2 LUW and DB2 BLU. If you are planning for increased workloads, look out for the session on scaling up BLU Acceleration in a high-concurrency environment.
Learn more about upgrading to Data Server Manager for DB2 to simplify database administration, optimize performance with expert advice and reduce costs across the enterprise. You can also hear how our clients achieved cost savings and reduced time-to-market by migrating to DB2 LUW. Also on the menu is a database administration crash course for DB2 LUW conducted by top IBM Champions in the field.

There is a lot that will take place in Las Vegas. A week of high-quality educational sessions, hands-on labs and panel discussions awaits, so attendees can walk away with better insights into how DB2 integrates into big data analysis, how it delivers in the cloud and more. We look forward to meeting you in Las Vegas for Insight 2015; and whatever happens in Vegas (at Insight) should definitely not stay in Vegas!

A list of all the sessions can be found at the links below:

DB2 for SAP:   http://bit.ly/db2sapatinsight
Core DB2 for the enterprise: http://bit.ly/db2coreatinsight
DB2 with BLU Acceleration: http://bit.ly/db2bluatinsight
DB2 LUW tools / Administration: http://bit.ly/db2toolsatinsight

So start planning your agenda for Insight 2015.

Follow us on Twitter (@IBM_DB2) and Facebook (IBM DB2) for regular updates about the conference and key sessions.

Continuous availability benefits of pureScale now available in a new low cost DB2 offering

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Today, IBM has announced a set of new add-on offerings for DB2, which includes the IBM DB2 Performance Management Offering, IBM DB2 BLU Acceleration In-Memory Offering, IBM DB2 Encryption Offering, and the IBM DB2 Business Application Continuity Offering. More details on these offerings can be found here. Generally speaking, the intention of these offerings is to make some of the significant capabilities and features of DB2 available as low cost options for those not using the advanced editions of DB2, which already include these capabilities.

If you’ve read any of my past posts you know that I’m a big proponent of DB2’s pureScale technology. And staying true to form, the focus of my post here is on the IBM DB2 Business Application Continuity (BAC) offering, which is a new deployment and licensing model for pureScale. This applies to DB2 10.5 starting with fix pack 5 (the current fix pack level released in December 2014).

For more information on DB2 pureScale itself, I suggest taking a look here and here. But to boil it down to a few major points, it’s an active/active, shared-data clustering solution that provides continuous availability in the event of both planned and unplanned outages. pureScale is available in DB2 Advanced Workgroup Server Edition (AWSE) and Advanced Enterprise Server Edition (AESE). Its architecture consists of the Cluster Caching Facilities (CF), which provide centralized locking and data page management for the cluster, and DB2 members, which service the database transaction requests from applications. This multi-member architecture allows workloads to scale out and be balanced across up to 128 members.

While that scale-out capability is attractive to many people, some have told me that they love the availability that pureScale provides but that they don’t have the scalability needs for it. And in this case they can’t justify the cost of the additional software licenses to have this active/active type of environment – or to even move from their current DB2 Workgroup Server Edition (WSE) or Enterprise Server Edition (ESE) licensing up to the corresponding advanced edition that contains pureScale.

This is where BAC comes in. With BAC – which is a purchasable option on top of WSE and ESE – you can create a two member pureScale cluster. The difference, and what makes this offering interesting and attractive for some, is that the cluster can be used in an active/active way, but it’s licensed as an active/passive cluster. Specifically, one member of the cluster is used to run your application workloads and the other member is available as a standby in case that primary member fails or has to be brought down for maintenance. But isn’t that passive? No… and the reason is that this secondary member doesn’t just sit idle waiting for that to happen. Under the BAC offering terms, you are also allowed to run administrative operations on this secondary “admin” member. In fact, you are allowed to do all of the following types of work on this member:

  • Backup, Restore
  • Runstats
  • Reorg
  • Monitoring (including DB2 Explain and any diagnostic or problem determination activities)
  • Execution of DDL
  • Database Manager and database configuration updates
  • Log based capture utilities for the purpose of data capture
  • Security administration and setup

By offloading this administrative work from the primary member, you leave it with more capacity to run your application workloads. And with BAC, you fully license only the one primary member where your applications are running (for either WSE or ESE, plus BAC). The licensing of the secondary member, on the other hand, falls under DB2’s warm/idle standby licensing, which means a much reduced cost (e.g. for PVU pricing, the secondary member would only be 100 PVUs of WSE or ESE plus 100 PVUs of BAC). For more details on actual software costs, please talk to your friendly neighborhood IBM rep.
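The asymmetry in that PVU example can be sketched as follows (an illustration only; the 70 PVUs/core rating is hypothetical, and actual PVU ratings depend on the processor, so talk to IBM for real numbers):

```python
# Sketch of the BAC licensing model described above: the primary member is
# fully licensed (edition + BAC across all its PVUs), while the secondary
# member falls under warm/idle standby licensing, which in the PVU example
# above is a flat 100 PVUs of the edition plus 100 PVUs of BAC.

STANDBY_PVUS = 100  # warm/idle standby rate from the example above

def bac_pvus(primary_cores, pvu_per_core):
    """Return the PVU entitlements needed for a 2-member BAC cluster."""
    primary = primary_cores * pvu_per_core
    return {"primary_edition_pvus": primary,
            "primary_bac_pvus": primary,
            "standby_edition_pvus": STANDBY_PVUS,
            "standby_bac_pvus": STANDBY_PVUS}

# An 8-core primary member at a hypothetical 70 PVUs per core:
print(bac_pvus(8, 70))
```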

And because this is still pureScale at work here, if there’s a failure of the primary member, the application workloads will automatically fail over to the secondary member. Likewise, the database will stay up and remain accessible to applications on the secondary member when the primary member undergoes maintenance – like during a DB2 fix pack update. In both of these cases the workload is allowed to run on the secondary member, and when the primary member is brought back up, the workloads will fail back to it. All of the great availability characteristics of pureScale at a lower cost!

Contrast this with something like Oracle RAC One Node, which has some characteristics similar to IBM DB2 BAC: only the primary node (instance) in Oracle RAC One Node is active, and the standby node is not. In fact, the standby isn’t even started until the work has to fail over, so there’s a period of time when the cluster is completely unavailable. That means a longer outage, slower recovery times, and no ability to run administrative work on the idle node as you can with BAC.

Sounds great, right?

And for those of you that do want the additional scale-out capability, but like the idea of having that standby admin member at a reduced cost, IBM has thought of you too. Using AWSE or AESE (the BAC offering isn’t involved here), you can implement a pureScale cluster with multiple primary members with a single standby admin member. The multiple primary members are each fully licensed for AWSE or AESE, but the single standby admin member is only licensed as a passive server in the cluster (again, using the PVU example that would only be 100 PVUs of either AWSE or AESE). In this case, you can do any of that administrative work previously described on the standby member, and it’s also available for workloads to failover to if there are outages for one or more of the primary members in the cluster.

Happy clustering!

Learn about DB2 at Kolkata India DB2 user event

There is nothing more exciting than hearing how to revolutionize your business with DB2 for Linux, UNIX and Windows, so here is your chance to unlock the best practices and learn from the experts. I encourage you to weave this event into your busy schedule this week. I promise you won’t be disappointed!
Join technical experts from TCS, Capgemini and IBM to learn how to maximize your IT opportunities by mastering DB2 LUW locking.
During this half-day event, you will learn how to make the right decisions for your current and future architecture.

There is NO REGISTRATION FEE to attend this non-IBM event, and lunch on the event day will be sponsored by IBM.
When: 15th Nov 2014 (Saturday), 9:30 AM to 3:30 PM
Venue: Techno India Campus, Salt Lake, Sector V, Kolkata, India
Who can join: Anyone who is interested in DB2 or working on DB2
How to book your seat: Send a mail from your official mail ID to kidug.india@gmail.com with the subject line “I will attend”

Leading Speakers from : Capgemini, MJunction, TCS, IBM

Mastering the DB2 10.1 Certification Exam – Part 2: Security

It’s hard to argue against the benefits of becoming a DB2 Certified Professional. Aside from gaining a better understanding of DB2, it helps keep you up to date with the latest versions of the product.  It also gives you professional credentials that you can put on your resume to show that you know what you say you know.

But many people are reluctant to put in the time and effort it takes to prepare for the exams. Some just don’t like taking tests, others don’t feel they have the time or money to prepare. That’s where we come in – the DB2 team has put together a great list of resources to help you conquer the certification exams.

We caught up with Anas Mosaad and Mohamed El-Bishbeashy, who are part of the DB2 team that developed the DB2 10.1 Fundamentals Certification Exam 610 Prep – a six-part tutorial series aimed at helping DBAs prepare for the certification exam.

What products are focused on in this tutorial?

In this tutorial we’ve focused completely on DB2 10.1 LUW.

Tell us a little about what students can hope to learn in this tutorial.

It is the second in a series of six tutorials designed to help you prepare for the DB2 Fundamentals Exam (610). It puts in your hands all the details needed to successfully pass the security-related questions in the exam. It introduces the concepts of authentication, authorization, privileges, and roles as they relate to DB2 10.1, and also introduces granular access control and trusted contexts.

Why should a DBA be interested in this certification?

IBM professional certifications are recognized worldwide, so you will get recognized! In addition, this one is the first milestone in the advanced DB2 certification paths (development, DBA and advanced DBA). It acknowledges that you are knowledgeable about the fundamental concepts of DB2 10.1. It shows that you have an in-depth knowledge of the basic to intermediate tasks required in day-to-day administration, know basic SQL (Structured Query Language), understand which additional products are available with DB2 10.1, understand how to create databases and database objects, and have a basic knowledge of database security and transaction isolation.

Do you have any special tips?

Absolutely, here are a few of our favorite tips for preparing for the certification exam:

  • Practice with DB2
  • If you don’t have access to DB2, download the fully functional DB2 Express-C for free
  • Read the whole tutorial before taking the exam
  • Be a friend of the DB2 Knowledge Center (formerly the Information Center)
  • When in doubt, don’t hesitate, post and collaborate in the forums.

For more information:

DB2 10.1 fundamentals certification exam 610 prep, Part 2: DB2 security

The entire series of tutorials for Exam 610: DB2 Fundamentals includes the following:
Part 1: Planning
Part 2: DB2 security
Part 3: Working with databases and database objects
Part 4: Working with DB2 Data using SQL
Part 5: Working with tables, views, and indexes
Part 6: Data concurrency

About the authors:
Anas Mosaad, a DB2 solutions migration consultant with IBM Egypt, has more than eight years of experience in the software development industry. He is a member of IBM’s Information Management Technology Ecosystem Team focusing on enabling and porting customer, business partner, and ISV solutions to the IBM Information Management portfolio, which includes DB2, Netezza, and BigInsights. Anas’ expertise includes portal and J2EE, database design, tuning, and database application development.

Mohamed El-Bishbeashy is an IM specialist for IBM Cairo Technology Development Center (C-TDC), Software Group. He has 12+ years of experience in the software development industry (8 of those are with IBM). His technical experience includes application and product development, DB2 administration, and persistence layer design and development. Mohamed is an IBM Certified Advanced DBA and IBM Certified Application Developer. He also has experience in other IM areas including PureData Systems for Analytics (Netezza), BigInsights, and InfoSphere Information Server.

Balluff loves BLU Acceleration too

By Cassandra Desens
IBM Software Group, Information Management  

BLU Acceleration is a pretty darn exciting advancement in database technology. As a marketing professional, I can tell you why it’s cool:
BLU provides instant insight from real-time operational data,
BLU provides breakthrough performance without the constraints of other in-memory solutions,
BLU provides simplicity with a load-and-go setup,
etcetera, etcetera… you get the point.

You can read our brochures and watch our videos to hear how DB2 with BLU Acceleration will transform your business. We think it’s the best thing since sliced bread because we invented it. But is it all it’s cracked up to be? The answer is YES.

Clients all over the world are sharing how BLU Acceleration made a huge, positive difference to their business. Hearing customer stories puts our product claims into perspective. Success stories give us the ultimate answer to the elusive question “How does this relate to me and my business?”. Which is why I want to share with you one of our most recent stories: Balluff.

Balluff is a worldwide company with headquarters in Germany. They have over 50 years of sensor experience and are considered a world leader and one of the most efficient manufacturers of sensor technology.  Balluff relies on SAP solutions to manage their business, including SAP Business Warehouse for their data analysis and reporting.

Over the last few years Balluff experienced significant growth, which resulted in slowed data delivery. As Bernhard Herzog, Team Manager Information Technology SAP at Balluff, put it: “Without timely, accurate information we risked making poor investment decisions, and were unable to deliver the best possible service to our customers.”

The company sought a solution that would transform the speed and reliability of their information management system. They chose DB2 with BLU Acceleration to accelerate access to their enormous amount of data. With BLU Acceleration Balluff achieved:

  • Reduced reporting time for individual reports by up to 98%
  • Reduced backup data volumes by 30%
  • Improved batch-mode data processing by 25%
  • A swift transition with no customization needed; Balluff transferred 1.5 terabytes of data within 17 hours with no downtime

These improvements have a direct impact on their business. As Bernhard Herzog put it, “Today, sales staff have immediate access to real-time information about customer turnover and other important indicators. With faster access to key business data, sales managers at Balluff can gain a better overview, sales reps can improve customer service and the company can increase sales”.

Impressive, right? While you could argue it’s no sliced bread, it certainly is a technology that is revolutionizing reporting and analytics, and it is worth a try. Click here for more information about DB2 with BLU Acceleration and to take it for a test drive.

_________________________________________________________________

For the full success story, click here to read the Balluff IBM Case Study
You can also click here to read Balluff’s success as told by ComputerWoche (Computer World Germany). Open in Google Chrome for a translation option.

Exclusive Opportunity to Influence IBM Product Usability: Looking for Participants for Usability Test Sessions – Data Warehousing and Analytics

By Arno C. Huang, CPE
Designer, IBM Information Management Design
IBM Design: making the user the center of our products

The IBM Design Team is seeking people with a variety of database, data warehousing and analytics backgrounds to participate in usability test sessions. We are currently looking for people who work in one of the following roles: DBA, Architect, Data Scientist, Business Analyst or Developer. As a test participant, you will provide your feedback about current or future designs we are considering, thus making an impact on the design of an IBM product and letting us know what is important to you.

Participating in a study typically consists of a web conference or on-site meeting scheduled around your availability. IBM will provide you with an honorarium for your participation. There are several upcoming sessions, so if you’re interested, we’ll help you find a session that best suits your schedule. If you are interested, please contact Arno C. Huang at achuang@us.ibm.com

Troubles Are Out of Reach With Instant Insights

By Radha Gowda
Technical Marketing, IBM Analytics

Bet you have been hearing a lot about shadow tables in the DB2 “Cancun Release” lately. Umm… do shadow and Cancun remind you of “On the Beach” by Cliff Richard and the Shadows? Seriously, DB2 shadow tables can make you dance to rock ’n’ roll on the beach, because you will be trouble free with real-time insights into your operations and, of course, lots of free time.

What is a shadow table?

Shadow tables have been around since the beginning of modern computing, primarily for improving performance. So what does the DB2 shadow table offer? The best of both the OLTP and OLAP worlds! You can now run your analytic reports directly in the OLTP environment with better performance.

Typically, organizations have separate OLTP and OLAP environments, either due to resource constraints or to ensure the best OLTP performance. The front-end OLTP workload is characterized by very small, high-volume transactions, with indexes created to improve performance. In contrast, the back-end OLAP workload has long-running complex transactions that are relatively small in number; indexes are created here too, but they may differ from the OLTP indexes. Of course, an ETL operation must transfer data from the OLTP database to the OLAP data mart/warehouse at time intervals that may vary from minutes to days.

DB2 can help you simplify your infrastructure and operations with shadow tables. A shadow table is a column-organized copy of a row-organized table within the OLTP environment, and it may include all or a subset of the source table’s columns. Because the table is column organized, you get the enhanced performance that BLU Acceleration provides for analytic queries.

How do shadow tables work?


A shadow table is implemented as a materialized query table (MQT) that is maintained by replication. IBM InfoSphere Change Data Capture for DB2, available in the advanced editions, maintains shadow tables through automatic, incremental synchronization with the row-organized source tables.

While all applications can access the row-organized table by default, the DB2 optimizer performs latency-based routing to determine whether a query should be routed to the shadow table or to the row-organized source.
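Conceptually, the routing decision can be pictured like this (a simplified sketch, not the actual DB2 implementation; in DB2 the tolerance is controlled through session settings rather than a function argument):

```python
# Simplified sketch of latency-based routing for shadow tables.
# An analytic query is routed to the column-organized shadow table only
# when replication latency is within the allowed limit; otherwise it runs
# against the row-organized source table, as OLTP queries always do.

def route_query(is_analytic, replication_latency_s, latency_limit_s):
    """Return which copy of the table a query should use."""
    if is_analytic and replication_latency_s <= latency_limit_s:
        return "shadow table (column-organized)"
    return "source table (row-organized)"

print(route_query(True, 2.0, 10.0))   # shadow table (column-organized)
print(route_query(True, 30.0, 10.0))  # source table (row-organized)
print(route_query(False, 2.0, 10.0))  # source table (row-organized)
```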

A truly flexible and trouble-free OLTP world

Shadow tables offer the incredible speed you have come to expect from BLU Acceleration while the source tables remain row-organized to best suit OLTP operations.  In fact, with shadow tables, the performance of analytical queries can improve by 10x or more, with equal or greater transactional performance*.

With instant insight into “as it happens” data for all your questions, and all the free time you’ll have with no more indexing and tuning, what’s not to like? Try DB2 today.

* Based on internal IBM testing of sample transactional and analytic workloads by replacing 4 secondary analytical indexes in the transactional environment with BLU Shadow Tables. Performance improvement figures are cumulative of all queries in the workload. Individual results will vary depending on individual workloads, configurations and conditions.

Is Your Database a Hero or a Hindrance?

Kelly Schlamb
DB2 pureScale and PureData Systems Specialist, IBM

Here’s a big question for you – Is your database a hero or a hindrance? In other words, is your database environment one that’s helping your organization meet your performance, scalability, and availability needs or is it holding you back from meeting your SLAs and keeping up with ever changing business needs?

Join me for an Information Week webinar on this topic next week — Thursday, September 4th at 12 pm EDT — where I’ll be talking about these types of challenges faced by IT organizations and how DB2 has the capabilities to address them. News about some of these capabilities will be hot off the press, so you won’t want to miss it.

Click here to register


Steps toward the Future: How IBM DB2 is changing the Game

Tori McClellan
Super Awesome Social Media Intern

 

Welcome to the New Age of database technology!

IBM DB2 with BLU Acceleration changes the game for in-memory computing. Given the importance of in-memory computing, we created a dedicated website to take you through all the details, references, and more: www.ibmbluhub.com! This website is in place to help clients and prospects understand what next-gen in-memory computing can do for them and why IBM BLU is the ideal in-memory database to deliver fast answers.

A few examples of how IBM BLU has helped other clients find their ideal balance between speed and quality:

  1. Regulatory reporting is a huge challenge for all banks. Handelsbanken, one of the most profitable banks in the world, is currently producing reports monthly but is expected to produce them daily in the near future. DB2 with BLU Acceleration has helped Handelsbanken analysts get the data they need for daily reports via its columnar store. Learn more by watching this video: http://bit.ly/1u7urAA
  2. Deploying DB2 with BLU Acceleration is simple – with only a handful of commands, you can turn on analytics mode, create a new database or auto-configure an existing one to make the best use of your hardware for analytics, and then load the data. Learn more from this IBM Redbook, which introduces the concepts of DB2 with BLU Acceleration from the ground up and describes the technologies that work hand-in-hand with BLU Acceleration: Architecting and Deploying IBM DB2 with BLU Acceleration in Your Analytical Environment.
  3.  Get the FACTS and stay current by subscribing to the ibmbluhub.com newsletter.

– IBM DB2 with BLU Acceleration is a revolutionary technology and delivers breakthrough performance improvements for analytic queries by using dynamic in-memory columnar technologies.

– Unlike other vendor solutions, BLU Acceleration allows the unified computing of online transaction processing (OLTP) and analytics data inside a single database, removing barriers and accelerating results for users. With observed hundredfold improvements in query response time, BLU Acceleration provides a simple, fast, and easy-to-use solution for the needs of today’s organizations; quick access to business answers can be used to gain a competitive edge, lower costs, and more.

– Subscribe to the newsletter to continue learning about this hot in-memory database.  You will receive a periodic iNews email, which links to what’s new.  Just click and learn: http://www.ibmbluhub.com/blu-inews/


If this information suits your needs, be sure to follow @IBM_DB2 on Twitter to get the information as it is published.
