Posts Tagged ‘DB2’

SoftLayer Direct Link Brings Hybrid Cloud to System z and Power

June 26, 2014

Back in February, IBM announced that SoftLayer was integrating IBM Power Systems into its cloud infrastructure, a move that promised to deliver a level and breadth of services beyond what has traditionally been available over the cloud. Combined with new services and tools announced at the same time, this would help organizations deploy hybrid and private cloud environments.

Back then IBM included the System z in the announcement as well by bolstering its System z cloud portfolio with IBM Wave for z/VM. IBM Wave promises to provide rapid insight into an organization’s virtualized infrastructure with intelligent visualization, simplified monitoring and unified management. Specifically, Wave helps the organization more easily manage large numbers of virtual machines.

Now it is June, the snow has finally melted, and IBM’s SoftLayer is introducing Direct Link to the computing public. Direct Link had previously been available to only a select few customers. Direct Link, in effect, is a dedicated private network connection for creating hybrid clouds. Organizations connect their private IT infrastructure to public cloud resources by going directly to the SoftLayer platform, which streamlines delivery over the network. Direct Link users avoid the need to traverse the public Internet.

The focus here is on hybrid clouds. When an organization with a private cloud, say a mainframe hosting a large amount of IT resources and services behind the firewall, needs resources such as extra capacity or services it doesn’t have, it can turn to the public cloud for those extra resources or services. The combination of the private cloud and tightly connected public cloud resources forms a hybrid cloud. If you’re attending a webinar on hybrid clouds, at this point the speaker usually says …and then you just punch out to the public cloud to get x, y, or z resource or service. It always sounds so simple, right?

As far as the System z goes, SoftLayer was not actually integrated with the z in the February announcement, although DancingDinosaur expects it will be eventually if IBM is serious about enterprise cloud computing. For now, the z sits in the on-premise data center, a private cloud so to speak. It runs CICS and DB2 and all the systems it is known for and, especially, security. From there, however, it can connect to an application server, dedicated or virtual, on the SoftLayer Cloud Server to form a Hybrid System z-Enterprise Cloud. As presented at SHARE this past spring, the resulting Hybrid System z-Cloud Enterprise Architecture (slides 46-49) provides the best of both worlds, secure transactions combined with the dynamics of the cloud.

Direct Link itself consists of a physical, dedicated network connection from your data center, on-premise private cloud, office, or co-location facility to SoftLayer’s data centers and private network through one of the company’s 18 network Points of Presence (PoPs) around the world. These PoPs reside within facilities operated by SoftLayer partners including Equinix, Telx, Coresite, Terremark, Pacnet, InterXion and TelecityGroup, which provide access for SoftLayer customers, especially those with infrastructure co-located in the same facilities.

Direct Link, essentially an appliance, eliminates the need to traverse the public Internet to connect to the SoftLayer private network. Direct Link enables organizations to completely control access to their infrastructure and services, the speed of their connection to SoftLayer, and how data is routed. In the process, IBM promises:

  • Higher network performance consistency and predictability
  • Streamlined and accelerated workload and data migration
  • Improved data and operational security

If you are not co-located in any of the above facilities operated by one of SoftLayer’s PoP partners, it appears you will have to set up an arrangement with one of them. SoftLayer promises to hold your hand and walk you through the setup process.

Once you do have it set up, Direct Link pricing appears quite reasonable. Available immediately, Direct Link pricing starts at $147/month for a 1Gbps network connection and $997/month for a 10Gbps network connection.

According to Trevor Jones, writing for TechTarget, IBM’s pricing undercuts AWS slightly and Microsoft’s by far. Next month Microsoft, on a discounted rate for its comparable Express Route service, will charge $600 per month for 1 Gbps and $10,000 per month for 10 Gbps. Amazon prices its Direct Connect service at $0.30 per hour for 1 Gbps and $2.25 per hour for 10 Gbps.
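Because Amazon bills by the hour while IBM and Microsoft bill by the month, a back-of-the-envelope normalization helps put the quoted rates on the same footing. The sketch below uses only the port charges quoted above (data transfer fees, which all three vendors also charge, are excluded) and assumes roughly 730 hours in a month:

```python
# Normalize the quoted dedicated-link rates to a monthly figure.
# Port charges only; data transfer fees are excluded.
HOURS_PER_MONTH = 730  # roughly 24 * 365 / 12

softlayer = {"1g": 147.00, "10g": 997.00}       # flat monthly charge
microsoft = {"1g": 600.00, "10g": 10_000.00}    # discounted monthly rate
amazon = {"1g": 0.30 * HOURS_PER_MONTH,         # hourly charge, always on
          "10g": 2.25 * HOURS_PER_MONTH}

for tier in ("1g", "10g"):
    print(f"{tier}: SoftLayer ${softlayer[tier]:,.2f}  "
          f"Amazon ${amazon[tier]:,.2f}  Microsoft ${microsoft[tier]:,.2f}")
```

On these assumptions an always-on 1 Gbps Direct Connect works out to about $219/month, which is consistent with Jones’s conclusion: IBM undercuts AWS slightly and Microsoft by a wide margin at both tiers.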

Your System z or new Power server integrated with SoftLayer can provide a solid foundation for hybrid cloud nirvana. Just add Direct Link and make arrangements with public cloud resources and services. Presto, you have a hybrid cloud.

BTW, IBM Enterprise 2014 is coming in October to Las Vegas. DancingDinosaur expects to hear a lot about the z and Power, SoftLayer, and hybrid clouds there.

DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog and at Technologywriter.com

The Next Generation of Mainframers

March 6, 2014

With seemingly every young person with any technology inclinations aiming to become the next WhatsApp and walk away with some of Facebook’s millions, it is fair to wonder: Where is the next generation of mainframers going to come from and who are they going to be?

The answer: IBM is lining them up now. As the mainframe turns 50 you’ll have a chance to meet some of these up and coming mainframers as part of IBM’s 50th Mainframe Anniversary celebration in New York, April 8, when IBM announces winners of the World Championship round of its popular Master the Mainframe competition.

According to IBM, the Championship is designed to assemble the best university students from around the globe who have demonstrated superior technical skills through participation in their regional IBM Master the Mainframe Contests. Out of the 20,000 students who have engaged in country-level Master the Mainframe Contests over the last three years, the top 44 students from 22 countries have been invited to participate in the inaugural IBM Master the Mainframe World Championship.

These students will spend the month of March working through the Systems of Engagement concept, an expansion of the traditional Systems of Record—core transaction systems—that have been the primary workload of mainframe computing. The students will deploy Systems of Record mainframe business applications written in Java and COBOL using DB2 for z/OS APIs to demonstrate how the Systems of Engagement concept takes full advantage of the mainframe’s advanced capabilities. In short, the mainframe is designed to support tomorrow’s most in-demand complex workloads: Big Data, Cloud, and Mobile computing, and to do them all with the most effective enterprise-class security. The students will showcase their applications on April 7, 2014 in New York City, where judges will determine which student earns the distinction of “Master the Mainframe World Champion.”

Representing the United States are Mugdha Kadam from the University of Florida, Elton Cheng from the University of California San Diego, and Rudolfs Dambis from the University of Nevada Las Vegas. You can follow the progress of the competitors here.  After March 17 the site will include a leaderboard so you can follow your favorites. No rumors of betting pools being formed yet but it wouldn’t surprise DancingDinosaur.  Win or not, each competitor should be a prime candidate if your organization needs mainframe talent.

This is part of IBM’s longstanding System z Academic Initiative, which has been expanding worldwide and now encompasses over 64,000 students at more than 1,000 schools across 67 countries. And now high school students are participating in the Master the Mainframe competition. Over 360 companies are actively recruiting these students, including Baldor, Dillards, JB Hunt, Wal-mart, Cigna, Compuware, EMC, Fidelity, JP Morgan Chase, and more.

Said Jeff Gill, at VISA: “Discovering IBM’s Academic Initiative has been a critical success factor in building a lifeline to our future—a new base of Systems Engineers and Applications Developers who will continue to evolve our mainframe applications into flexible open enterprise solutions while maintaining high volume / high availability demands. Without the IBM Academic Initiative, perhaps we could have found students with aptitude – but participation in the Academic Initiative demonstrates a student’s interest in mainframe technology which, to us, translates to a wise long-term investment.” Gill is one of the judges of the Master the Mainframe World Championship.

Added Martin Kennedy of Citigroup: “IBM’s Master the Mainframe Contest offers a great resource to secure candidates and helps the company get critical skills as quickly as possible.”

The Master the Mainframe Championship and even the entire 50th Anniversary celebration that will continue all year are not really IBM’s primary mainframe thrust this year. IBM’s real focus is on emphasizing the forward-moving direction of the mainframe. As IBM puts it: “By continually adapting to trends and evolving IT, we’re driving new approaches to cloud, analytics, security and mobile computing to help tackle challenges never before thought possible. The pioneering innovations of the mainframe all serve one mission—deliver game-changing technology that makes the extraordinary possible and improves the way the world works.”

DancingDinosaur covers the mainframe and other enterprise-class technology. Watch this blog for more news on the mainframe and other enterprise systems including Power, enterprise storage, and enterprise-scale cloud computing.

With that noted, please plan to attend Edge 2014, May 19-23 in Las Vegas. Being billed as an infrastructure and storage technology conference, it promises to be an excellent follow-on to last year’s Edge conference.  DancingDinosaur will be there, no doubt hanging out in the blogger’s lounge where everyone is welcome. Watch this blog for upcoming details on the most interesting sessions.

And follow DancingDinosaur on Twitter, @mainframeblog

Latest in System z Software Pricing—Value Unit Edition

December 5, 2013

Some question how sensitive IBM is to System z costs and pricing. Those who attended any of David Chase’s several briefings on System z software pricing at Enterprise 2013 this past October, however, would realize the convulsions the organization goes through for even what seems like the most trivial of pricing adjustments. So it is no small deal that IBM is introducing something called Value Unit Edition (VUE) pricing for System z software.

VUE began with DB2. The purpose is to give z data centers greater pricing flexibility while encouraging new workloads on the z. VUE specifically is aimed at key business initiatives such as SOA, Web-based applications, pureXML, data warehousing and operational business intelligence (BI), and commercial (packaged) applications such as SAP, PeopleSoft, and Siebel. What started as a DB2 initiative has now been extended to WebSphere MQ, CICS, and IMS workloads.

In short, VUE pricing gives you a second pricing option for eligible (meaning new) z workloads. BTW, this eligibility requirement isn’t unusual with the z; it applies to the System z Solution Edition deals too. Specifically, VUE allows you to opt to pay for the particular software as a one-time capital expenditure (CAPEX) in the form of a one-time charge (OTC) rather than as a monthly license charge (MLC), which falls into the OPEX category.

Depending on your organization’s particular circumstances the VUE option could be very helpful. Whether it is more advantageous for you, however, to opt for OTC or MLC with any eligible workload is a question only your corporate accountant can answer (and one, hopefully, that is savvy about System z software pricing overall).  This is not something z data center managers are likely to answer on their own.

Either way you go, IBM in general has set the pricing to be cost neutral with a five-year breakeven. Under some circumstances you can realize discounts around the operating systems; in those cases you may do better than a five-year breakeven. But mainly this is more about how you pay, not how much you pay. VUE pricing is available for every System z model, even older ones. Software running under VUE will have to run in its own LPAR so IBM can check its activity as it does with other software under SCRT.
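The five-year breakeven above is easy to reason about as simple arithmetic. The sketch below uses entirely hypothetical charges (actual IBM software pricing depends on MSU capacity and contract terms) just to show how OTC and cumulative MLC compare over time:

```python
# OTC vs. MLC trade-off with made-up numbers; real VUE and MLC charges
# depend on machine capacity (MSUs) and contract terms.
def breakeven_months(one_time_charge, monthly_license_charge):
    """Months after which a one-time charge beats paying monthly."""
    return one_time_charge / monthly_license_charge

# Hypothetical: cost-neutral at five years means the OTC is set at
# roughly 60x the monthly charge.
otc = 600_000.0   # hypothetical one-time charge (CAPEX)
mlc = 10_000.0    # hypothetical monthly license charge (OPEX)
print(f"Breakeven: {breakeven_months(otc, mlc):.0f} months")
```

If a workload will run well past the breakeven point, the OTC side of the ledger looks better on paper; whether CAPEX or OPEX is actually preferable is, as noted, a question for the corporate accountant.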

In summary, the main points of VUE are:

  • One-time-charge (OTC) pricing option across key middleware and packaged applications
  • The ability to consolidate or grow new workloads without increasing operational expense
  • Deployment on a z New Application License Charge (zNALC) LPAR, which, as expected, runs under the zNALC terms and conditions
  • Of course, new applications must qualify; the workload really has to be new
  • Allows a reduced price for the z/OS operating system
  • Runs as a mixed environment, some software MLC, some OTC
  • Selected ISV offerings qualify for VUE

Overall, System z software pricing can be quite baffling. There is nothing really comparable in the distributed world. The biggest benefit of VUE comes from the flexibility it allows, OPEX or CAPEX, not from any small discount on z/OS. Given the set of key software and middleware VUE applies to, the real opportunity lies in using it to bring on new projects that expand the footprint of the z in your organization. As DancingDinosaur has pointed out before, the more workloads you run on the z, the lower your cost per workload.

Follow DancingDinosaur on Twitter, @mainframeblog

IBM Big Data Innovations Heading to System z

April 4, 2013

Earlier this week IBM announced new technologies intended to help companies and governments tackle Big Data by making it simpler, faster and more economical to analyze massive amounts of data. Its latest innovations, IBM suggested, would drive reporting and analytics results as much as 25 times faster.

The biggest of IBM’s innovations is BLU Acceleration, targeted initially for DB2. It combines a number of techniques to dramatically improve analytical performance and simplify administration. A second innovation, referred to as the enhanced Big Data Platform, improves the use and performance of the InfoSphere BigInsights and InfoSphere Streams products. Finally, it announced the new IBM PureData System for Hadoop, designed to make it easier and faster to deploy Hadoop in the enterprise.

BLU Acceleration is the most innovative of the announcements, probably a bona fide industry first, although others, notably Oracle, are scrambling to do something similar. BLU Acceleration enables much faster access to information by extending the capabilities of in-memory systems. It loads data into RAM for faster performance instead of leaving it on hard disk, and dynamically moves unused data out to storage. It even works, according to IBM, when data sets exceed the size of memory.

Another innovation included in BLU Acceleration is data skipping, which allows the system to skip over irrelevant data that doesn’t need to be analyzed, such as duplicate information. Other innovations include the ability to analyze data in parallel across different processors; the ability to analyze data transparently to the application, without the need to develop a separate layer of data modeling; and actionable compression, where data no longer has to be decompressed to be analyzed because the data order has been preserved.   Finally, it leverages parallel vector processing, which enables multi-core and SIMD (Single Instruction Multiple Data) parallelism.
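The data-skipping idea can be sketched in a few lines of Python. This is a minimal illustration only, assuming nothing about DB2’s internals: keep min/max metadata for each block of column values and skip any block whose range cannot satisfy the predicate. BLU’s actual synopsis structures are far more sophisticated.

```python
# Minimal data-skipping sketch: per-block min/max metadata lets a scan
# skip whole blocks that cannot contain matching values.

def build_synopsis(values, block_size=4):
    """Split a column into blocks and record (min, max, block) for each."""
    blocks = [values[i:i + block_size] for i in range(0, len(values), block_size)]
    return [(min(b), max(b), b) for b in blocks]

def query_greater_than(synopsis, threshold):
    """Return values > threshold, skipping blocks whose max <= threshold."""
    hits, blocks_scanned = [], 0
    for lo, hi, block in synopsis:
        if hi <= threshold:     # whole block irrelevant -- never read it
            continue
        blocks_scanned += 1
        hits.extend(v for v in block if v > threshold)
    return hits, blocks_scanned

synopsis = build_synopsis([1, 2, 3, 4, 10, 12, 11, 13, 2, 1, 3, 2])
hits, scanned = query_greater_than(synopsis, 9)
print(hits, scanned)  # only the middle block is actually read
```

With the toy data above, two of the three blocks are skipped outright; on a real columnar warehouse the same trick avoids reading large stretches of irrelevant data entirely.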

During testing, IBM reported, some queries in a typical analytics workload ran more than 1,000x faster when using the combined innovations of BLU Acceleration. It also resulted in 10x storage space savings during beta tests. BLU Acceleration will be used first in DB2 10.5 and Informix 12.1 TimeSeries for reporting and analytics. It will be extended to other data workloads and other products in the future.

BLU Acceleration promises to be as easy to use as load-and-go. BLU tables coexist with traditional row tables, using the same schema, storage, and memory. You can query any combination of row or BLU (columnar) tables, and IBM assures easy conversion of conventional tables to BLU tables.

DancingDinosaur likes seeing the System z included as an integral part of the BLU Acceleration program.  The z has been a DB2 workhorse and apparently will continue to be as organizations move into the emerging era of big data analytics. On top of its vast processing power and capacity, the z brings its unmatched quality of service.

Specifically, IBM has called out the z for:

  • InfoSphere BigInsights via the zEnterprise zBX for data exploration and online archiving
  • IDAA (in-memory Netezza technology) for reporting and analytics as well as operational analytics
  • DB2 for SQL and NoSQL transactions with enhanced Hadoop integration in DB2 11 (beta)
  • IMS for highest performance transactions with enhanced Hadoop integration in IMS 13 (beta)

Of course, the zEnterprise is a full player in hybrid computing through the zBX so zEnterprise shops have a few options to tap when they want to leverage BLU Accelerator and IBM’s other big data innovations.

Finally, IBM announced the new IBM PureData System for Hadoop, which should simplify and streamline the deployment of Hadoop in the enterprise. Hadoop has become the de facto open systems approach to organizing and analyzing vast amounts of unstructured as well as structured data, such as posts to social media sites, digital pictures and videos, online transaction records, and cell phone location data. The problem with Hadoop is that it is not intuitive for conventional relational DBMS staff and IT. Vendors everywhere are scrambling to overlay a familiar SQL approach on Hadoop’s map/reduce method.
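The map/reduce method that trips up relational DBMS staff can be shown in miniature. The sketch below is plain Python, not the Hadoop API: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step aggregates each group, here in the classic word-count form.

```python
# Map/reduce in miniature (pure Python, not Hadoop): map emits (key, value)
# pairs, shuffle groups them by key, reduce aggregates each group.
from collections import defaultdict

def map_phase(records):
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big analytics", "big iron"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 3, 'data': 1, 'analytics': 1, 'iron': 1}
```

Hadoop distributes each of these phases across a cluster and handles failures along the way; the SQL overlays the vendors are racing to build essentially compile familiar queries down to pipelines of exactly these map, shuffle, and reduce steps.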

The new IBM PureData System for Hadoop promises to reduce from weeks to minutes the ramp-up time organizations need to adopt enterprise-class Hadoop technology with powerful, easy-to-use analytic tools and visualization for both business analysts and data scientists. It also provides enhanced big data tools for management, monitoring, development, and integration with many more enterprise systems.  The product represents the next step forward in IBM’s overall strategy to deliver a family of systems with built-in expertise that leverages its decades of experience in reducing the cost and complexity associated with information technology.

IBM PureData Brings New Analytics Platform

October 18, 2012

IBM finally has started to expand its PureSystems family of systems with the introduction of the PureData System.  The system promises to let organizations more efficiently manage and quickly analyze petabytes of data and then intelligently apply those insights in addressing business issues across their organization.

This is not a surprise. From the start, IBM talked about a family of PureSystems beyond the initial PureFlex and PureApplications. When the PureSystems family was introduced last spring, DancingDinosaur expected IBM to quickly add new expert servers starting with something it guessed would be called PureAnalytics and maybe another called PureTransactions.  PureData isn’t that far off. The new systems are being optimized specifically for transactional operations and data analytics workloads.

Specifically, PureData System for Transactions has been integrated and optimized as a ready-to-run database platform designed and tuned specifically for transactional data workloads. It supports DB2 applications unchanged and Oracle database applications with only minimal changes. The machines come in three workload-specific models optimized for transactional, operational, or big data analytics workloads. They are:

  • PureData System for Transactions: Aimed at retail and credit card processing environments that depend on rapid handling of transactions and interactions. These transactions may be small, but the volume and frequency require fast and efficient processing. The new system provides hardware and software configurations integrated and optimized for flexibility, integrity, availability, and scalability for any transaction workload.
  • PureData System for Analytics: Enables organizations to quickly and easily analyze and explore big data, up to multi petabytes in volume. The new system simplifies and optimizes performance of data warehouse services and analytics applications. Powered by Netezza technology (in-memory analytics), the new system aims to accelerate analytics and boasts what IBM describes as the largest library of in-database analytic functions on the market today. Organizations can use it to predict and avoid customer churn in seconds, create targeted advertising and promotions using predictive and spatial analysis, and prevent fraud.
  • PureData System for Operational Analytics: Here organizations can receive actionable insights concurrently on more than 1,000 business operations to support real-time decision making. Operational warehouse systems are used for fraud detection during credit card processing, to deliver customer insights to call center operations (while the customer is still on the call or online), and to track and predict real-time changes in supply and demand.

All the systems include PureSystems pattern-based expertise and automation. From a configuration standpoint, the full rack system can be pretty rich: 386 x86 processor cores, 6.2 TB DRAM, 19.2 TB flash (SSD), 128 TB disk (HDD), advanced storage tiering, up to 10x compression, a high speed RDMA interconnect, and dual internal 10 GB network links. Systems, however, can range from 96 cores to 386 cores. IBM reports early customer results of 10-100x faster performance over traditional custom-built systems and 20x greater concurrency and throughput for tactical queries resulting, in part, from IBM’s patented MPP hardware acceleration.

IBM hasn’t disclosed pricing, which is highly subject to the particular configuration anyway. However, the company is quick to tout its introductory deals: Credit-qualified clients that elect IBM financing can see immediate benefits with PureData System by deferring their first payment until January 2013 or obtaining a zero percent (interest-free) loan for 12, 24 or 36 months.

PureData may be better thought of as a data appliance delivering data services fed by applications that generate the data and reside elsewhere. With its factory built-in expertise, patterns, and appliance nature organizations can have, according to IBM, a PureData system up and running in hours, not days or weeks; run complex analytics in minutes, not hours; and handle more than 100 databases on a single system. PureData can be deployed in one step simply by specifying the cluster name, description, and applicable topology pattern. Built-in expertise handles the rest.

Now the game is to guess what the next PureSystems expert server will be. DancingDinosaur’s guess: a highly scalable implementation of VDI, maybe called PureDesktop.

IBM zEnterprise—the Software Difference

April 4, 2011

You could argue that there is no high end server available today to match the IBM zEnterprise/zBX in processing power, reliability, and scalability. It holds its own in terms of speeds and feeds, number of cores, and memory. Software, however, may turn out to be the biggest differentiator among high end servers, and IBM has optimized a ton of software for the z, something others mainly just talk about.

The high end server market has suddenly entered a period of change. In March Oracle announced that it will no longer support Itanium processors. HP immediately countered with a statement of support for Itanium. SGI announced a 256-core Xeon Windows system. Also in March, Quanta Computer, a Taiwanese manufacturer, reported squeezing 512 cores into a pizza box server running the Tilera multi-core processor. Tilera’s roadmap goes out to 2013, when it expects to pack 200 cores onto a processor. Of course, IBM launched the first hybrid server, the zEnterprise, consisting of the multi-core z196 coupled with the zBX, last summer.

This recent flurry of server activity at the large-scale, multi-core end of the market leaves server buyers somewhat confused. One writes to DancingDinosaur asking: What will be the ultimate retail price per core?  What’s the current price per core of, perhaps, a chassis full of 8-core IBM System p blades, an HP Superdome, or an SGI UV 1000 running Windows or Linux?

Fair questions, for sure. The published OEM price last year was $900 per chip for a 64-core Tilera processor, which rounds to $14 per core. SGI reports that the Altix UV starts at $50,000, with Microsoft software an additional $2,999 per four sockets (32 cores). A buyer could end up facing different vendors and technologies competing at the $50, $100, $500, $1,000, $5,000, and $10,000 per core price points. Each vendor will be promoting different architecture, configuration, memory optimization, performance, and even form factor (multi-U, pizza box, blades) attributes.
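The per-core figures quoted above are straightforward division, reproduced here for the skeptical (chip and software list prices only; system, memory, and support costs would change the picture considerably):

```python
# Per-core arithmetic behind the figures quoted above.
tilera_chip_price = 900.0    # published OEM price per 64-core chip
tilera_cores = 64
print(f"Tilera: ${tilera_chip_price / tilera_cores:.2f}/core")

# SGI Altix UV: $2,999 of Microsoft software per four sockets (32 cores).
ms_per_core = 2999.0 / 32
print(f"Microsoft software on UV: ${ms_per_core:.2f}/core")
```

That puts the Tilera silicon at about $14 per core and the Microsoft software layer alone at roughly $94 per core, which is why comparing vendors on a single price-per-core number is so slippery.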

This is not just about price but integration, internal communication speeds, optimization, and more. At this point all the vendors need to be more forthcoming and transparent.

But this may not turn out to be a hardware, processor, memory, speeds and feeds battle. It may not even turn into a price-per-core battle or a total cost of ownership (TCO) vs. total cost of acquisition (TCA) battle. Ultimately, it has to come down to workloads supported and delivered, and that means software. And when it comes to workload optimization and software IBM already has an advantage, especially when compared to Oracle and HP.

A quick peek at IBM’s software lineup suggests the company has a lot of topnotch software to run on its hardware.  Factor in the ISV ecosystem and the IBM picture gets even better.

Let’s start with Gartner naming IBM the worldwide market share leader overall in the application infrastructure and middleware software segment.  If you drill down into the various sub-markets, IBM often comes up as leader there too. For example, IBM leads the business process management (BPM) market, with better than double the share of its closest competitor. IBM also leads in the message oriented middleware market, the transaction processing monitor market, and the combined markets for Enterprise Service Bus (ESB) and integration appliances.

Critical segments for sure, but businesses need more. For that IBM offers DB2, a powerful enterprise database management system that can rival Oracle. WebSphere goes far beyond being just an application server; it encompasses a wide range of functionality including portals and commerce. With Rational, IBM can cover the entire application development lifecycle, and with Lotus IBM nails down communication and collaboration. And don’t forget Cognos, a proven BI tool, plus all the IBM Smart Analytics tools. Finally, IBM provides the Tivoli product set to manage both systems and storage.

The point: when it comes to high end servers it is not just about processor cores. It’s about systems optimized for the software you need to run your workloads. With enterprise data centers, that will often be IBM.

 

