Posts Tagged ‘mobile’

z System-Power-Storage Still Live at IBM

January 5, 2017

A mid-December briefing by Tom Rosamilia, SVP, IBM Systems, reassured some that IBM wasn’t putting its systems and platforms on the back burner after racking up quarterly financial losses for years. Expect new IBM systems in 2017. A few days later IBM announced that Japan-based APLUS Co., Ltd., which operates credit card and settlement service businesses, selected IBM LinuxONE as its mission-critical system for credit card payment processing. Hooray!


LinuxONE’s security and industry-leading performance will ensure APLUS achieves its operational objectives as online commerce heats up and companies rely on cloud applications to draw and retain customers. Especially in Japan, where online and mobile shopping has become increasingly popular, the use of credit cards has grown, with more than 66 percent of consumers choosing that method for conducting online transactions. And with 80 percent enterprise hybrid cloud adoption predicted by 2017, APLUS is well positioned to connect cloud transactions leveraging LinuxONE. Throw in IBM’s expansion of blockchain capabilities and the APLUS move looks even smarter.

The growth in spending by international visitors, IBM notes, and the emergence of FinTech firms in Japan have led to a diversification of payment methods to which the local financial industry struggles to respond. APLUS, which issues well-known credit cards such as T Card Plus, plans to offer leading-edge financial services by merging groups to achieve lean operations and improved productivity and efficiency. By choosing to update its credit card payment system with LinuxONE infrastructure, APLUS gains an advanced IT environment that supports its business growth by helping provide near-constant uptime. In addition to updating its server architecture, APLUS has deployed IBM storage to manage mission-critical data: the IBM DS8880, mainframe-attached storage that delivers tight integration with IBM z Systems and LinuxONE environments.

LinuxONE, however, was only one part of the IBM Systems story Rosamilia set out to tell. There also is the z13s for encrypted hybrid clouds, the z/OS platform for Apache Spark data analytics, and even more secure cloud services via blockchain on LinuxONE, delivered through Bluemix or on premises.

z/OS will get attention in 2017 too. “z/OS is the best damn OLTP system in the world,” declared Rosamilia. He went on to imply that enhancements and upgrades to key z systems were coming in 2017, especially CICS, IMS, and a new release of DB2. Watch for new announcements coming soon as IBM tries to push z platform performance and capacity for z/OS and OLTP.

Rosamilia also talked up the POWER story. Specifically, Google and Rackspace have been developing OpenPOWER systems for the Open Compute Project. He also pointed to new POWER LC servers running POWER8 with the NVIDIA NVLink interconnect, further innovations coming through the OpenCAPI Consortium, and the IBM-NVIDIA collaboration to deliver PowerAI, part of IBM’s cognitive efforts.

As much as Rosamilia may have wanted to talk about platforms and systems, IBM continues to avoid using terms like systems and platforms. So Rosamilia’s real intent was to discuss z and Power in conjunction with IBM’s strategic initiatives. Remember those: cloud, big data, mobile, analytics. Lately, it seems, those initiatives have been culled down to cloud, hybrid cloud, and cognitive systems.

IBM’s current message is that IT innovation no longer comes from just the processor. Instead, it comes through scaling performance by workload and sustaining leadership through ecosystem partnerships. We’ve already seen some of the fruits of that innovation through the Power community. It would be nice to see some of that coming to the z too, maybe through the Open Mainframe Project. But that isn’t about z/OS. Any boost in CICS, DB2, and IMS will have to come from the core z team; the Open Mainframe Project is about Linux on z.

The first glimpse we had of this came last spring in a system dubbed Minsky, which was described back then by commentator Timothy Prickett Morgan. With the Minsky machine, IBM is using NVLink ports on the updated Power8 CPU, which was shown in April at the OpenPower Summit and is making its debut in systems actually manufactured by ODM Wistron and rebadged, sold, and supported by IBM. The NVLink ports are bundled up in a quad to deliver 80 GB/sec bandwidth between a pair of GPUs and between each GPU and the updated Power8 CPU.

The IBM version, Morgan describes, aims to create a very brawny node with very tight coupling of GPUs and CPUs so they can better share memory, have fewer overall GPUs, and more bandwidth between the compute elements. IBM is aiming Minsky at HPC workloads, according to Morgan, but there is no reason it cannot be used for deep learning or even accelerated databases.

Is this where today’s z data center managers want to go? No one is likely to spurn more performance, especially if it is accompanied by a price/performance improvement. Whether rank-and-file z data centers are queueing up for AI or cognitive workloads remains to be seen. The sheer volume and scale of expected activity, however, will require some form of automated intelligent assist.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Power System S822LC for HPC Beat Sort Record by 3.3x

November 17, 2016

The new IBM Power System S822LC for High Performance Computing servers set a new benchmark for sorting by taking less than 99 seconds (98.8 seconds) to finish sorting 100 terabytes of data in the Indy GraySort category, improving on last year’s best result, 329 seconds, by a factor of 3.3. The win proved a victory not only for the S822LC but for the entire OpenPOWER community. The team of Tencent, IBM, and Mellanox has been named the Winner of the Sort Benchmark annual global computing competition for 2016.

Power System S822LC for HPC

Specifically, the machine, an IBM Power S822LC for High Performance Computing (HPC), features NVIDIA NVLink technology optimized for the Power architecture and NVIDIA’s latest GPU technology. The new system supports emerging computing methods of artificial intelligence, particularly deep learning. The combination, newly dubbed IBM PowerAI, provides a continued path for Watson, IBM’s cognitive solutions platform, to extend its artificial intelligence expertise in the enterprise by using several deep learning methods to train Watson.

Actually Tencent Cloud Data Intelligence (the distributed computing platform of Tencent Cloud) won each category in both the GraySort and MinuteSort benchmarks, establishing four new world records with its performance, outperforming the 2015 best speeds by 2-5x. Said Zeus Jiang, Vice President of Tencent Cloud and General Manager of Tencent’s Data Platform Department: “In the future, the ability to manage big data will be the foundation of successful Internet businesses.”

To get this level of performance Tencent runs 512 IBM OpenPOWER LC servers and Mellanox 100Gb interconnect technology, using the infrastructure to improve the performance of Tencent Cloud big data products. Online prices for the S822LC start at about $9,600 for a 2-socket, 2U system with up to 20 cores (2.9-3.3GHz), 1 TB of memory (32 DIMMs), 230 GB/sec sustained memory bandwidth, 2x SFF (HDD/SSD), 2 TB storage, 5 PCIe slots (4 CAPI-enabled), and up to 2 NVIDIA K80 GPUs. Be sure to shop for volume discounts.

The 2016 Sort Benchmark results:

Benchmark            2016 Record (Tencent Cloud)   2015 World Record   Improvement
Daytona GraySort     44.8 TB/min                   15.9 TB/min         2.8x greater performance
Indy GraySort        60.7 TB/min                   18.2 TB/min         3.3x greater performance
Daytona MinuteSort   37 TB/min                     7.7 TB/min          4.8x greater performance
Indy MinuteSort      55 TB/min                     11 TB/min           5x greater performance
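The improvement factors in the table are easy to sanity-check from the raw throughput figures, and the 98.8-second Indy GraySort run converts directly to the per-minute rate shown. A minimal sketch:

```python
# Verify the reported speedups from the table's raw throughput figures.
records = {
    "Daytona GraySort":   (44.8, 15.9),
    "Indy GraySort":      (60.7, 18.2),
    "Daytona MinuteSort": (37.0, 7.7),
    "Indy MinuteSort":    (55.0, 11.0),
}
for name, (tb_min_2016, tb_min_2015) in records.items():
    print(f"{name}: {tb_min_2016 / tb_min_2015:.1f}x")

# 100 TB sorted in 98.8 seconds, expressed as a per-minute rate:
print(f"{100 / 98.8 * 60:.1f} TB/min")  # matches the Indy GraySort entry
```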

Pretty impressive, huh. As IBM explains it: Tencent Cloud used 512 IBM OpenPOWER servers and Mellanox 100Gb interconnect technology, improving the performance of Tencent Cloud big data products with the infrastructure. Then Tom Rosamilia, IBM Senior VP, weighed in: “Industry leaders like Tencent are helping IBM and our OpenPOWER partners push performance boundaries for a cognitive era defined by big data and advanced analytics.” The computing record achieved by Tencent Cloud on OpenPOWER turned out to be an important milestone for the OpenPOWER Foundation too.

Added Amir Prescher, Sr. Vice President, Business Development, at Mellanox Technologies: “Real-time-analytics and big data environments are extremely demanding, and the network is critical in linking together the extra high performance of IBM POWER-based servers and Tencent Cloud’s massive amounts of data.” In effect, Tencent Cloud developed an optimized hardware/software platform to achieve new computing records while demonstrating that Mellanox’s 100Gb/s Ethernet technology can deliver total infrastructure efficiency and improve application performance, which should make it a favorite for big data applications.

Behind all of this was the new IBM Power System S822LC for High Performance Computing server. Currently the servers feature a new IBM POWER8 chip designed for demanding workloads including artificial intelligence, deep learning, and advanced analytics. However, a new POWER9 chip has already been previewed and is expected next year. Whatever the S822LC can do running POWER8, just imagine how much more it will do running POWER9, which IBM describes as a premier acceleration platform. DancingDinosaur covered POWER9 in early Sept. here.

To capitalize on the hardware, IBM is making a new deep learning software toolkit available, PowerAI, which runs on the recently announced IBM Power S822LC server built for artificial intelligence that features NVIDIA NVLink interconnect technology optimized for IBM’s Power architecture. The hardware-software combination provides more than 2X performance over comparable servers with 4 GPUs running AlexNet with Caffe. The same 4-GPU Power-based configuration running AlexNet with BVLC Caffe can also outperform 8 M40 GPU-based x86 configurations, making it the world’s fastest commercially available enterprise systems platform on two versions of a key deep learning framework.

Deep learning is a fast-growing machine learning method that extracts information by crunching through millions of pieces of data to detect and rank the most important aspects of the data. Publicly embraced by leading consumer web and mobile application companies, deep learning is quickly being adopted by more traditional enterprises across a wide range of industry sectors: in banking to advance fraud detection through facial recognition; in automotive for self-driving automobiles; and in retail for fully automated call centers with computers that can better understand speech and answer questions. Is your data center ready for deep learning?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


BMC Mainframe Survey Confirms z System Is Here to Stay

November 11, 2016

No surprise there. BMC’s 11th annual mainframe survey, covering 1,200 mainframe executives and tech professionals, found 58% of respondents reporting that usage of the mainframe is increasing as they look to capitalize on every infrastructure advantage it provides and add more workloads. Another 23% consider the mainframe the best option to run critical work.


IBM z10

Driving the continuing interest in the mainframe are the new demands for data handling, scalable processing, analytics, and more. According to the BMC survey nearly 60% of companies are seeing increased data and transaction volumes. They opt to stay with the mainframe for its highly secure, superior data handling and transaction serving, particularly as digital business adds unpredictability and volatility to workloads.

Overall, respondents fell into three primary groups: 1) entrenched mainframe shops (58%) that are on board for the long haul; 2) shops (23%) that intend to maintain a steady amount of work on the mainframe; and 3) shops (19%) that are moving away from the mainframe. The first two groups, the committed mainframe shops, amount to just over 80% of survey respondents.

Many companies surveyed are focused on addressing the increased workload demands, especially the rapidly growing demand for new applications. But surprisingly, the survey does not directly touch on hybrid cloud, cognitive computing, or any of the latest technologies IBM has been promoting, not even DevOps, which can streamline mainframe application development and deployment. “We are not hearing much about hybrid cloud environments or blockchain yet. Most companies seem to be in the early tire-kicking stage,” observed John McKenny, BMC Vice President, Strategy and Operations.

Eighty-eight percent of companies in the first group, the entrenched mainframe shops, for example, are looking to increase the workloads they run in Java on the mainframe, primarily to address new application demands. It doesn’t hurt that Java on the mainframe can also help lower data center costs by directing workloads to lower cost assist processors.

Other interesting BMC survey findings:

  • Half of the respondents report keeping 50% of their data on the mainframe and continue to invest in the platform for reasons you already know: security, availability, and data serving capability
  • Continued steady growth of Linux in production on the z: 41% in 2014, 48% in 2015, 52% in 2016
  • Increased use of Java on the mainframe, reported by 67% of respondents, who cite the need to meet growing application demand

Those looking to reduce mainframe presence cited three reasons: 1) perception of high cost, 2) outdated management understanding, and 3) looking for ways to reduce workloads over time.  DancingDinosaur has spoken with mainframe shops intending to migrate off the z and they cite the usual reasons, especially #1 above.

Top mainframe priorities for 2016, according to the BMC survey: cost reduction/optimization (65%); data privacy, compliance, and security (50%); application availability (49%); and application modernization (41%). Responses indicated the priorities for next year haven’t changed at all.

Surprisingly, many of the latest technologies for the z that IBM has touted recently have not yet shown up in the BMC survey responses, except maybe Java and Linux. This would include hybrid clouds, blockchain, IoT, and cognitive computing. IDC, for example, already is projecting cognitive computing to grow at a CAGR of 55.1% from 2016 to 2020. For z shops, however, cognitive computing appears almost invisible.

In some cases with surveys like this you need to read between the lines. Where respondents report changes in activity levels driving application growth, growing interest in Java, frequent application changes, or references to operational analytics, they are making oblique references to mobile, big data, or even cognitive computing and other recent technologies for the z.

At its best, the BMC survey notes that digital technologies are transforming the ways in which mainframe shops conduct business and interact with their customers. Adds BMC mainframe customer Credit Suisse: “IT departments are moving toward centralized, virtualized, and highly automated environments. This is being pursued to drive cost and processing efficiencies. Many companies realize that the Mainframe has provided these benefits for many years and is a mature and stable environment,” said Frank Cortell, Credit Suisse Director of Information Technology.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Compuware Triples Down on Promised Quarterly z System Releases

October 14, 2016

Since January 2015 Compuware has been releasing enhancements to its mainframe software portfolio quarterly. The latest quarterly release, dated Oct. 3, delivers REST APIs for ISPW source code management and DevOps release automation, which let enterprises create their own custom cross-platform DevOps toolchains; integration of Compuware Abend-AID with Syncsort Ironstream; and new SEA plug-ins for Topaz Workbench. The plug-ins will help less skilled IBM z/OS developers manage mainframe batch processing along with other z platform tasks.


Compuware’s point is to position the mainframe at the heart of agile DevOps computing. As part of the effort, it needs to deliver slick, modern tools that will appeal to the non-mainframers who are increasingly moving into multi-platform development roles that include the mainframe. These people want to work as if they are dealing with a Windows or Linux machine. They aren’t going to wrestle with arcane mainframe constructs like abends or JCL. Traditional mainframe dev, test, and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets. The new dev and ops people filling out data center ranks haven’t the patience to learn what they view as antiquated mainframe concepts. They need intelligent tools that visualize the issue and let them intuitively click, drag, drop, and swipe their way through whatever needs to be done.

This is driven by the long-expected attrition of veteran mainframers and the mainframe knowledge and application insight they brought. Only the recession that began in 2008 slowed the exit of aging mainframers. Now they are leaving; one mainframe credit card processor reportedly lost 50 mainframe staff in a month.  The only way to replace this kind of experience is with intelligent and easy to learn tools and expert automation.

Compuware’s response has been to release new tools and enhancements every quarter. It started with Topaz in 2015. DancingDinosaur covered it Jan. 2015 here.  The beauty of Topaz lies in its graphical ease-of-use. Data center newbies didn’t need to know z/OS; they could understand what they were seeing and do meaningful work. With each quarterly release Compuware, in one way or another, has advanced this basic premise.

The most recent advances are streamlining the DevOps process in a variety of ways.  DevOps has emerged as critical with mainframe shops scrambling to remain relevant and effective in a rapidly evolving app dev environment. Just look at Bluemix if you want to see where things are heading.

In the first announcement, Compuware extended mainframe DevOps innovation with REST APIs for ISPW SCM and release automation. The new APIs enable large enterprises to flexibly integrate their numerous other mainframe and non-mainframe DevOps tools with ISPW to create their own custom cross-platform DevOps toolchains. Part of that was the acquisition of the assets associated with Itegrations’ source code management (SCM) migration practice and methodology, which will enable Compuware users to more easily migrate their SCM systems—from Agile-averse products such as CA Endevor, CA Panvalet, CA Librarian, and Micro Focus/Serena ChangeMan, as well as internally developed SCM systems—to ISPW.
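To make the toolchain idea concrete, here is a hypothetical sketch of driving ISPW through REST from a pipeline script. The host, port, path segments, and payload fields below are illustrative assumptions, not Compuware’s documented API; consult the ISPW REST documentation for the real endpoints.

```python
# Hypothetical sketch only: the endpoint path and payload fields are
# illustrative assumptions, not Compuware's documented ISPW REST API.
import json

def build_promote_request(host: str, srid: str, assignment: str, level: str):
    """Construct the URL and JSON body to promote an ISPW assignment's tasks."""
    url = f"https://{host}/ispw/{srid}/assignments/{assignment}/tasks/promote"
    body = json.dumps({"level": level, "changeType": "S"})
    return url, body

url, body = build_promote_request("ces.example.com:48226", "cw09-47623",
                                  "PLAY000001", "QA1")
print(url)  # the request would be POSTed with an auth token header
```

A release-automation tool such as Jenkins could call an endpoint like this as one step in a cross-platform toolchain, alongside the non-mainframe stages of the pipeline.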

According to Compuware, these DevOps toolchains are becoming increasingly important for two reasons:

  • Enterprises must aggressively adopt DevOps disciplines in their mainframe environments to fulfill business requirements for digital agility. Traditional mainframe dev, test and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets to counter new, digitally nimble market disruptors.
  • Data centers need to better integrate the toolchains that support their newly adopted mainframe DevOps workflows with those that support DevOps across their various other platforms. This is because mainframe applications and data so often function as back-end systems-of-record for front-end web and mobile systems-of-engagement in multi-tier/cross-platform environments.

In the second announcement Compuware integrated Abend-AID and Syncsort’s Ironstream to give fast, clear insight into mainframe issues. Specifically, the integration of Abend-AID and Ironstream enables IT to more quickly discover and act upon correlations between application faults and broader conditions in the mainframe environment. This is particularly important, notes Compuware, as enterprises, out of necessity, shift operational responsibilities for the platform to staffs with limited experience on z/OS. Just put yourself in the shoes of a distributed systems manager now dealing with a mainframe: what might appear to be a platform issue may turn out to be a software fault, and vice versa. A retired 30-year mainframe veteran would probably see it immediately (but not always). Mainframe newcomers need a tool with the intelligence to recognize it for them.

In the third announcement Compuware and Software Engineering of America (SEA) announced the release of SEA’s JCLplus+ Remote Plug-In and $AVRS Plug-In for Compuware’s Topaz Workbench mainframe IDE. Again, think about mainframe neophytes. The new plug-ins for Topaz significantly ease challenging JCL- and output-related tasks, according to Compuware, effectively enabling both expert and novice IT staff to perform those tasks more quickly and more accurately in the context of their other mainframe DevOps activities.

An encouraging aspect of this is that Compuware is not doing this alone. The company is teaming up with SEA and with Syncsort to make this happen. As the mainframe vendors work to make mainframe computing easier and more available to lesser trained people it will be good for the mainframe industry as a whole and maybe even help lower the cost of mainframe operations.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Meet the POWER9 Chip Family

September 2, 2016

When you looked at a chip in the past you primarily were concerned with two things: the speed of the chip, usually expressed in GHz, and how much power it consumed. Today the IBM engineers preparing the newest POWER chip, the 14nm POWER9, are tweaking the chip for the different workloads it might run, such as cognitive or cloud, different deployment options, such as scale-up or scale-out, and a host of other attributes. EE Times described it in late August from the Hot Chips conference, where it was publicly unveiled.


IBM POWER9 chip

IBM describes it as a chip family, but maybe it’s best described as the product of an entire chip community, the OpenPOWER Foundation. Innovations include CAPI 2.0, New CAPI, NVIDIA’s NVLink 2.0, PCIe Gen4, and more. It spans a range of acceleration options, from HSDC clusters to extreme virtualization capabilities for the cloud. POWER9 is not just about high speed transaction processing; IBM wants the chip to interpret and reason, ingest and analyze.

POWER has gone far beyond the POWER chips that enabled Watson to (barely) beat the human Jeopardy champions. Going forward, IBM is counting on POWER9 and Watson to excel at cognitive computing, a combination of high speed analytics and self-learning. POWER9 systems should not only be lightning fast but get smarter with each new transaction.

For z System shops, POWER9 offers a glimpse into the design thinking IBM might follow with the next mainframe, probably the z14 that will need comparable performance and flexibility. IBM already has set up the Open Mainframe Project, which hasn’t delivered much yet but is still young. It took the Open POWER group a couple of years to deliver meaningful innovations. Stay tuned.

The POWER9 chip is incredibly dense (below). You can deploy it in either a scale-up or scale-out architecture, with a choice of a version for two-socket servers with 8 direct DDR4 ports and another for multiple chips per server with buffered DIMMs.


IBM POWER9 silicon layout

IBM describes the POWER9 as a premier acceleration platform. That means it offers extreme processor/accelerator bandwidth and reduced latency; coherent memory and virtual addressing capability for all accelerators; and robust accelerated compute options through the OpenPOWER community.

It includes State-of-the-Art I/O and Acceleration Attachment Signaling:

  • PCIe Gen 4 x 48 lanes – 192 GB/s duplex bandwidth
  • 25G Link x 48 lanes – 300 GB/s duplex bandwidth
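Those duplex figures follow directly from the raw per-lane signaling rates. A back-of-envelope sketch (raw bit rates, ignoring protocol encoding overhead):

```python
# Raw signaling arithmetic behind the duplex bandwidth figures above:
# PCIe Gen4 signals at 16 GT/s per lane; the 25G links run at 25 Gb/s.
LANES = 48
pcie_gen4_per_lane = 16 / 8   # 2 GB/s per lane, per direction
link25_per_lane = 25 / 8      # 3.125 GB/s per lane, per direction

print(LANES * pcie_gen4_per_lane * 2)  # 192.0 GB/s duplex
print(LANES * link25_per_lane * 2)     # 300.0 GB/s duplex
```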

And robust accelerated compute options based on open standards, including:

  • On-Chip Acceleration—Gzip x1, 842 Compression x2, AES/SHA x2
  • CAPI 2.0—4x bandwidth of POWER8 using PCIe Gen 4
  • NVLink 2.0—next generation of GPU/CPU bandwidth and integration using 25G Link
  • New CAPI—high bandwidth, low latency and open interface using 25G Link

In scale-out mode it employs direct attached memory through 8 direct DDR4 ports, which deliver:

  • Up to 120 GB/s of sustained bandwidth
  • Low latency access
  • Commodity packaging form factor
  • Adaptive 64B / 128B reads

In scale-up mode it uses buffered memory through 8 buffered channels to provide:

  • Up to 230GB/s of sustained bandwidth
  • Extreme capacity – up to 8TB / socket
  • Superior RAS with chip kill and lane sparing
  • Compatible with POWER8 system memory
  • Agnostic interface for alternate memory innovations

POWER9 was publicly introduced at the Hot Chips conference in late August. Commentators writing in EE Times noted that POWER9 could become a breakout chip, seeding new OEM and accelerator partners and rejuvenating IBM’s efforts against Intel in high-end servers. To achieve that kind of performance IBM deploys large chunks of memory, including 120 MB of embedded DRAM in shared L3 cache, while riding a 7 Tbit/second on-chip fabric. POWER9 should deliver as much as 2x the performance of POWER8 or more when the new chip arrives next year, according to Brian Thompto, a lead architect for the chip, in published reports.

As noted above, IBM will release four versions of POWER9. Two will use eight threads per core and 12 cores per chip geared for IBM’s Power virtualization environment; two will use four threads per core and 24 cores/chip targeting Linux. Both will come in two versions — one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.
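Note that the two core designs expose the same number of hardware threads per chip; only how they are carved up differs. A quick check:

```python
# Both POWER9 core variants yield 96 hardware threads per chip.
smt8_threads = 8 * 12   # 8 threads/core x 12 cores (PowerVM-oriented)
smt4_threads = 4 * 24   # 4 threads/core x 24 cores (Linux-oriented)
print(smt8_threads, smt4_threads)  # 96 96
```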

The diversity of choices, according to Hot Chips observers, could help attract OEMs. IBM has been trying to encourage others to build POWER systems through its OpenPOWER group that now sports more than 200 members. So far, it’s gaining most interest from China where one partner plans to make its own POWER chips. The use of standard DDR4 DIMMs on some parts will lower barriers for OEMs by enabling commodity packaging and lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM’s Latest Flash Announcements Target Dell/EMC

August 26, 2016

The newest IBM storage, announced here earlier this week, aims to provide small, midsize, and global enterprises with virtualized SDS for primary storage and for cloud or cognitive applications and workloads. Central to the effort is IBM Spectrum Virtualize, which automates Storwize all-flash solutions intended to reduce the cost and complexity of data center and cloud environments. Entry pricing for the new storage starts at $19,000, which IBM describes as cost-conscious.

IBM All-Flash for the midrange

In addition, IBM announced Flash In, a no-cost storage migration program targeting Dell/EMC customers that IBM hopes will bail out of the merged operation.

SDS in the form of IBM Spectrum Virtualize is central to making IBM’s latest all-flash offerings work for the broad set of use cases IBM envisions.  As IBM puts it: organizations today are embracing all-flash storage to deliver speed and response times necessary to support growing data workloads across public, private, and hybrid cloud environments, as well as the emerging demands of cognitive applications and workloads.

IBM Spectrum Virtualize promises to improve storage efficiency through features such as real-time data compression, thin provisioning, and snapshotting across nearly 400 different storage arrays from a multitude of vendors. That means organizations can leverage, even repurpose, physical storage capacity they already have as they scramble to meet the storage needs of new workloads.
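Thin provisioning is the simplest of these features to illustrate: a volume advertises a large virtual capacity to hosts but consumes physical extents only as blocks are first written. A conceptual sketch (the extent size and interface are illustrative, not how Spectrum Virtualize is actually implemented):

```python
# Conceptual thin-provisioning sketch; extent size and interface are
# illustrative assumptions, not Spectrum Virtualize internals.
class ThinVolume:
    def __init__(self, virtual_gb: int, extent_gb: int = 1):
        self.virtual_gb = virtual_gb   # capacity advertised to hosts
        self.extent_gb = extent_gb
        self.allocated = set()         # extents backed by real storage

    def write(self, offset_gb: int):
        # Allocate the backing extent only on first write.
        self.allocated.add(offset_gb // self.extent_gb)

    def physical_gb(self) -> int:
        return len(self.allocated) * self.extent_gb

vol = ThinVolume(virtual_gb=1000)
for offset in (0, 0, 512):             # rewrites consume nothing new
    vol.write(offset)
print(vol.virtual_gb, vol.physical_gb())  # 1000 advertised, 2 consumed
```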

Spectrum Virtualize also optimizes data security, reliability, and operational costs. For example, the software automatically tiers and migrates data from one storage array to another, provides secure data-at-rest encryption, and remotely replicates data for disaster recovery and business continuity.

The announcement centers on two products, the enterprise-class IBM Storwize V7000F and the midsize IBM Storwize V5030F, which promise enterprise-class availability and function in mid-range and entry-level all-flash storage arrays. At the same time, both offer greater performance and require less time to provision and optimize. Coincidentally, IBM has just been recognized, for the third year in a row, as a leader for flash storage in the Gartner Magic Quadrant for Solid-State Arrays (SSA).

Specifically, the all-flash IBM Storwize V7000F improves performance by up to 45 percent and supports four times the clustering for scale-out and scale-up growth to help organizations manage rapidly growing datasets. The midrange and entry-level all-flash IBM Storwize V5030F offers high performance and availability at a discounted entry point (noted above) to help clients control costs.

The all-flash Storwize V7000F and Storwize V5030F are also built to manage a variety of primary storage workloads, from database management systems, such as SQL Server and MySQL, to digital media sources that include broadcast, real-time streaming, and video surveillance. The new technology can also handle huge data volumes, such as IoT data.

Given the product line confusion that typically characterizes big technology platform mergers, IBM is looking to entice some Dell or, more likely, EMC storage customers to the new Storwize offerings. To that end, IBM is offering what it describes as a no-cost migration initiative for organizations that are not current IBM customers and seeking a smooth transition path from their EMC or Dell storage to the IBM family of all-flash arrays. BTW: EMC is a leading provider of z System storage.

While too early to spot any Dell or EMC customer response, one long time IBM customer, Royal Caribbean Cruises Ltd, has joined the flash storage party. “With ever increasing volumes of customer and operational information, flexible and secure data storage is crucial to keeping our operation afloat (hope the pun was intended) as our company expands to hundreds of destinations worldwide,” said Leonardo Irastorza, Technology Revitalization & Global Shared Services Manager. The cruise line is counting on IBM flash storage to play a critical role, especially when it comes to ensuring exceptional guest experiences across its brands.

And more is coming: IBM released the following statement of direction: IBM intends to enhance IBM Spectrum Virtualize with additional capabilities for flash drive optimization and management. These capabilities are intended to help increase the service life and usability of flash drives, particularly read-intensive flash drives. The planned capabilities will likely include:

  • Data deduplication for workloads and use cases where it complements IBM’s existing industry-leading compression technology
  • Improved flash memory management (mainly for garbage collection)
  • Additional flash drive wear management and reporting.
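The planned deduplication capability works, in broad strokes, by storing each unique data block only once and keeping a recipe of block references. A minimal, purely illustrative Python sketch follows (fixed-size blocks and SHA-256 hashes; this is not IBM’s actual algorithm, which it has not disclosed):

```python
import hashlib

def dedup_store(data: bytes, block_size: int = 4096):
    """Store only unique blocks; return (store, recipe) where the
    recipe lists the hash of each logical block in order."""
    store = {}    # hash -> block bytes (each unique block written once)
    recipe = []   # ordered hashes needed to reconstruct the original
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks stored once
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the original data from the recipe."""
    return b"".join(store[d] for d in recipe)

# A workload with heavy repetition dedupes well:
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, recipe = dedup_store(data)
# 4 logical blocks, but only 2 unique blocks actually stored
```

The payoff is exactly the one IBM is after: repetitive workloads consume physical capacity proportional to their unique content, not their logical size, which is why deduplication complements rather than replaces compression.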

By implementing these capabilities in IBM Spectrum Virtualize, IBM makes them available across the IBM Storwize family, FlashSystem V9000, and SAN Volume Controller offerings, as well as VersaStack (the IBM/Cisco collaboration) and IBM PurePower systems.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads.  In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they began driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, the savings should start to add up, if they haven’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ, and WebSphere Application Server (WAS).  An eligible transaction is one classified as public cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions identified as originating from a recognized public cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, and IBM Bluemix.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first billing eligible under this program starts Dec. 1, 2016.
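How that 60 percent reduction might play out for a single reporting hour can be sketched. The function below is a hypothetical illustration of the arithmetic, not IBM’s actual billing formula, which involves the fine print the post alludes to:

```python
def zwpc_adjusted_msu(hourly_msu, cloud_fraction, discount=0.60):
    """Illustrative zWPC-style adjustment: the MSU value attributed
    to eligible public-cloud transactions in a reporting hour is
    reduced by the discount before Sub-Capacity billing. The function
    name and formula are this blog's sketch, not IBM's method."""
    cloud_msu = hourly_msu * cloud_fraction      # eligible portion
    other_msu = hourly_msu - cloud_msu           # everything else
    return other_msu + cloud_msu * (1 - discount)

# An hour peaking at 500 MSU with 40 percent cloud-originated work:
# 300 MSU regular + 200 MSU discounted to 80 -> 380 MSU billed
adjusted = zwpc_adjusted_msu(500, 0.4)
```

Even on this toy model the effect is visible: the bigger the share of cloud-originated transactions in your peak hours, the bigger the bite the discount takes out of the adjusted Sub-Capacity value.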

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode.  The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, including the variable WLC (VWLC) and the Advanced (AWLC) and Entry (EWLC) variants, align with most of the z machines introduced over the past couple of years.  The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and its associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques.  Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount.  Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly.  BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
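The interplay between the peak 4-hour rolling average that drives the monthly bill and a Defined Capacity soft cap can be sketched as follows. This is an illustrative toy using hourly samples; the real calculation runs on finer-grained SMF interval data and considerably more fine print:

```python
from collections import deque

def rolling_4hr_peak(msu_by_hour, defined_capacity=None):
    """Sketch of sub-capacity billing logic: the charge is driven by
    the peak 4-hour rolling average of MSU consumption; an optional
    Defined Capacity (soft cap) bounds what gets reported."""
    window = deque(maxlen=4)
    peak = 0.0
    for msu in msu_by_hour:
        window.append(msu)
        if len(window) < 4:
            continue  # need a full 4-hour window before averaging
        avg = sum(window) / 4
        if defined_capacity is not None:
            avg = min(avg, defined_capacity)  # soft cap in effect
        peak = max(peak, avg)
    return peak

usage = [100, 100, 400, 100, 100]  # one rogue spike in hour 3
# uncapped billing peak: (100 + 100 + 400 + 100) / 4 = 175 MSU
# with a Defined Capacity of 150 MSU, the reported peak is capped at 150
```

The sketch also shows why the workload-redistribution mini industry exists: smoothing that single 400 MSU spike across off-peak hours lowers the billable peak without buying anything.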

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Leads in TBR Private and Hybrid Cloud Surveys

August 4, 2016

IBM has been named number one in private clouds by independent technology market research firm Technology Business Research (TBR) as well as number one in TBR’s hybrid cloud environments survey. Ironically, even as IBM has been trying to distance itself from its legacy platform heritage, that heritage brings an advantage when it comes to clouds for some customers. “A footprint in legacy IT solutions and management is a strong predictor of private cloud vendor success, as private cloud solutions are typically the first step toward hybrid IT environments,” wrote TBR Cloud Senior Analyst Cassandra Mooshian.

1800FLOWERS Taps IBM Commerce Cloud

Courtesy of IBM: 1800 FLOWERS Taps IBM Cloud

Coming out on top in IBM’s 2Q16 financials, reported here, were the company’s strategic initiatives, mainly cloud, analytics, and mobile, which generated positive revenue results. The TBR reports provide welcome reinforcement for IBM strategy doubters. As reported by IBM, the annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent while security revenue increased 18 percent.
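Those run-rate figures imply an impressive growth rate; a quick back-of-the-envelope check:

```python
# Annual run rate for cloud as-a-service revenue, per IBM's report
run_rate_2q15 = 4.5  # $ billions, 2Q 2015
run_rate_2q16 = 6.7  # $ billions, 2Q 2016

growth = (run_rate_2q16 - run_rate_2q15) / run_rate_2q15
# just under 49 percent year-over-year growth
```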

The TBR report also noted IBM leadership in overall vendor adoption for private cloud and in select private cloud segments due to its broad cloud and IT services portfolio, its variety of deployment options, and accompanying integration and optimization support. As a result, the company’s expertise and knowledge of both cloud and legacy technology make it easier for customers to opt for an IBM migration path to both private and hybrid clouds.

TBR also specifically called out IBM’s cloud-friendly capabilities, including the comprehensive portfolio of cloud and hardware assets with security; cloud professional services that can span a customer’s entire IT environment; and a vertical approach to cloud combined with Watson technology. As for hybrid clouds, Kelsey Mason, Cloud Analyst at TBR, noted in the announcement: “Hybrid integration is the next stage in cloud adoption and will be the end state for many enterprise IT environments.” Enterprise hybrid adoption, TBR observed, now matches public adoption of a year ago, which it interprets as signaling a new level of maturity in companies’ cloud strategies.

What really counts, however, are customers who vote with their checkbooks.  Here IBM has been racking up cloud wins. For example, Pratt & Whitney, a United Technologies Corp. company in July announced it will move the engine manufacturer’s business, engineering, and manufacturing enterprise systems to a fully managed and supported environment on the IBM Cloud infrastructure.

Said Brian Galovich, vice president and chief information officer, Pratt & Whitney, in the published announcement:  “Working with IBM and moving our three enterprise systems to a managed cloud service will give us the ability to scale quickly and meet the increased demands for computing services, data processing and storage based on Pratt & Whitney’s forecasted growth over the next decade.”

Also in July, Dixons Carphone Group, Europe’s largest telecommunications retail and services company as the result of a 2014 merger, announced plans to migrate to the IBM Cloud from IBM datacenters in the United Kingdom to integrate two distinct infrastructures and enable easy scaling to better manage the peaks and valleys of seasonal shopping trends. Specifically, the company expects to migrate about 2,500 server images from both enterprises with supporting database and middleware components from both infrastructures to an IBM hybrid cloud platform that comprises a private IBM Cloud with bare metal servers for production workloads and public IBM Cloud platform for non-production workloads.

As a merged company it saw an opportunity to consolidate the infrastructures by leveraging cloud solutions for flexibility, performance and cost savings. After assessing the long-term values and scalability of multiple cloud providers, the company turned to IBM Cloud for a smooth transition to a hybrid cloud infrastructure. “We can trust IBM Cloud to seamlessly integrate the infrastructures of both companies into one hybrid cloud that will enable us to continue focusing on other parts of the business,” said David Hennessy, IT Director, Dixons Carphone, in the announcement.

As IBM’s 2Q16 report makes clear, once both these companies might have bought new IBM hardware platforms, but that’s not the world today. At least they didn’t opt for AWS or Azure.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999) of availability and real-time-compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all-flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Red Book titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need to transfer data across platforms. When it comes to the DeepFlash 150, IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
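The math the post invites is straightforward, using the list figures from the announcement (decimal units assumed; the price covers only the basic hardware platform, not Spectrum Scale licensing):

```python
# Back-of-the-envelope sizing and cost for a full DeepFlash 150 rack
PRICE_PER_GB = 1.00        # "under $1/GB" basic hardware platform price
TB_PER_RACK_UNIT = 170     # stated density per rack unit
MAX_PB_PER_RACK = 7        # stated maximum flash in a single rack

capacity_gb = MAX_PB_PER_RACK * 1_000_000     # 7 PB expressed in GB
hardware_cost = capacity_gb * PRICE_PER_GB    # about $7 million
rack_units_needed = MAX_PB_PER_RACK * 1000 / TB_PER_RACK_UNIT  # ~41U
```

So a maxed-out rack runs on the order of $7 million before software, which is why “isn’t going to come cheap” is putting it mildly.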

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyper-scale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.
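Reduced to its essentials, tiering means identifying cold data and pushing it to cheaper object storage while hot data stays on flash. A toy sketch of such a policy follows; the dict layout and idle threshold are this blog’s illustration, since a real Spectrum Scale policy is written in its own SQL-like policy language, not Python:

```python
import time

def select_for_cloud_tier(files, days_idle=90, now=None):
    """Toy tiering policy: files untouched for the idle window become
    candidates for migration to cloud object storage. In transparent
    tiering a stub stays in the namespace, so applications still see
    one view of all data."""
    now = time.time() if now is None else now
    cutoff = now - days_idle * 86400  # idle window in seconds
    return [f["name"] for f in files if f["last_access"] < cutoff]

now = 1_000_000_000
files = [
    {"name": "hot.dat",  "last_access": now - 1 * 86400},    # used yesterday
    {"name": "cold.dat", "last_access": now - 200 * 86400},  # idle ~200 days
]
candidates = select_for_cloud_tier(files, now=now)  # -> ["cold.dat"]
```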

A TechTarget report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from its cloud, analytics, and engagement units increased 12 percent year to year.

IBM Quantum Computing Lab - Friday, April 29, 2016, Yorktown Heights, NY (Jon Simon/Feature Photo Service for IBM)

IBM Quantum Experience delivered via Cloud (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder: revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z.  In the past year IBM introduced the new LinuxONE and, more recently, the z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that will drive product dynamics upward. IBM showed a POWER roadmap going all the way out to the POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.

IBM indirectly is trying to boost the z and the cloud. CSC and IBM announced an alliance in which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let them sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft share price up 4% at one point in after hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, the Google Compute Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE) is the last of the big public enterprise platform vendors, along with IBM and Oracle. (Dell is private and acquired EMC.) HPE recently reported its best quarter in years: second quarter net revenue of $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive losing quarters is boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

