IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?


Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM’s systems for what it calls the hybrid cloud era are three technologies with which most of us should already be generally familiar:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications.  In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform. An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications, allowing detailed, easy management of data copies. Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.

About the only thing missing above is LinuxONE but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee, Hadoop already runs natively on the z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat’s portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. Red Hat also features IBM Power Systems as a component of its hybrid cloud strategy, spanning platform infrastructure located both on and off an enterprise’s premises.

There is enough openness to enable you to do what you need, so DancingDinosaur has no real complaints with IBM’s choices above. Just wish it were more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Applies Fit-For-Purpose to Accelerating Hybrid Cloud Adoption

September 16, 2016

Almost every company is using the cloud, but not for everything, according to a recently announced study from IBM’s Institute for Business Value (IBV). Although 78 percent of the executives surveyed reported cloud initiatives as coordinated or fully integrated, almost half of computing workloads are expected to remain on dedicated, on-premises servers. This forces IT managers to consider applying IBM’s old server fit-for-purpose methodology to the cloud era, something mainframe shops haven’t seen since 2009-2010 with the arrival of hybrid mainframes.


Given cloud evolution, notes the study researchers, it is imperative that organizations determine and regularly re-assess which combination of traditional IT, public cloud, and private cloud best suits their needs. This sounds eerily similar to IBM’s fit-for-purpose mantra that it used to steer buyers between the company’s various server offerings: enterprise mainframes, midrange machines, and entry level PC servers. Only in today’s cloud era, the choices are on-premises private clouds, off-premises public clouds, and hybrid clouds.

Picking up the fit-for-purpose mantra, IBM’s Marie Wieck said:  “Enterprises are moving to the cloud, especially hybrid cloud, faster than anyone expected to support their digital transformation, drive business model innovation and fuel growth.” She concludes by rephrasing the old server fit-for-purpose pitch for the cloud era: “Successful clients have integrated plans to place workloads where they fit best, and IBM’s hybrid cloud strategy provides this on-ramp for flexibility and growth.”

DancingDinosaur has no problem with the fit-for-purpose approach.  It has proven immeasurably useful to me over the years when pressed to explain differences between IBM platforms. That the company is starting to apply it to various ways of deploying cloud computing seems both fitting and ironic. It was the emergence of the Internet in the first place as a forerunner to the cloud that saved IBM from the incompatibilities of its myriad platforms. This was after IBM’s internal effort to make its systems at least interoperate in some rudimentary way, no matter how kludgy, failed. This happened under an initiative called Systems Application Architecture (SAA), which started in the late 1980s and was gone and forgotten by 2000. Vestiges of SAA actually may still linger in some systems. Read about it here.

It should be no surprise that the cloud has emerged as a strategic imperative for IBM. In addition to being where the revenue is heading, it is doing good things for its customers. The latest study finds that the top reasons executives cite for adopting hybrid cloud solutions are: lowering total cost of ownership (54 percent), facilitating innovation (42 percent), enhancing operational efficiencies (42 percent), and enabling them to more readily meet customer expectations (40 percent).

Furthermore, the researchers found that organizations are steadily increasing their use of cloud technologies to address wide-ranging requirements. Specifically companies reported undertaking cloud initiatives that enabled them to expand into new industries (76 percent), create new revenue sources (71 percent), and create and support new business models (69 percent).

Given the widely varying possible starting points from which any company might begin its cloud journey, the fit-for-purpose approach makes even more sense. As the researchers noted: The particular needs and business conditions of each enterprise help define its optimal hybrid solution: most often, a blend of public cloud, private cloud, and traditional IT services. Finding the right cloud technology mix or deployment approach starts with deciding what to move to the cloud and addressing the challenges affecting migration. In the study, executives achieved the strongest results by integrating cloud initiatives company-wide, and by tapping external resources for access to reliable skills and greater efficiency.

Cloud computing undeniably is hot. According to IDC, worldwide spending on public cloud services is expected to grow from $96.5 billion this year to more than $195 billion in 2020. Even as cloud adoption matures and expands, organizations surveyed expect that about 45 percent of their workloads will continue to need on-premises, dedicated servers – nearly the same percentage as both today and two years ago. Clearly organizations are reluctant to give up the control and security they retain by keeping certain systems on-premises, behind their own firewalls.

Hybrid cloud solutions promise a way to move forward incrementally. Hybrid clouds by definition include a tailored mix of on-premises and public cloud services intended to work in unison and are expected to be widely useful across industries. Each organization’s unique business conditions and requirements will define its optimal hybrid technology landscape. Each organization’s managers will have to balance cost, risk, and performance considerations with customer demands for innovation, change, and price along with its own skills and readiness for change. Sounds like fit-for-purpose all over again.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Revamped IBM Power Systems LC Takes on x86

September 9, 2016

To hear IBM tell it, its revamped and refreshed Power Systems LC lineup will undermine x86 (Intel), HPE, Dell/EMC, and any other purveyor of x86-based systems. Backed by accelerators provided by OpenPOWER community members, IBM appears ready to extend the x86 battle to on premises, in the cloud, and the hybrid cloud. It promises to deliver better performance at lower cost for all the hot workloads too: artificial intelligence, deep learning, high performance data analytics, and compute-heavy workloads.


Two POWER8 processors, 1U config, priced 30% less than an x86 server

Almost a year ago, in October 2015, DancingDinosaur covered IBM’s previous Power Systems LC announcement here. The LC designation stands for Linux Community, and the company is tapping accelerators and more from the OpenPOWER community, just as it did with its recent announcement of POWER9, expected in 2017, here.

The new Power LC systems feature a set of community delivered technologies IBM has dubbed POWERAccel, a family of I/O technologies designed to deliver composable system performance enabled by accelerators. For GPU acceleration, NVIDIA’s NVLink delivers nearly 5x better integration between POWER processors and NVIDIA GPUs. For FPGA acceleration, IBM tapped its own CAPI architecture to integrate accelerators that run natively as part of the application.

This week’s Power Systems LC announcement features three new machines:

  • S821LC (pictured above)—includes 2 POWER8 sockets in a 1U enclosure, intended for environments requiring dense computing.
  • S822LC for big data—brings 2 POWER8 sockets for big data workloads and adds big data acceleration through CAPI and GPUs.
  • S822LC for HPC—intended for high performance computing, it incorporates the new POWER8 processor with NVIDIA NVLink to deliver 2.8x the bandwidth to GPU accelerators and up to 4 integrated NVIDIA Pascal GPUs.

POWER8 with NVLink delivers 2.8x the bandwidth of a PCIe data pipe. According to figures provided by IBM comparing the price-performance of the Power S822LC for HPC (20-core, 256 GB, 4x Pascal) with a Dell C4130 (20-core, 256 GB, 4x K80), measured by total queries per hour (qph), the Power System delivered 2.1x better price-performance. The Power Systems server costs more ($66,612 vs. $57,615 for the Dell) but delivered 444 qph vs. Dell’s 185 qph.
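Those figures are easy to sanity-check. Below is a quick back-of-the-envelope sketch in Python, using only the numbers quoted above and treating price-performance simply as queries per hour per dollar:

```python
# Price-performance arithmetic using IBM's published figures.
# "Price-performance" here is just qph per dollar normalized to the
# Dell box; nothing fancier is needed to reproduce the 2.1x claim.

systems = {
    "IBM Power S822LC for HPC": {"cost_usd": 66_612, "qph": 444},
    "Dell C4130":               {"cost_usd": 57_615, "qph": 185},
}

def qph_per_dollar(system):
    return system["qph"] / system["cost_usd"]

ibm = qph_per_dollar(systems["IBM Power S822LC for HPC"])
dell = qph_per_dollar(systems["Dell C4130"])
print(f"advantage: {ibm / dell:.1f}x")   # ~2.1x, matching IBM's number
```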

The story plays out similarly for big data workloads running MongoDB on the IBM Power S822LC for big data (20-core, 128 GB) vs. an HP DL380 (20-core, 128 GB). Here the system cost (server, OS, MongoDB annual subscription) came to $24,870 for IBM Power and $29,915 for HP. Power provided 40% more performance at a 31% lower hardware/maintenance cost.

When it comes to the cloud, the new IBM Power Systems LC offerings get even more interesting from a buyer’s standpoint. IBM declared the cloud a strategic imperative about two years ago and needs to demonstrate adoption that can rival the current cloud leaders: AWS, Google, and Microsoft (Azure). To that end IBM has started to tack on free cloud usage.

For example, during the industry analyst launch briefing IBM declared: Modernize your Power infrastructure for the cloud, get access to IBM Cloud for free, and cut your current operating costs by 50%. Whether you’re talking on-premises cloud or hybrid infrastructure, the freebies just keep coming. The free built-in cloud deployment service options include:

  • Cloud Provisioning and Automation
  • Infrastructure as a Service
  • Cloud Capacity Pools across Data Centers
  • Hybrid Cloud with BlueMix
  • Automation for DevOps
  • Database as a Service

These cover both on-premises infrastructure, where you can transform your traditional infrastructure with automation, self-service, and elastic consumption models, and hybrid infrastructure, where you can securely extend to the public cloud with rapid access to compute services and API integration. Other freebies include open source automation, installation and configuration recipes, cross data center inventory, performance monitoring via the IBM Cloud, optional DR as a service for Power, and free access and capacity flexibility with SoftLayer (12-month starter pack).

Will the new LC line and its various cloud freebies get the low cost x86 monkey off IBM’s back? That’s the hope in Armonk. The new LC servers can be acquired at a lower price and can deliver 80% more performance per dollar spent over x86-based systems, according to IBM. This efficiency enables businesses and cloud service providers to lower costs and combat data center sprawl.

DancingDinosaur has developed TCO and ROI analyses comparing mainframe and Power systems to x86 for a decade, maybe more.  A few managers get it, but most, or their staff, have embedded bias and will never accept non-x86 machines. To them, any x86 system always is cheaper regardless of the specs and the math. Not sure even free will change their minds.

The new Power Systems LC lineup is price-advantaged over comparably configured Intel x86-based servers, costing 30% less in some configurations. Online LC pricing begins at $5,999. Additional models with smaller configurations sport lower pricing through IBM Business Partners. All but the HPC machine are available immediately. The HPC machine will ship Sept. 26.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Meet the POWER9 Chip Family

September 2, 2016

When you looked at a chip in the past you primarily were concerned with two things: the speed of the chip, usually expressed in GHz, and how much power it consumed. Today the IBM engineers preparing the newest POWER chip, the 14nm POWER9, are tweaking the chip for the different workloads it might run, such as cognitive or cloud, different deployment options, such as scale-up or scale-out, and a host of other attributes. EE Times described it in late August from the Hot Chips conference where it was publicly unveiled.


IBM POWER9 chip

IBM describes it as a chip family, but maybe it’s best described as the product of an entire chip community, the OpenPOWER Foundation. Innovations include CAPI 2.0, New CAPI, NVIDIA’s NVLink 2.0, PCIe Gen4, and more. It spans a range of acceleration options from HSDC clusters to extreme virtualization capabilities for the cloud. POWER9 is not just about high speed transaction processing; IBM wants the chip to interpret and reason, ingest and analyze.

POWER has gone far beyond the POWER chips that enabled Watson to (barely) beat the human Jeopardy champions. Going forward, IBM is counting on POWER9 and Watson to excel at cognitive computing, a combination of high speed analytics and self-learning. POWER9 systems should not only be lightning fast but get smarter with each new transaction.

For z System shops, POWER9 offers a glimpse into the design thinking IBM might follow with the next mainframe, probably the z14 that will need comparable performance and flexibility. IBM already has set up the Open Mainframe Project, which hasn’t delivered much yet but is still young. It took the Open POWER group a couple of years to deliver meaningful innovations. Stay tuned.

The POWER9 chip is incredibly dense (below). You can deploy it in either a scale-up or scale-out architecture, and you have a choice of one version for two-socket servers with 8 direct DDR4 ports and another for multiple chips per server with buffered DIMMs.


IBM POWER9 silicon layout

IBM describes the POWER9 as a premier acceleration platform. That means it offers extreme processor/accelerator bandwidth and reduced latency; coherent memory and virtual addressing capability for all accelerators; and robust accelerated compute options through the OpenPOWER community.

It includes state-of-the-art I/O and acceleration attachment signaling (the lane math is checked in the sketch after this list):

  • PCIe Gen 4 x 48 lanes – 192 GB/s duplex bandwidth
  • 25G Link x 48 lanes – 300 GB/s duplex bandwidth
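Those duplex figures fall straight out of lane math. Here’s a minimal sketch, assuming roughly 2 GB/s per PCIe Gen4 lane per direction (16 GT/s with 128b/130b encoding) and raw 25 Gb/s signaling per 25G Link lane:

```python
# Back-of-the-envelope check on the duplex bandwidth numbers above.

def duplex_gb_per_s(lanes, gb_per_lane_per_direction):
    # Duplex counts both directions at once.
    return lanes * gb_per_lane_per_direction * 2

pcie_gen4 = duplex_gb_per_s(48, 2.0)       # ~2 GB/s usable per Gen4 lane
link_25g  = duplex_gb_per_s(48, 25 / 8)    # 25 Gb/s raw = 3.125 GB/s

print(f"PCIe Gen4 x48: ~{pcie_gen4:.0f} GB/s duplex")  # ~192 GB/s
print(f"25G Link x48:  ~{link_25g:.0f} GB/s duplex")   # 300 GB/s
```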

And robust accelerated compute options based on open standards, including:

  • On-Chip Acceleration—Gzip x1, 842 Compression x2, AES/SHA x2
  • CAPI 2.0—4x bandwidth of POWER8 using PCIe Gen 4
  • NVLink 2.0—next generation of GPU/CPU bandwidth and integration using 25G Link
  • New CAPI—high bandwidth, low latency and open interface using 25G Link

In scale-out mode it employs direct attached memory through 8 direct DDR4 ports, which deliver:

  • Up to 120 GB/s of sustained bandwidth
  • Low latency access
  • Commodity packaging form factor
  • Adaptive 64B / 128B reads

In scale-up mode it uses buffered memory through 8 buffered channels to provide:

  • Up to 230GB/s of sustained bandwidth
  • Extreme capacity – up to 8TB / socket
  • Superior RAS with chip kill and lane sparing
  • Compatible with POWER8 system memory
  • Agnostic interface for alternate memory innovations

POWER9 was publicly introduced at the Hot Chips conference in late August. Commentators writing in EE Times noted that POWER9 could become a breakout chip, seeding new OEM and accelerator partners and rejuvenating IBM’s efforts against Intel in high-end servers. To achieve that kind of performance IBM deploys large chunks of memory, including a 120 Mbyte embedded DRAM in shared L3 cache, while riding a 7 Tbit/second on-chip fabric. POWER9 should deliver as much as 2x the performance of the POWER8 or more when the new chip arrives next year, according to Brian Thompto, a lead architect for the chip, in published reports.

As noted above, IBM will release four versions of POWER9. Two will use eight threads per core and 12 cores per chip geared for IBM’s Power virtualization environment; two will use four threads per core and 24 cores/chip targeting Linux. Both will come in two versions — one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.
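Read as a matrix, the four versions are simply two core/thread configurations crossed with two memory-attach options. A quick illustrative enumeration (the labels are mine, not IBM part numbers) also shows that both core configurations land at the same 96 hardware threads per chip:

```python
from itertools import product

# Two core/thread configurations x two memory-attach options = 4 versions.
core_configs = [
    ("SMT8", 8, 12),   # 8 threads/core x 12 cores, aimed at PowerVM
    ("SMT4", 4, 24),   # 4 threads/core x 24 cores, aimed at Linux
]
memory_options = ["direct DDR4 (scale-out)", "buffered DIMM (scale-up)"]

for (smt, threads, cores), memory in product(core_configs, memory_options):
    print(f"POWER9 {smt}: {cores} cores x {threads} threads "
          f"= {cores * threads} threads/chip, {memory}")
```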

The diversity of choices, according to Hot Chips observers, could help attract OEMs. IBM has been trying to encourage others to build POWER systems through its OpenPOWER group that now sports more than 200 members. So far, it’s gaining most interest from China where one partner plans to make its own POWER chips. The use of standard DDR4 DIMMs on some parts will lower barriers for OEMs by enabling commodity packaging and lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

 

IBM’s Latest Flash Announcements Target Dell/EMC

August 26, 2016

The newest IBM storage, announced here earlier this week, aims to provide small, midsize, and global enterprises with virtualized SDS for primary storage and for cloud or cognitive applications and workloads. Central to the effort is IBM Spectrum Virtualize, which automates Storwize all-flash solutions intended to reduce the cost and complexity of data center and cloud environments. Entry pricing for the new storage starts at $19,000, which IBM describes as cost-conscious.

IBM All-Flash for the midrange

In addition, IBM announced Flash In, a no-cost storage migration program targeting Dell/EMC customers that IBM hopes will bail out of the merged operation.

SDS in the form of IBM Spectrum Virtualize is central to making IBM’s latest all-flash offerings work for the broad set of use cases IBM envisions.  As IBM puts it: organizations today are embracing all-flash storage to deliver speed and response times necessary to support growing data workloads across public, private, and hybrid cloud environments, as well as the emerging demands of cognitive applications and workloads.

IBM Spectrum Virtualize promises to improve storage efficiency through features such as real-time data compression, thin provisioning, and snapshotting across nearly 400 different storage arrays from a multitude of vendors. That means organizations can leverage, even repurpose, physical storage capacity they already have as they scramble to meet the storage needs of new workloads.

Spectrum Virtualize also optimizes data security, reliability, and operational costs. For example, the software automatically tiers and migrates data from one storage array to another, provides secure data-at-rest encryption, and remotely replicates data for disaster recovery and business continuity.

The announcement centers on two products, the enterprise-class IBM Storwize V7000F and the midsize IBM Storwize V5030F, which promise enterprise-class availability and function in mid-range and entry-level all-flash storage arrays. At the same time, both offer greater performance and require less time to provision and optimize systems. Coincidentally, IBM has just been recognized, for the third year in a row, as a leader for flash storage in the Gartner Magic Quadrant for Solid-State Arrays (SSA).

Specifically, the all-flash IBM Storwize V7000F improves performance by up to 45 percent and supports four times the clustering for scale-out and scale-up growth to help organizations manage rapidly growing datasets. The midrange and entry-level all-flash IBM Storwize V5030F offers high performance and availability at a discounted entry point (noted above) to help clients control costs.

The all-flash Storwize V7000F and Storwize V5030F are also built to manage a variety of primary storage workloads, from database management systems, such as SQL Server and MySQL, to digital media sources that include broadcast, real-time streaming, and video surveillance. The new technology can also handle huge data volumes, such as IoT data.

Given the product line confusion that typically characterizes big technology platform mergers, IBM is looking to entice some Dell or, more likely, EMC storage customers to the new Storwize offerings. To that end, IBM is offering what it describes as a no-cost migration initiative for organizations that are not current IBM customers and seeking a smooth transition path from their EMC or Dell storage to the IBM family of all-flash arrays. BTW: EMC is a leading provider of z System storage.

While it is too early to spot any Dell or EMC customer response, one long time IBM customer, Royal Caribbean Cruises Ltd., has joined the flash storage party. “With ever increasing volumes of customer and operational information, flexible and secure data storage is crucial to keeping our operation afloat (hope the pun was intended) as our company expands to hundreds of destinations worldwide,” said Leonardo Irastorza, Technology Revitalization & Global Shared Services Manager. The cruise line is counting on IBM flash storage to play a critical role, especially when it comes to ensuring exceptional guest experiences across its brands.

And more is coming: IBM released the following statement of direction: IBM intends to enhance IBM Spectrum Virtualize with additional capabilities for flash drive optimization and management. These capabilities are intended to help increase the service life and usability of flash drives, particularly read-intensive flash drives. The planned capabilities will likely include:

  • Data deduplication for workloads and use cases where it complements IBM’s existing industry leading compression technology
  • Improved flash memory management (mainly for garbage collection)
  • Additional flash drive wear management and reporting.

By implementing these capabilities in IBM Spectrum Virtualize, they will be available for the IBM Storwize family, FlashSystem V9000, and SAN Volume Controller offerings as well as VersaStack (the IBM/Cisco collaboration) and IBM PurePower systems.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Taps PCM to Advance Neuron-based Cognitive Computing

August 19, 2016

Just a couple of months ago DancingDinosaur reported a significant IBM advance in phase change memory (PCM). Then earlier this month IBM announced success in creating randomly spiking neurons using phase-change materials to store and process data. According to IBM, this represents a significant step toward achieving energy-efficient, ultra-dense, integrated neuromorphic technologies for application in cognitive computing.


Phase Change Neurons

This also represents a big step toward a cognitive computer. According to IBM, scientists have theorized for decades that it should be possible to imitate the versatile computational capabilities of large populations of neurons as the human brain does. With PCM it appears to be happening sooner than the scientists expected. “We have been researching phase-change materials for memory applications for over a decade, and our progress in the past 24 months has been remarkable,” said IBM Fellow Evangelos Eleftheriou.

As the IBM researchers explain: Phase-change neurons consist of a chip with large arrays of phase-change devices that store the state of artificial neuronal populations in their atomic configuration. In the graphic above individual devices are accessed by means of an array of probes to allow for precise characterization, modeling and interrogation. The tiny squares are contact pads that are used to access the nanometer-scale, phase-change cells (not visible). The sharp probes touch the contact pads to change the phase configuration stored in the cells in response to the neuronal input. Each set of probes can access a population of 100 cells. The chip hosts only the phase-change devices that are the heart of the neurons. There are thousands to millions of these cells on one chip that can be accessed (in this particular graphic) by means of the sharp needles of the probe card.

Not coincidentally, this seems to be dovetailing with IBM’s sudden rush to cognitive computing overall, one of the company’s recent strategic initiatives that has lately moved to the forefront.  Just earlier this week IBM was updating industry analysts on the latest with Watson and IoT and, sure enough, cognitive computing plays a prominent role.

As IBM explains it, the artificial neurons designed by IBM scientists in Zurich consist of phase-change materials, including germanium antimony telluride, which exhibit two stable states, an amorphous one (without a clearly defined structure) and a crystalline one (with structure). These artificial neurons do not store digital information; they are analog, just like the synapses and neurons in our biological brain, which is what makes them so tempting for cognitive computing.

In the published demonstration, the team applied a series of electrical pulses to the artificial neurons, which resulted in the progressive crystallization of the phase-change material, ultimately causing the neurons to fire. In neuroscience, this function is known as the integrate-and-fire property of biological neurons. This forms the foundation for event-based computation and, in principle, is similar to how our brain triggers a response when we touch something hot.
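For readers who want the intuition, here is a toy leaky integrate-and-fire neuron in Python. It is only loosely analogous to the phase-change device: each electrical pulse nudges the cell toward its firing threshold (progressive crystallization), a noise term stands in for the stochastic, randomly spiking behavior IBM reports, and every constant is invented for illustration rather than measured from the devices:

```python
import random

THRESHOLD = 1.0   # firing point (full crystallization, illustratively)
LEAK = 0.98       # slow decay of the integrated state between pulses
PULSE = 0.12      # contribution of one input pulse

def run(pulses=100, seed=42):
    random.seed(seed)
    state, spike_times = 0.0, []
    for t in range(pulses):
        # Integrate: leak a little, add the pulse plus device noise.
        state = state * LEAK + PULSE + random.gauss(0, 0.02)
        if state >= THRESHOLD:
            spike_times.append(t)  # fire...
            state = 0.0            # ...and reset (re-amorphize)
    return spike_times

print("fired at pulses:", run())
```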

Even a single neuron can exploit this integrate-and-fire property to detect patterns and discover correlations in real-time streams of event-based data. To that end, IBM scientists have organized hundreds of artificial neurons into populations and used them to represent fast and complex signals. Moreover, the artificial neurons have been shown to sustain billions of switching cycles, which would correspond to multiple years of operation at an update frequency of 100 Hz. The energy required for each neuron update was less than five picojoules and the average power less than 120 microwatts (for comparison, 60 million microwatts power a 60 watt lightbulb).
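The arithmetic behind those figures is worth a quick check. Two assumptions here are mine: reading “billions of switching cycles” as roughly 10 billion, and taking the 120 microwatt average as applying to a population of neurons rather than a single cell (a single cell at 5 pJ and 100 Hz works out to only 500 picowatts):

```python
# Quick arithmetic on IBM's reported neuron figures.

PICO = 1e-12
update_energy_j = 5 * PICO      # < 5 pJ per neuron update
update_rate_hz = 100            # stated update frequency

per_neuron_w = update_energy_j * update_rate_hz
print(f"per-neuron power: {per_neuron_w / PICO:.0f} pW")   # 500 pW

cycles = 10e9                   # "billions" read as ~10 billion (assumption)
seconds = cycles / update_rate_hz
print(f"endurance at 100 Hz: ~{seconds / 3.15e7:.1f} years")  # ~3 years
```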

The examples the researchers have provided so far seem pretty conventional.  For example, IoT sensors can collect and analyze volumes of weather data collected at the network edge for faster forecasts. Artificial neurons could be used to detect patterns in financial transactions that identify discrepancies. Even data from social media can be used to discover new cultural trends in real time. To make this work, large populations of these high-speed, low-energy nano-scale neurons would most likely be used in neuromorphic coprocessors with co-located memory and processing units, effectively mixing neuron-based cognitive computing with conventional digital computing.

Makes one wonder if IBM might regret spending millions to dump its chip fabrication capabilities.  According to published reports Samsung is very interested in this chip technology and wants to put the new processing power to work fast. The processor, reportedly dubbed TrueNorth by IBM, uses 4,096 separate processing cores to form one standard chip. Each can operate independently and are designed for low power consumption. Samsung hopes  the chip can help with visual pattern recognition for use in autonomous cars, which might be just a few years away. So, how is IBM going to make any money from this with its chip fab gone and commercial cognitive computers still off in the future?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads. In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions processed by named Public cloud application transactions identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, and more.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first billing eligible under this program starts Dec. 1, 2016.
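Based strictly on that description, the mechanics might look something like the toy calculation below. The numbers are invented for illustration; real billing runs through SCRT reporting and plenty of fine print:

```python
# Toy sketch of zWPC as described above: the cloud-eligible portion of an
# hour's MSUs is reduced by 60% before it feeds the sub-capacity value.

DISCOUNT = 0.60

def adjusted_msu(total_msu, cloud_eligible_msu):
    return total_msu - cloud_eligible_msu * DISCOUNT

# Hypothetical reporting hour: 500 MSUs total, 120 of them cloud-originated.
print(adjusted_msu(500.0, 120.0))   # 428.0 MSUs count toward the hour
```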

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions, and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode. The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, including variable WLC (VWLC) along with AWLC (Advanced) and EWLC (Entry), align with most of the z machines introduced over the past couple of years. The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and its associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques. Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount. Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly. BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
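That caveat is easy to picture with a deliberately crude sketch. Real WLM apportions group capacity by weight and goal rather than first-come-first-served, but the shared-pool exposure is the same: the cap constrains the sum, not the split:

```python
GROUP_CAP_MSU = 400  # illustrative group capacity limit

def allocate(demands):
    """Grant each LPAR's MSU demand in order until the group cap runs out."""
    remaining, grants = GROUP_CAP_MSU, {}
    for lpar, demand in demands.items():
        grants[lpar] = min(demand, remaining)
        remaining -= grants[lpar]
    return grants

# A rogue transaction drives LPAR1's demand through the roof:
print(allocate({"LPAR1": 390, "LPAR2": 80, "LPAR3": 60}))
# {'LPAR1': 390, 'LPAR2': 10, 'LPAR3': 0} -- LPAR1 eats the group cap
```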

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Leads in TBR Private and Hybrid Cloud Surveys

August 4, 2016

IBM has been named number one in private clouds by independent technology market research firm Technology Business Research (TBR), as well as number one in TBR’s hybrid cloud environments survey. Ironically, as fast as IBM has been trying to distance itself from its legacy platform heritage, that heritage brings an advantage when it comes to clouds for some customers. “A footprint in legacy IT solutions and management is a strong predictor of private cloud vendor success, as private cloud solutions are typically the first step toward hybrid IT environments,” wrote TBR Cloud Senior Analyst Cassandra Mooshian.


Courtesy of IBM: 1800 FLOWERS Taps IBM Cloud

Coming out on top in IBM’s 2Q16 financials, reported here, were the company’s strategic initiatives, mainly cloud, analytics, and mobile, which generated positive revenue results. The TBR reports provide welcome reinforcement for IBM strategy doubters. As reported by IBM, the annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015. Revenues from analytics increased 5 percent. Revenues from mobile increased 43 percent while security revenue increased 18 percent.

The TBR report also noted IBM leadership in overall vendor adoption for private cloud and in select private cloud segments due to its broad cloud and IT services portfolio, its variety of deployment options, and accompanying integration and optimization support. As a result, the company’s expertise and knowledge of both cloud and legacy technology make it easier for customers to opt for an IBM migration path to both private and hybrid clouds.

TBR also specifically called out IBM’s cloud-friendly capabilities, including the comprehensive portfolio of cloud and hardware assets with security; cloud professional services that can span a customer’s entire IT environment; and a vertical approach to cloud combined with Watson technology. As for hybrid clouds, Kelsey Mason, Cloud Analyst at TBR, noted in the announcement: “Hybrid integration is the next stage in cloud adoption and will be the end state for many enterprise IT environments.” Enterprise hybrid adoption, TBR observed, now matches public adoption of a year ago, which it interprets as signaling a new level of maturity in companies’ cloud strategies.

What really counts, however, are customers who vote with their checkbooks. Here IBM has been racking up cloud wins. For example, Pratt & Whitney, a United Technologies Corp. company, announced in July that it will move the engine manufacturer’s business, engineering, and manufacturing enterprise systems to a fully managed and supported environment on the IBM Cloud infrastructure.

Said Brian Galovich, vice president and chief information officer, Pratt & Whitney, in the published announcement: “Working with IBM and moving our three enterprise systems to a managed cloud service will give us the ability to scale quickly and meet the increased demands for computing services, data processing and storage based on Pratt & Whitney’s forecasted growth over the next decade.”

Also in July, Dixons Carphone Group, Europe’s largest telecommunications retail and services company as the result of a 2014 merger, announced plans to migrate to the IBM Cloud, hosted in IBM data centers in the United Kingdom, to integrate two distinct infrastructures and enable easy scaling to better manage the peaks and valleys of seasonal shopping trends. Specifically, the company expects to migrate about 2,500 server images, with supporting database and middleware components from both infrastructures, to an IBM hybrid cloud platform that comprises a private IBM Cloud with bare metal servers for production workloads and a public IBM Cloud platform for non-production workloads.

As a merged company it saw an opportunity to consolidate the infrastructures by leveraging cloud solutions for flexibility, performance and cost savings. After assessing the long-term values and scalability of multiple cloud providers, the company turned to IBM Cloud for a smooth transition to a hybrid cloud infrastructure. “We can trust IBM Cloud to seamlessly integrate the infrastructures of both companies into one hybrid cloud that will enable us to continue focusing on other parts of the business,” said David Hennessy, IT Director, Dixons Carphone, in the announcement.

Once, both these companies might have bought new IBM hardware platforms, but as IBM’s 2Q16 report makes clear, that’s not the world today. At least they didn’t opt for AWS or Azure.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999) of availability and real-time-compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Redbook titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need to transfer data across platforms. When it comes to the DeepFlash 150 IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining, and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB, DeepFlash 150 isn’t going to come cheap. For starters, consider how many gigabytes are in the terabytes or petabytes you will want to install; you can do the math (or see the sketch below). Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
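Here is that math, using IBM’s $1/GB and 170TB-per-rack-unit figures, decimal units, and an assumed standard 42U rack:

```python
usd_per_gb = 1.0          # IBM's entry price point
tb_per_rack_unit = 170    # IBM's stated density
rack_units = 42           # assuming a standard 42U rack

capacity_tb = tb_per_rack_unit * rack_units
print(f"full rack: {capacity_tb / 1000:.1f} PB")              # ~7.1 PB
print(f"1 PB at $1/GB: ${1_000_000 * usd_per_gb:,.0f}")       # $1,000,000
print(f"full rack: ${capacity_tb * 1000 * usd_per_gb:,.0f}")  # ~$7.1M
```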

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object, and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from its cloud, analytics, and engagement units increased 12 percent year to year.


IBM Quantum Experience delivered via Cloud (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder: revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z. In the past year IBM introduced the new LinuxONE and, more recently, a new z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that would drive product cycle dynamics upward. IBM showed a POWER roadmap going all the way out to POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.

IBM indirectly is trying to boost the z and the cloud. CSC announced an alliance with IBM in which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let them sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft share price up 4% at one point in after hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, the Google Compute Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE) is the last of the big public enterprise platform vendors, along with IBM and Oracle. (Dell is private and acquired EMC.) HPE recently reported its best quarter in years: second quarter net revenue of $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive losing quarters is boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

