Posts Tagged ‘hybrid computing’

IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?


Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM's systems for what it calls the hybrid cloud era are three technologies with which we should already be generally familiar:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications. In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform (a sketch of what the receiving end might look like follows this list). An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications, allowing detailed, easy management of data copies. Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.
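What consuming that operational data might look like is worth a quick illustration. Below is a minimal, purely hypothetical Python sketch of the receiving end: it assumes the z-side feed (the Common Data Provider or similar) has been configured to push newline-delimited JSON records to a TCP port. The port and field names are placeholders, not a documented format.

```python
import json
import socketserver

class FeedHandler(socketserver.StreamRequestHandler):
    """Accept a stream of operational records and hand them off locally."""
    def handle(self):
        for line in self.rfile:               # one JSON record per line
            record = json.loads(line)
            # Replace this print with a hand-off to your analytics platform.
            print(record.get("timestamp"), record.get("source"), record.get("metric"))

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 5044), FeedHandler) as server:
        server.serve_forever()
```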

About the only thing missing above is LinuxONE but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee, Hadoop already runs natively on z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run in its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack-based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat’s portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. RHEL also enables IBM Power Systems as a featured component of Red Hat’s hybrid cloud strategy spanning platform infrastructure located both on and off an enterprise’s premises.

There is enough openness to enable you to do what you need, so DancingDinosaur has no real complaints with IBM’s choices above. Just wish it were more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Meet the POWER9 Chip Family

September 2, 2016

When you looked at a chip in the past you were primarily concerned with two things: the speed of the chip, usually expressed in GHz, and how much power it consumed. Today the IBM engineers preparing the newest POWER chip, the 14nm POWER9, are tweaking the chip for the different workloads it might run, such as cognitive or cloud, and different deployment options, such as scale-up or scale-out, and a host of other attributes. EE Times described it in late August from the Hot Chips conference, where it was publicly unveiled.


IBM POWER9 chip

IBM describes it as a chip family but maybe it’s best described as the product of an entire chip community, the OpenPOWER Foundation. Innovations include CAPI 2.0, New CAPI, Nvidia’s NVLink 2.0, PCIe Gen4, and more. It spans a range of acceleration options from HSDC clusters to extreme virtualization capabilities for the cloud. POWER9 is not just about high speed transaction processing; IBM wants the chip to interpret and reason, ingest and analyze.

POWER has gone far beyond the POWER chips that enabled Watson to (barely) beat the human Jeopardy champions. Going forward, IBM is counting on POWER9 and Watson to excel at cognitive computing, a combination of high speed analytics and self-learning. POWER9 systems should not only be lightning fast but get smarter with each new transaction.

For z System shops, POWER9 offers a glimpse into the design thinking IBM might follow with the next mainframe, probably the z14, which will need comparable performance and flexibility. IBM already has set up the Open Mainframe Project, which hasn’t delivered much yet but is still young. It took the OpenPOWER group a couple of years to deliver meaningful innovations. Stay tuned.

The POWER9 chip is incredibly dense (below). You can deploy it in either a scale-up or scale-out architecture, and you have a choice of memory configurations: one for two-socket servers with 8 direct DDR4 ports and another for multiple chips per server with buffered DIMMs.


IBM POWER9 silicon layout

IBM describes the POWER9 as a premier acceleration platform. That means it offers extreme processor/accelerator bandwidth and reduced latency; coherent memory and virtual addressing capability for all accelerators; and robust accelerated compute options through the OpenPOWER community.

It includes state-of-the-art I/O and acceleration attachment signaling (a quick sanity check on these numbers follows the list):

  • PCIe Gen 4 x 48 lanes – 192 GB/s duplex bandwidth
  • 25G Link x 48 lanes – 300 GB/s duplex bandwidth
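Those duplex figures are just lane count times per-lane signaling rate, counted in both directions. A quick sanity check in Python, using raw rates and ignoring encoding overhead:

```python
lanes = 48
pcie_gen4 = 16 / 8            # 16 GT/s per lane ~= 2 GB/s per direction
link_25g = 25 / 8             # 25 Gb/s per lane ~= 3.125 GB/s per direction

print(lanes * pcie_gen4 * 2)  # 192.0 GB/s duplex
print(lanes * link_25g * 2)   # 300.0 GB/s duplex
```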

And robust accelerated compute options based on open standards, including:

  • On-Chip Acceleration—Gzip x1, 842 Compression x2, AES/SHA x2
  • CAPI 2.0—4x bandwidth of POWER8 using PCIe Gen 4
  • NVLink 2.0—next generation of GPU/CPU bandwidth and integration using 25G Link
  • New CAPI—high bandwidth, low latency and open interface using 25G Link

In scale-out mode it employs direct attached memory through 8 direct DDR4 ports, which deliver:

  • Up to 120 GB/s of sustained bandwidth
  • Low latency access
  • Commodity packaging form factor
  • Adaptive 64B / 128B reads

In scale-up mode it uses buffered memory through 8 buffered channels to provide:

  • Up to 230GB/s of sustained bandwidth
  • Extreme capacity – up to 8TB / socket
  • Superior RAS with chip kill and lane sparing
  • Compatible with POWER8 system memory
  • Agnostic interface for alternate memory innovations

POWER9 was publicly introduced at the Hot Chips conference in late August. Commentators writing in EE Times noted that POWER9 could become a breakout chip, seeding new OEM and accelerator partners and rejuvenating IBM’s efforts against Intel in high-end servers. To achieve that kind of performance IBM deploys large chunks of memory—including a 120 Mbyte embedded DRAM in the shared L3 cache—while riding a 7 Tbit/second on-chip fabric. POWER9 should deliver as much as 2x the performance of the POWER8, maybe more, when the new chip arrives next year, according to Brian Thompto, a lead architect for the chip, in published reports.

As noted above, IBM will release four versions of POWER9. Two will use eight threads per core and 12 cores per chip, geared for IBM’s Power virtualization environment; two will use four threads per core and 24 cores per chip, targeting Linux. Each pair will come in two versions — one for two-socket servers with 8 DDR4 ports and another for multiple chips per server with buffered DIMMs.
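Interestingly, the two core/thread mixes land on exactly the same number of hardware threads per chip; the difference is simply how those threads are grouped into cores:

```python
print(8 * 12)  # SMT8 variant: 8 threads/core x 12 cores = 96 threads/chip
print(4 * 24)  # SMT4 variant: 4 threads/core x 24 cores = 96 threads/chip
```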

The diversity of choices, according to Hot Chips observers, could help attract OEMs. IBM has been trying to encourage others to build POWER systems through its OpenPOWER group that now sports more than 200 members. So far, it’s gaining most interest from China where one partner plans to make its own POWER chips. The use of standard DDR4 DIMMs on some parts will lower barriers for OEMs by enabling commodity packaging and lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM’s Latest Flash Announcements Target Dell/EMC

August 26, 2016

The newest IBM storage, announced here earlier this week, aims to provide small, midsize, and global enterprises with virtualized SDS for primary storage and for cloud or cognitive applications and workloads. Central to the effort is IBM Spectrum Virtualize, which automates Storwize all-flash solutions intended to reduce the cost and complexity of data center and cloud environments. Entry pricing for the new storage starts at $19,000, which IBM describes as cost-conscious.

IBM All-Flash for the midrange

In addition, IBM announced Flash In, a no-cost storage migration program targeting Dell/EMC customers who IBM hopes will bail out of the merged operation.

SDS in the form of IBM Spectrum Virtualize is central to making IBM’s latest all-flash offerings work for the broad set of use cases IBM envisions.  As IBM puts it: organizations today are embracing all-flash storage to deliver speed and response times necessary to support growing data workloads across public, private, and hybrid cloud environments, as well as the emerging demands of cognitive applications and workloads.

IBM Spectrum Virtualize promises to improve storage efficiency through features such as real-time data compression, thin provisioning, and snapshotting across nearly 400 different storage arrays from a multitude of vendors. That means organizations can leverage, even repurpose, physical storage capacity they already have as they scramble to meet the storage needs of new workloads.

Spectrum Virtualize also optimizes data security, reliability, and operational costs. For example, the software automatically tiers and migrates data from one storage array to another, provides secure data-at-rest encryption, and remotely replicates data for disaster recovery and business continuity.

The announcement centers on two products, the enterprise-class IBM Storwize V7000F and the midrange IBM Storwize V5030F, which promise enterprise-class availability and function in mid-range and entry-level all-flash storage arrays. At the same time, both offer greater performance and require less time to provision and optimize. Coincidentally, IBM has just been recognized, for the third year in a row, as a leader for Flash Storage in the Gartner Magic Quadrant for Solid-State Arrays (SSA).

Specifically, the all-flash IBM Storwize V7000F improves performance by up to 45 percent and supports four times the clustering for scale-out and scale-up growth to help organizations manage rapidly growing datasets. The midrange and entry-level all-flash IBM Storwize V5030F offers high performance and availability at a discounted entry point (noted above) to help clients control costs.

The all-flash Storwize V7000F and Storwize V5030F are also built to manage a variety of primary storage workloads, from database management systems, such as SQL Server and MySQL, to digital media sources that include broadcast, real-time streaming, and video surveillance. The new technology can also handle huge data volumes, such as IoT data.

Given the product line confusion that typically characterizes big technology platform mergers, IBM is looking to entice some Dell or, more likely, EMC storage customers to the new Storwize offerings. To that end, IBM is offering what it describes as a no-cost migration initiative for organizations that are not current IBM customers and seeking a smooth transition path from their EMC or Dell storage to the IBM family of all-flash arrays. BTW: EMC is a leading provider of z System storage.

While it is too early to spot any Dell or EMC customer response, one longtime IBM customer, Royal Caribbean Cruises Ltd., has joined the flash storage party. “With ever increasing volumes of customer and operational information, flexible and secure data storage is crucial to keeping our operation afloat (hope the pun was intended) as our company expands to hundreds of destinations worldwide,” said Leonardo Irastorza, Technology Revitalization & Global Shared Services Manager. The cruise line is counting on IBM flash storage to play a critical role, especially when it comes to ensuring exceptional guest experiences across its brands.

And more is coming: IBM released the following statement of direction: IBM intends to enhance IBM Spectrum Virtualize with additional capabilities for flash drive optimization and management. These capabilities are intended to help increase the service life and usability of flash drives, particularly read-intensive flash drives. The planned capabilities will likely include:

  • Data deduplication for workloads and use cases where it complements IBM’s existing industry-leading compression technology (a toy sketch of the dedup idea follows this list)
  • Improved flash memory management (mainly for garbage collection)
  • Additional flash drive wear management and reporting.
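For readers who haven’t bumped into deduplication: at its simplest, it means hashing blocks of data and storing each unique block only once. The Python sketch below illustrates the idea only; real products (IBM’s included) add variable-size chunking, compression, and persistent reference counting.

```python
import hashlib

def dedup(blocks):
    """Store each unique block once; return the store plus reconstruction refs."""
    store = {}   # content hash -> block bytes
    refs = []    # ordered hashes that reconstruct the original stream
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        refs.append(digest)
    return store, refs

store, refs = dedup([b"alpha", b"beta", b"alpha"])
print(len(store), len(refs))  # 2 unique blocks backing 3 logical blocks
```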

Because these capabilities will be implemented in IBM Spectrum Virtualize, they will be available for the IBM Storwize family, FlashSystem V9000, and SAN Volume Controller offerings, as well as VersaStack (the IBM/Cisco collaboration) and IBM PurePower systems.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring six nines (99.9999 percent) availability and real-time compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts six nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Redbook titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need of transferring data across platforms. When it comes to the DeepFlash 150, IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters, consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
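OK, here’s the math, in decimal units and for the base hardware only (Spectrum Scale licensing comes on top):

```python
price_per_gb = 1.00                      # quoted base price for DeepFlash 150
gb_per_tb, gb_per_pb = 1_000, 1_000_000

print(500 * gb_per_tb * price_per_gb)    # 500 TB           -> $500,000
print(7 * gb_per_pb * price_per_gb)      # 7 PB (full rack) -> $7,000,000
```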

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object, and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A TechTarget report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats, and that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from the cloud, analytics, and engagement units increased 12 percent year to year.


IBM Quantum Experience delivered via Cloud (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder: revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z. In the past year IBM introduced the new LinuxONE and, more recently, a new z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that would drive product cycle dynamics upward. IBM showed a POWER roadmap going all the way out to POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.

IBM indirectly is trying to boost the z and the cloud. CSC announced an alliance with IBM under which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let them sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft’s share price up 4% at one point in after-hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, the Google Cloud Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE) is the last of the big public enterprise platform vendors, along with IBM and Oracle. (Dell is private and acquired EMC.) HPE recently reported its best quarter in years: second-quarter net revenue of $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive quarters of declining revenue is boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do—real-time analytics, IoT, blockchain, and more. This is part of the digital transformation that is blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.


EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformations by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it will enable understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of changes overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Managers at z data centers often shy away from modifying aging business-critical applications for fear of breaking something—if it ain’t broke, don’t fix it—often is the mantra. They also are rationing the use of their few remaining experienced z veterans with the domain expertise and deep knowledge of what often turns out to be quite esoteric code. This is further aggravated by poorly documented mainframe code. The way to mitigate this risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying your mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices (a sketch of the general shape appears below). This may require re-writing parts of them in platform-agnostic languages such as Java to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.
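To make that end state concrete, what usually emerges is a thin REST facade in front of the unchanged mainframe logic. Here is a purely illustrative Python/Flask sketch; the route, field names, and bridge function are hypothetical stand-ins for whatever connector (z/OS Connect, MQ, a SOAP gateway) you actually use:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def call_cics_routine(account_id):
    # Hypothetical bridge to the CICS program; in real life this call goes
    # through z/OS Connect, MQ, or a similar gateway. The COBOL stays put.
    return {"account": account_id, "balance": "1234.56"}

@app.route("/api/accounts/<account_id>")
def get_account(account_id):
    # Mobile apps see a plain JSON microservice, not a green screen.
    return jsonify(call_cics_routine(account_id))

if __name__ == "__main__":
    app.run(port=8080)
```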

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


2016 State of OpenStack Adoption Shows Continued Progress

March 10, 2016

Sixty-one percent of over 600 survey respondents are adopting OpenStack to combat the expense of public cloud alternatives, reports Talligent, provider of cost and capacity management solutions for OpenStack and hybrid clouds, which conducted the most recent study of OpenStack adoption. Almost as many respondents, 59%, have opted for OpenStack to improve the responsiveness of IT service delivery.


OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. As OpenStack puts it: A key part of the OpenStack Foundation mission is to inform, and with the ever expanding ecosystem, we felt it was a good time to cut through the noise to give our members the facts needed to make sound decisions.
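To give a feel for that self-service provisioning, here is a minimal sketch using the openstacksdk Python client. It assumes a cloud named mycloud is defined in clouds.yaml, and the image, flavor, and network names are placeholders for whatever your OpenStack environment actually offers:

```python
import openstack

conn = openstack.connect(cloud="mycloud")        # credentials from clouds.yaml

image = conn.compute.find_image("ubuntu-16.04")  # placeholder resource names
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)    # block until ACTIVE
print(server.status)
```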

In that spirit, make the OpenStack Marketplace one of your first steps in planning an OpenStack effort. There you will find the technology broken down into digestible chunks with details like which components are included, the versions used, and the APIs exposed. The community has also implemented interoperability testing to validate products displaying OpenStack logos. The results are now available in the Marketplace for public clouds, hosted private clouds, distributions & appliances.

DancingDinosaur has covered OpenStack numerous times, for example here and here. IBM is fully committed to OpenStack. Late last spring it announced an expanded suite of OpenStack services that allow organizations to integrate applications and data across hybrid clouds including public, dedicated, and local cloud environments without the fear of vendor lock-in or costly customization.

IBM may be a bit in front of the market on this. The Talligent survey found private clouds will not be replaced by public clouds very soon, with 54% of respondents still expecting their cloud use to be ALL or mostly private five years from now.

But whether this occurs in two years or five, developers and enterprises using IBM Cloud OpenStack Services will be able to launch applications on local, on-premises installations and public clouds hosted on the SoftLayer infrastructure, VMware, or the IBM Cloud. This can all be done without changing code or configurations. As a result, developers can build and test an application in a public cloud and use the interoperability of OpenStack to seamlessly deploy that same application and data across any combination of clouds: public, dedicated, and local/private.

The Talligent survey also found OpenStack deployments, once in place, are expected to expand quickly beyond development environments, growing from 43% to 89% within 12 months. For QA/Test the expected growth will be a tad stronger, from 47% to 91% within 12 months.

Other interesting tidbits from the survey: the top three workloads currently delivered on OpenStack are new greenfield applications (69%), containers (61%), and web applications (58%). No surprise there. Also, as noted above, private clouds should continue to thrive as OpenStack users expect high levels of private cloud use within the next 5 years. Fourteen percent, however, are expecting to deploy across a balanced mix of private and public clouds. At the same time, the survey suggests that PaaS, containers, and privately managed OpenStack are expected to grow in use while proprietary public clouds and legacy virtualization are likely to decline.

Finally, the survey respondents voiced their opinions on the OpenStack providers. Although industry vendors like VMware, IBM, HPE, Cisco and more are exploring ways to support customers in a hybrid cloud mix, the respondents, as previously noted, are not quite ready to move to a hybrid model. Still, the respondents voiced a clear desire for more operational tools.

Similarly, a majority of respondents currently using OpenStack are still prepared to maintain most of their environment on-premises, with 54% saying they will continue to be more than 80% private over the next 5 years. This may reflect ongoing concerns of corporate management about security in the public cloud. The survey, however, picked up some ambivalence on this point: 30% of the respondents using OpenStack report planning to move more than 80% of their environments to the public cloud over the next 5 years. Could this be a signal that security concerns may be fading?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

New IBM z13s Brings Built-in Encrypted Security to Entry Level

February 19, 2016

Earlier this week IBM introduced the z13s, what it calls the world’s most secure server, built for hybrid cloud and sized for mid-sized organizations. The z13s promises better business outcomes, faster decision making, less regulatory exposure, greater scale, and better fraud protection. And at the low end it is accessible to smaller enterprises, maybe those who have never tried a z before.


z13s features embedded cryptography that brings the benefits of the mainframe to mid-sized organizations. Courtesy of IBM.

A machine like the low end z13s used to be referred to as a business class (BC) mainframe.  IBM declined to quote a price, except to say z13s will go “for about the same price as previous generations for the equivalent capacity.”  OK, back in July 2013 IBM published the base price of the zEC12 BC machine at $75,000. IBM made a big deal of that pricing at the time.

The key weasel phrase in IBM’s statement is: “for the equivalent capacity.”  Two and a half years ago the $75k zEC12 BC offered significantly more power than its predecessor. Figuring out equivalent capacity today given all the goodies IBM is packing into the new machine, like built-in chip-based cryptography and more, is anybody’s guess. However, given the plummeting costs of IT components over the past two years, you should get it at a base price of $100k or less. If not, call Intel. Adds IBM: The infrastructure costs of z13s are comparable to the Public Cloud infrastructure costs with enterprise support; significant software savings result from core consolidation on the z13s.

But the z13s is not just about price. As digital business becomes a standard practice and transaction volumes increase, especially mobile transaction volumes, the need for increased security becomes paramount. Cybercrime today has shifted. Rather than stealing data, criminals are compromising data accuracy and reliability. This is where the z13s’ bolstered built-in security and access to APIs and microservices in a hybrid cloud setting can pay off by keeping data integrity intact.

IBM’s z13s, described as the new entry point to the z Systems portfolio for enterprises of all sizes, is packed with a number of security innovations. (DancingDinosaur considered the IBM LinuxONE Rockhopper the current z entry point, but it is a Linux-only machine.) For z/OS the z13s will be the entry point. The security innovations include:

  • Ability to encrypt sensitive data without compromising transactional throughput and response time through its updated cryptographic and tamper-resistant hardware-accelerated cryptographic coprocessor cards with faster processors and more memory. In short: encryption at twice the speed equates to processing twice as many online or mobile device purchases in the same time, effectively helping to lower the cost per transaction.
  • Leverage the z Systems Cyber Security Analytics offering, which delivers an advanced level of threat monitoring based on behavior analytics. Also part of the package, IBM® Security QRadar® security software correlates data from more than 500 sources to help organizations determine if security-related events are simply anomalies or potential threats. This z Systems Cyber Security Analytics service will be available at no charge as a beta offering for z13 and z13s customers.
  • IBM Multi-factor Authentication for z/OS (MFA) is now available on z/OS. The solution adds another layer of security by requiring privileged users to enter a second form of identification, such as a PIN or randomly generated token, to gain access to the system. This is the first time MFA has been tightly integrated in the operating system, rather than through an add-on software solution. This level of integration is expected to deliver more streamlined configuration and better stability and performance.
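The “randomly generated token” half of MFA is typically a time-based one-time password (TOTP). IBM’s z/OS implementation is its own animal, but the underlying idea fits in a few lines of Python using the pyotp library (purely illustrative):

```python
import pyotp  # pip install pyotp

# Enrollment: generate a per-user secret shared with the user's token device.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user types the 6-digit code currently showing on the device;
# the server recomputes it from the shared secret and the current time.
code = totp.now()
print(totp.verify(code))  # True within the current 30-second window
```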

Hybrid computing and hybrid cloud also play a big part in IBM’s thinking latest around z Systems. As IBM explains, hybrid cloud infrastructure offers advantages in flexibility but can also present new vulnerabilities. When paired with z Systems, IBM’s new security solutions can allow clients to establish end-to-end security in their hybrid cloud environment.

Specifically, IBM Security Identity Governance and Intelligence can help prevent inadvertent or malicious internal data loss by governing and auditing access based on known policies while granting access to those who have been cleared as need-to-know users. IBM Security Guardium uses analytics to help ensure data integrity by providing intelligent data monitoring, which tracks users as they access specific data and help to identify threat sources quickly in the event of a breach. IBM Security zSecure and QRadar use real-time alerts to focus on the identified critical security threats that matter the most.

Conventional z System data centers should have no difficulty migrating to the z13 or even the z13s. IBM told DancingDinosaur it will continue to protect a client’s investment in technology with serial number preservation on the IBM z13s. The company also is offering upgrades from the zEnterprise BC12 (zBC12) and from the zEnterprise 114 (z114) to the z13s. Of course, it supports upgradeability within the IBM z13 family; a z13s N20 model can be upgraded to the z13 N30 model. And once the z13s is installed, it allows on-demand offerings to access temporary or permanent capacity as needed.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Gets Serious about Linux on z Systems

February 12, 2016


It has taken the cloud, open source, and mobile for IBM, after more than a decade of Linux on z, to finally turn the z into the agile development machine it should have been all along. Maybe z data centers weren’t ready back then, maybe they aren’t all that ready now, but it is starting to happen.


LinuxONE Rockhopper, Refreshed for Hybrid Cloud Innovation

In March, IBM will make its IBM Open Platform (IOP) portfolio available for LinuxONE at no cost. IOP includes a broad set of industry-standard Apache-based capabilities for analytics and big data. The components supported include Apache Spark, Apache HBase, and more, as well as Apache Hadoop 2.7.1. Continuing its commitment to contributing back to the open source community, IBM has optimized the Open Managed Runtime project (OMR) for LinuxONE. Now IBM innovations in virtual machine technology for new dynamic scripting languages will be brought to enterprise-grade strength.

It doesn’t stop there. IBM has ported the Go programming language to LinuxONE too. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. IBM expects to begin contributing code to the Go community this summer.

Back in December IBM brought Apple’s Swift programming to the party, first to the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, including the Watson Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech To Text, Text to Speech, Alchemy Language, or Alchemy Vision services – all of which are available today, and can now be integrated with just a few lines of code.

Following Apple’s introduction of Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This will be closely tied to Canonical’s Ubuntu port to the z expected this summer.

Also, through new work by SUSE to collaborate on technologies in the OpenStack space, SUSE tools will be employed to manage public, private, and hybrid clouds running on LinuxONE.  Open source, OpenStack, open-just-about-everything appears to be the way IBM is pushing the z.

At a presentation last August on Open Source & ISV Ecosystem Enablement for LinuxONE and IBM z, Dale Hoffman, Program Director, IBM’s Linux SW Ecosystem & Innovation Lab, introduced the three ages of mainframe development; our current stage being the third.

  1. Traditional mainframe data center, 1964–2014: batch, general ledger, transaction systems, client databases, accounts payable/receivable, inventory, CRM, ERP
  2. Internet Age, 1999–2014: Linux & Java, server consolidation, Oracle consolidation, early private clouds, email, web & eCommerce
  3. Cloud/Mobile/Analytics (CAMSS) Age, 2015–2020: on/off-premises hybrid cloud, big data & analytics, enterprise mobile apps, security solutions, open source LinuxONE and IBM z ecosystem enablement

Hoffman didn’t suggest what comes after 2020 but we can probably imagine: Cognitive Computing, Internet of Things, Blockchain. At least those are trends starting to ramp up now.

He does, however, draw a picture of the state of Linux on the mainframe today:

  • 27% of total installed capacity run Linux
  • Linux core capacity increased 16% from 2Q14 to 2Q15
  • 40% of customers have Linux cores
  • 80% of the top 100 customers (in terms of installed MIPS) run Linux on the mainframe
  • 67% of new accounts run Linux

To DancingDinosaur, this last point about the high percentage of new z accounts running Linux speaks to where the future of the z is heading.

Maybe as telling are the following:

  • 64% of companies participate in Open Source projects
  • 78% of companies run on open source
  • 88% of companies expect to increase open source contributions in the next 2-3 years
  • 47% plan to release internal tools & projects as OSS
  • 53% expect to reduce barriers to employee participation in open source
  • 50% report that more than half of their engineers are working on open source projects
  • 66% of companies build software on open source

Remember when open source and Linux first appeared for the z? Data center managers were shocked at the very concept. It was anti-capitalist at the very least, maybe even socialist or communist. Look at the above percentages; open source has gotten about as mainstream as it gets.

It will be interesting to see how quickly developers move to LinuxONE for their CAMSS projects. IBM hasn’t said anything about the pricing of the refreshed Rockhopper model or about the look and feel of the tools. Until the developers know, DancingDinosaur expects they will continue to work on the familiar x86 tools they are using now.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Exploiting the IBM z13 for Maximum Price/Performance Advantage

February 4, 2016

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.


IBM System z13

This really becomes a software configuration challenge. By tapping approximately 600 internal processors, IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISV.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report a 15% on average reduction of CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers with applications optimized for them is the best way to ensure your workloads are achieving top performance in the most cost-effective way.

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will provide mobile applications easy access to data and the processing they need from business critical production applications written in COBOL.

The z13 and its z sisters, the latest dedicated Linux LinuxONE models, were designed and optimized from the start for cloud, mobile, and analytics. They were intended to run alongside traditional mainframe workloads, with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP.  ISVs like Compuware should be able to help with this.  In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity.  Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
