Posts Tagged ‘System z’

AI and IBM Watson Fuel Interest in App Dev among Mainframe Shops

December 1, 2016

BMC’s 2016 mainframe survey, covered by DancingDinosaur here, both directly and indirectly pointed to increased activity around data center applications. Mainly this took the form of increased interest in Java on the z as a platform for new applications. Specifically, 72% of overall respondents reported using Java today, while 88% reported plans to increase their use of Java. At the same time, the use of Linux on the z has been growing steadily year over year: 41% in 2014, 48% in 2015, and 52% in 2016. Together these trends point to a heightened interest in application development, management, and change.


IBM’s Project DataWorks uses Watson Analytics to create complex visualizations with one line of code

IBM has been feeding this kind of AppDev interest with its continued enhancement of Bluemix and the rollout of the Bluemix Garage method. More recently, it announced a partnership with Topcoder, a global software development community of more than one million designers, developers, data scientists, and competitive programmers, with the aim of stimulating developers looking to harness the power of Watson to create the next generation of AI apps, APIs, and solutions.

According to Forrester VP and Principal Analyst JP Gownder, cited in the IBM announcement, automation will change every job category by at least 25% by 2019. Additionally, IDC predicts that 75% of developer teams will include cognitive/AI functionality in one or more applications by 2018. The industry is driving toward a level of computing potential not witnessed since the introduction of big data.

To further drive the cultivation of this new style of developer, IBM is encouraging participation in Topcoder-run hackathons and coding competitions. Here developers can easily access a range of Watson services, such as Conversation, Sentiment Analysis, or the speech APIs, to build powerful new tools with the help of cognitive computing and artificial intelligence. Topcoder hosts 7,000 code challenges a year and has awarded $80 million to its community. In addition, developers will now have the opportunity to showcase and monetize their solutions on the IBM Marketplace, while businesses will be able to access a new pipeline of talent experienced with Watson and AI.
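
For developers wondering what tapping such services actually involves, the pattern is an ordinary authenticated REST call. Below is a minimal Java sketch of POSTing text to a Watson-style sentiment service; the host name, credentials, and JSON shape are illustrative assumptions, not the actual Watson API contract, so check the service documentation for the real endpoint and schema.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.Scanner;

    public class SentimentDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and credentials; substitute the real service values
            URL url = new URL("https://example-watson-host/api/v1/sentiment");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            String auth = Base64.getEncoder()
                    .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);

            // Illustrative request body; the real API defines its own JSON schema
            String body = "{\"text\": \"The new tools make mainframe development fun.\"}";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }

            // Read and print the raw JSON response
            try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
                System.out.println(in.useDelimiter("\\A").next());
            }
        }
    }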

In addition to a variety of academic partnerships, IBM recently announced an AI Nanodegree program with Udacity to help developers establish a foundational understanding of artificial intelligence. Plus, IBM offers the IBM Learning Lab, which features more than 100 curated online courses and cognitive use cases from providers like Codecademy, Coursera, Big Data University, and Udacity. And don’t forget IBM developerWorks, which offers how-to tutorials and courses on IBM tools and open standard technologies for all phases of the app dev lifecycle.

To keep the AI development push going, recently IBM unveiled the experimental release of Project Intu, a new system-agnostic platform designed to enable embodied cognition. The new platform allows developers to embed Watson functions into various end-user devices, offering a next generation architecture for building cognitive-enabled experiences.

Project Intu is accessible via the Watson Developer Cloud and also available on Intu Gateway and GitHub. The initiative simplifies the process for developers wanting to create cognitive experiences in various form factors such as spaces, avatars, robots, or IoT devices. In effect, it extends cognitive technology into the physical world. The platform enables devices to interact more naturally with users, triggering different emotions and behaviors and creating more meaningful and immersive experiences for users.

Developers can integrate Watson services, such as Conversation, Language, and Visual Recognition, with the capabilities of the device to act out the interaction with the user. Instead of a developer needing to program each individual movement of a device or avatar, Project Intu makes it easy to combine movements appropriate for performing specific tasks, like assisting a customer in a retail setting or greeting a visitor in a hotel, in a way that feels natural to the visitor.
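
To make that contrast concrete, here is a Java-flavored sketch of the pattern: rather than scripting each servo movement, the developer composes a high-level greeting behavior from named building blocks. The Behavior and Device types are hypothetical illustrations of the embodied-cognition idea, not the actual Project Intu API.

    // Hypothetical types illustrating the embodied-cognition pattern,
    // not the actual Project Intu API.
    interface Behavior { void perform(Device d); }
    interface Device { void say(String text); void gesture(String name); }

    public class GreeterDemo {
        // Compose a high-level behavior from reusable pieces instead of
        // programming each individual movement of the device or avatar.
        static Behavior greetVisitor(String name) {
            return d -> {
                d.gesture("wave");                       // pre-built motion primitive
                d.say("Welcome to the hotel, " + name);  // speech via a Watson service
                d.gesture("bow");
            };
        }
    }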

Project Intu is changing how developers make architectural decisions about integrating different cognitive services into an end-user experience – such as what actions the systems will take and what will trigger a device’s particular functionality. Project Intu offers developers a ready-made environment on which to build cognitive experiences running on a wide variety of operating systems – from Raspberry Pi to MacOS, Windows to Linux machines.

With initiatives like these, the growth of cognitive-enabled applications will likely accelerate. As IBM reports, IDC estimates that “by 2018, 75% of developer teams will include Cognitive/AI functionality in one or more applications/services.” This is a noticeable jump from last year’s prediction that 50% of developers would leverage cognitive/AI functionality by 2018.

For those z data centers surveyed by BMC that worried about keeping up with Java and big data, AI adds yet an entirely new level of complexity. Fortunately, the tools to work with it are rapidly falling into place.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


BMC Mainframe Survey Confirms z System Is Here to Stay

November 11, 2016

No surprise there. BMC’s 11th annual mainframe survey, covering 1,200 mainframe executives and tech professionals, found that 58% of respondents report usage of the mainframe is increasing as they look to capitalize on every infrastructure advantage it provides and add more workloads. Another 23% consider the mainframe the best option to run critical work.


IBM z10

Driving the continuing interest in the mainframe are new demands for data handling, scalable processing, analytics, and more. According to the BMC survey, nearly 60% of companies are seeing increased data and transaction volumes. They opt to stay with the mainframe for its highly secure, superior data handling and transaction serving, particularly as digital business adds unpredictability and volatility to workloads.

Overall, respondents fell into three primary groups: 1) entrenched mainframe shops (58%) that are on board for the long haul; 2) shops (23%) that intend to maintain a steady amount of work on the mainframe; and 3) shops (19%) that are moving away from the mainframe. The first two groups, committed mainframe shops, amount to just over 80% of the respondents.

Many companies surveyed are focused on addressing the increased workload demands, especially the rapidly growing demand for new applications. But surprisingly, the survey does not directly touch on hybrid cloud, cognitive computing, or any of the latest technologies IBM has been promoting, not even DevOps, which can streamline mainframe application development and deployment. “We are not hearing much about hybrid cloud environments or blockchain yet. Most companies seem to be in the early tire-kicking stage,” observed John McKenny, BMC Vice President, Strategy and Operations.

Eighty-eight percent of companies in the first group, entrenched mainframe shops, for example, are looking to increase the workloads they run in Java on the mainframe, primarily to address new application demands. It also doesn’t hurt that Java on the mainframe can help lower data center costs by directing workloads to lower cost assist processors.

Other interesting BMC survey findings:

  • Half of the respondents report keeping 50% of their data on the mainframe and continue to invest in the platform for reasons you already know—security, availability, data serving capability
  • Continued steady growth of Linux in production on the z: 41% in 2014, 48% in 2015, 52% in 2016
  • Increased use of Java on the mainframe, reported by 67% of respondents, who cite the need to meet growing application demand

Those looking to reduce mainframe presence cited three reasons: 1) perception of high cost, 2) outdated management understanding, and 3) looking for ways to reduce workloads over time.  DancingDinosaur has spoken with mainframe shops intending to migrate off the z and they cite the usual reasons, especially #1 above.

Top mainframe priorities for 2016, according to the BMC survey: cost reduction/optimization (65%); data privacy, compliance, security (50%); application availability (49%); and application modernization (41%). Responses indicated the priorities for next year haven’t changed at all.

Surprisingly, many of the latest technologies for the z that IBM has touted recently have not yet shown up in the BMC survey responses, except maybe Java and Linux. This would include hybrid clouds, blockchain, IoT, and cognitive computing. IDC, for example, already is projecting cognitive computing to grow at a CAGR of 55.1% from 2016 to 2020. For z shops, however, cognitive computing appears almost invisible.

In some cases with surveys like this you need to read between the lines. When respondents report changes in activity levels driving application growth, growing interest in Java, more frequent application changes, or references to operational analytics, they are making oblique references to mobile, big data, or even cognitive computing and other recent technologies for the z.

At its best, BMC notes that digital technologies are transforming the ways in which mainframe shops conduct business and interact with their customers. Adds BMC mainframe customer Credit Suisse: “IT departments are moving toward centralized, virtualized, and highly automated environments. This is being pursued to drive cost and processing efficiencies. Many companies realize that the mainframe has provided these benefits for many years and is a mature and stable environment,” said Frank Cortell, Credit Suisse Director of Information Technology.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Can SDS and Flash Resurrect IBM Storage?

November 4, 2016

Storage has consistently contributed red ink to IBM’s ongoing string of quarterly losses, but the company is betting on cloud storage, an all-flash strategy, and software defined storage (SDS) to turn things around. Any turnaround, however, is closely tied to the success of IBM’s strategic imperatives, which have emerged as bright spots amid the continuing quarterly losses, especially cloud, analytics, and cognitive computing.


Climate study needs large amounts of fast data access

As a result, IBM needs to respond to two challenges created by its customers: 1) changes like the increased adoption of cloud, analytics, and most recently cognitive computing, and 2) customers’ need to reduce the cost of their IT infrastructure. The problem as IBM sees it is this: How do I simultaneously optimize the traditional application infrastructure and free up money to invest in a new generation application infrastructure, especially if I expect to move forward into the cognitive era at some point? IBM’s answer is to invest in flash and SDS.

A few years ago DancingDinosaur was skeptical that flash deployment would lower storage costs except in situations where low cost IOPS was critical. Today, between the falling cost of flash and new ways to deploy increasingly cheap flash, DancingDinosaur believes flash storage can save IT real money.

According to Evaluator Group research cited by IBM, flash and hybrid cloud technologies are dramatically changing the way companies deploy storage and design applications. As new applications are created, often for mobile or distributed access, the ability to store data in the right place, on the right media, and with the right access capability will become even more important.

In response, companies are adding cloud to lower costs, flash to increase performance, and SDS to add flexibility. IBM is integrating these capabilities together with security and data management for faster return on investment. Completing the IBM pitch, the company offers a choice among on-premises storage, SDS, or storage as a cloud service.

In an announcement earlier this week IBM introduced seven offerings:

  • IBM Spectrum Virtualize 7.8 with transparent cloud tiering
  • IBM Spectrum Scale 4.2.2 with cloud data sharing
  • IBM Spectrum Virtualize family flash enhancements
  • IBM Storwize family upgrades
  • IBM DS8880 High Performance Flash Enclosure Gen2
  • IBM DeepFlash Elastic Storage Server
  • VersaStack—a joint IBM-Cisco initiative

In short, these announcements deliver hybrid cloud enablement as a standard feature for new and existing users of Spectrum Virtualize, plus data sharing to the cloud through Spectrum Scale, which can sync file and object data across on-premises and cloud storage to connect with cloud-native applications. In addition, the high density, highly scalable all-flash storage now sports a new high density expansion enclosure that includes new 7TB and 15TB flash drives.

IBM Storwize, too, is included, now able to grow up to 8x larger than previously without disruption. That means up to 32PB of flash storage in only four racks to meet the needs of fast-growing cloud workloads in space-constrained data centers. Similarly, IBM’s new DeepFlash Elastic Storage Server (ESS) offers up to 8x better performance than HDD-based solutions for big data and analytics workloads. Built with IBM Spectrum Scale, ESS includes virtually unlimited scaling, enterprise security features, and unified file, object, and HDFS support.

The z can play in this party too. IBM’s DS8888 now delivers 2x better performance and 3x more efficient use of rack space for mission-critical applications, such as credit card and banking transactions as well as airline reservations, running on IBM z Systems or IBM Power Systems. DancingDinosaur first reported on the all-flash z offering, the DS8888, when it was introduced last May.

Finally, IBM Spectrum Virtualize brings hybrid cloud enablement to existing and new on-premises storage, extending hybrid cloud capabilities for block storage to the Storwize family, FlashSystem V9000, SVC, and VersaStack, the IBM-Cisco collaboration.

Behind every SDS deployment lies actual physical storage of some type. Many opt for generic, low cost white box storage to save money. As part of IBM’s latest SDS offerings you can choose among nearly 400 storage systems from IBM and others. DancingDinosaur doubts any of those others are white box products, but at least they give you some non-IBM options to potentially lower your storage costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM 3Q16 Results Telegraph a New z System in 2017

October 27, 2016

DancingDinosaur usually doesn’t like to read too much into the statements of IBM suits at financial briefings. This has been especially true since IBM introduced a new presentation format this year to downplay its platform business and emphasize its strategic imperatives. (Disclaimer: DancingDinosaur is NOT a financial analyst but a technology analyst.)

But this quarter the CFO said flat out: “Our z Systems results reflect a product cycle dynamic; seven quarters into the z13 cycle, revenue was down while margins continue to expand. We continue to add new clients to the platform and we are introducing new technologies like block chain. We announced new services to make it easier to build and test block chain networks in a secure environment. As we build our block chain platform, it’s been engineered to run on multiple platforms but is optimized for scale, security, and resilience on both the IBM mainframe and the IBM cloud.”

LinuxONE Emperor

If you parse the first sentence, “reflect a product cycle dynamic,” he is not too subtly hinting that IBM needs a z System refresh if it wants to stop the financial losses on the z. You don’t have to be a genius to expect a new z, probably the z14, in 2017. Pictured above is the LinuxONE Emperor, a z optimized to run Linux. The same suit said: “We’ve been shifting our platform to address Linux, and in the third quarter Linux grew at a double digit rate, faster than the market.” So we can probably guess that the z14 (or whatever it will be called) will run z/OS, followed shortly by a LinuxONE version to further expand the z System’s Linux footprint.

Timothy Prickett Morgan picked that up too and more. He expects a z14 processor complex will be announced next year around the same time that the Power9 chip ships. In both cases, Power and z customers who can wait will wait, or, if they are smart, will demand very steep discounts on current Power8 hardware to make up for the price/performance improvements that are sure to accompany the upcoming Power9 and z machines.

When it comes to revenue, 3Q16 was at best flat but actually down again overall. The bright spot again was IBM’s strategic imperatives. As the suit stated: “In total, we continue to deliver double-digit revenue growth in our strategic imperatives led by our cloud business.” Specifically, cognitive solutions were up 5% and, within that, solution software was up 8%.

Overall, IBM’s strategic imperatives grew 15%. Over the last 12 months, strategic imperatives delivered nearly $32 billion in revenue and now represent 40% of IBM. The suit also emphasized strong performance in IBM’s cloud offerings, which increased over 40%, led by the company’s as-a-service offerings. IBM ended the third quarter with an as-a-service run rate of $7.5 billion, up from $6.7 billion last quarter. Most of that was attributed to organic growth, not acquisitions. Also strong was IBM’s revenue performance in security and mobile. In addition, the company experienced growth in its analytics offerings, up 14% this quarter, with contributions from the core analytics platform, especially the Watson platform, Watson Health, and Watson IoT.

IBM apparently is convinced that cognitive computing, defined as using data and adding intelligence into products and services to help companies make better decisions, is the wave of the future. As the company sees it, real value lies in providing cognitive capabilities via the IBM cloud. A critical element of its strategy is IBM’s industry focus. Initially industry platforms will address two substantial opportunity areas, financial services and blockchain solutions. You can probably add healthcare too.

Blockchain may emerge as the sleeper, although DancingDinosaur has long been convinced that blockchain is ideal for z shops—the z already handles the transactions and delivers the reliability, scalability, availability, and security to do it right.  As IBM puts it, “we believe block chain has the potential to do for trusted transactions what the Internet did for information.” Specifically, IBM is building a complete block chain platform and is now working with over 300 clients to pioneer block chain for business, including CLS, which settles $5 trillion per day in the currency markets, to implement a distributed ledger in support of its payment netting service, and Bank of Tokyo Mitsubishi, for smart contracts to manage service level agreements and automate multi party transactions.

Says Morgan: “IBM is very enthusiastic about using Blockchain in commercial transaction processing settings, and has 40 clients testing it out on mainframes, but this workload will take a long time to grow. Presumably, IBM will also push Blockchain on Power as well.”  Morgan may be right about blockchain coming to Power, but it is a natural for the z right now, whether as a new z14 or a new z-based LinuxONE machine.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Compuware Triples Down on Promised Quarterly z System Releases

October 14, 2016

Since January 2015 Compuware has been releasing enhancements to its mainframe software portfolio quarterly. The latest quarterly release, dated Oct. 3, delivers REST APIs for ISPW source code management and DevOps release automation, which let enterprises build their own custom cross-platform DevOps toolchains; integration of Compuware Abend-AID with Syncsort Ironstream for faster insight into mainframe issues; and new SEA plug-ins for Topaz Workbench. The plug-ins will help less skilled IBM z/OS developers manage mainframe batch processing along with other z platform tasks.


Compuware’s point is to position the mainframe at the heart of agile DevOps computing. As part of the effort, it needs to deliver slick, modern tools that will appeal to the non-mainframers who are increasingly moving into multi-platform development roles that include the mainframe. These people want to work as if they are dealing with a Windows or Linux machine. They aren’t going to wrestle with arcane mainframe constructs like abends or JCL. Traditional mainframe dev, test, and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets. The new dev and ops people who are filling out data center ranks haven’t the patience to learn what they view as antiquated mainframe concepts. They need intelligent tools that visualize the issue and let them intuitively click, drag, drop, and swipe their way through whatever needs to be done.

This is driven by the long-expected attrition of veteran mainframers and the mainframe knowledge and application insight they brought. Only the recession that began in 2008 slowed the exit of aging mainframers. Now they are leaving; one mainframe credit card processor reportedly lost 50 mainframe staff in a month.  The only way to replace this kind of experience is with intelligent and easy to learn tools and expert automation.

Compuware’s response has been to release new tools and enhancements every quarter. It started with Topaz in 2015. DancingDinosaur covered it Jan. 2015 here.  The beauty of Topaz lies in its graphical ease-of-use. Data center newbies didn’t need to know z/OS; they could understand what they were seeing and do meaningful work. With each quarterly release Compuware, in one way or another, has advanced this basic premise.

The most recent advances are streamlining the DevOps process in a variety of ways.  DevOps has emerged as critical with mainframe shops scrambling to remain relevant and effective in a rapidly evolving app dev environment. Just look at Bluemix if you want to see where things are heading.

In the first announcement, Compuware extended mainframe DevOps innovation with REST APIs for ISPW SCM and release automation. The new APIs enable large enterprises to flexibly integrate their numerous other mainframe and non-mainframe DevOps tools with ISPW to create their own custom cross-platform DevOps toolchains. Part of that was the acquisition of the assets associated with Itegrations’ source code management (SCM) migration practice and methodology, which will enable Compuware users to more easily migrate their SCM systems, from Agile-averse products such as CA Endevor, CA Panvalet, CA Librarian, and Micro Focus/Serena ChangeMan as well as internally developed SCM systems, to ISPW.
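
What does driving ISPW from a custom toolchain look like? Roughly like any other REST integration. The Java sketch below shows a hypothetical toolchain step that promotes an assignment; the host, path, and JSON fields are invented for illustration and are not the documented ISPW API.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Hypothetical toolchain step: promote an ISPW assignment via REST.
    // The path and JSON fields are illustrative, not the documented ISPW API.
    public class PromoteStep {
        public static int promote(String server, String assignmentId) throws Exception {
            URL url = new URL("https://" + server + "/ispw/api/assignments/"
                    + assignmentId + "/tasks/promote");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Bearer " + System.getenv("ISPW_TOKEN"));
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write("{\"runtimeConfiguration\": \"default\"}"
                        .getBytes(StandardCharsets.UTF_8));
            }
            return conn.getResponseCode();  // 2xx indicates the promote was accepted
        }
    }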

According to Compuware, these DevOps toolchains are becoming increasingly important for two reasons:

  • Enterprises must aggressively adopt DevOps disciplines in their mainframe environments to fulfill business requirements for digital agility. Traditional mainframe dev, test and code promotion processes are simply too slow to meet the demands of today’s fast-moving markets to counter new, digitally nimble market disruptors.
  • Data centers need to better integrate the toolchains that support their newly adopted mainframe DevOps workflows with those that support DevOps across their various other platforms. This is because mainframe applications and data so often function as back-end systems-of-record for front-end web and mobile systems-of-engagement in multi-tier/cross-platform environments.

In the second announcement Compuware integrated Abend-AID and Syncsort’s Ironstream to give fast, clear insight into mainframe issues. Specifically, the integration of Abend-AID and Ironstream enables IT to more quickly discover and act upon correlations between application faults and broader conditions in the mainframe environment. This is particularly important, notes Compuware, as enterprises, out of necessity, shift operational responsibilities for the platform to staffs with limited experience on z/OS. Just put yourself into the shoes of a distributed system manager now dealing with a mainframe. What might appear to be a platform issue may turn out to be a software fault, and vice versa. The retired 30-year mainframe veterans would probably see it immediately (but not always). Mainframe newcomers need a tool with the intelligence to recognize it for them.
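
The value of such correlation is easy to picture in code. Here is a minimal Java sketch, not the Abend-AID or Ironstream implementation, that pairs each application abend with any platform events logged within a two-minute window, so a newcomer can see at a glance whether a fault coincided with a broader platform condition.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal fault/event correlation sketch, not the actual product logic.
    public class Correlator {
        static class Fault {
            final Instant at; final String job; final String abendCode;
            Fault(Instant at, String job, String abendCode) {
                this.at = at; this.job = job; this.abendCode = abendCode;
            }
        }
        static class PlatformEvent {
            final Instant at; final String source; final String message;
            PlatformEvent(Instant at, String source, String message) {
                this.at = at; this.source = source; this.message = message;
            }
        }

        static List<String> correlate(List<Fault> faults, List<PlatformEvent> events) {
            List<String> findings = new ArrayList<>();
            for (Fault f : faults) {
                for (PlatformEvent e : events) {
                    // Flag any platform event within two minutes of the abend
                    if (Duration.between(e.at, f.at).abs().toMinutes() <= 2) {
                        findings.add(f.job + " " + f.abendCode + " near "
                                + e.source + ": " + e.message);
                    }
                }
            }
            return findings;
        }
    }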

With the last announcement Compuware and Software Engineering of America (SEA) introduced the release of SEA’s JCLplus+ Remote Plug-In and $AVRS Plug-In for Compuware’s Topaz Workbench mainframe IDE. Again think about mainframe neophytes. The new plug-ins for Topaz significantly ease challenging JCL- and output-related tasks, according to Compuware, effectively enabling both expert and novice IT staff to perform those tasks more quickly and more accurately in the context of their other mainframe DevOps activities.

An encouraging aspect of this is that Compuware is not doing this alone. The company is teaming up with SEA and with Syncsort to make this happen. As the mainframe vendors work to make mainframe computing easier and more available to lesser trained people it will be good for the mainframe industry as a whole and maybe even help lower the cost of mainframe operations.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Put your z System at the Center of Blockchain

October 6, 2016

The z System has been a leading platform for the world’s top banks for decades, and with blockchain the z could capture even more banking and financial services data centers. Two recent IBM Institute for Business Value (IBV) studies show commercial blockchain solutions are being adopted throughout banking and financial markets dramatically faster than initially expected, according to an IBM announcement in late September. Of course, not every blockchain deployment runs on the z, but more should.


Copyright William Mougayer

According to an IBV study, more than 70 percent of early adopters are prioritizing blockchain efforts in order to break down current barriers to creating new business models and reaching new markets. IBV analysts report these respondents are better positioned to defend themselves against competitors, including untraditional disruptors like non-bank startups. The majority of respondents are focusing their blockchain efforts on four areas: clearing and settlement, wholesale payments, equity and debt issuance, and reference data.

But blockchain isn’t just a financial services story. Mougayer identifies government services, healthcare, energy, supply chains, and world trade as blockchain candidates. IoT will also be an important area for blockchain, according to a new book on IoT by Maciej Kranz, an IoT pioneer.

As Kranz explains: blockchain has emerged as a technology that allows a secure exchange of value between entities in a distributed fashion. The technology first appeared on most IT radar screens a few years ago in the form of Bitcoin, a virtual currency that relies on blockchain technology to ensure its security and integrity. Although Bitcoin’s future is still uncertain, blockchain is a completely different story.

Blockchain is attracting considerable attention for its ability to ensure the integrity of transactions over the network between any entities. Automobile companies are considering the technology to authenticate connected vehicles in the vehicle-to-vehicle (V2V) environment, notes Kranz. Still others are looking at blockchain to trace the sources of goods, increase food safety, create smart contracts, perform audits, and do much more. Blockchain also provides a natural complement to IoT security in a wide variety of use cases.

The z and especially the newest generation of z Systems is ideal for blockchain work. Zero downtime, industry-leading security, massive I/O, flexibility, high performance at scale, and competitive price/performance along with its current presence in the middle of most transactions, especially financial transactions, makes z a natural for blockchain.

A key driver for blockchain, especially in the banking and financial services segment, is the Linux Foundation’s Hyperledger Project. This entails a collaborative, open source effort to establish an open blockchain platform that will satisfy a variety of use cases across multiple industries to streamline business processes. Through a cross-industry, open standard for distributed ledgers, virtually any digital exchange of value, such as real estate contracts, energy trades, even marriage licenses, can securely and cost-effectively be tracked and traded.

According to Linux Foundation documents, “the Hyperledger Project has ramped up incredibly fast, a testament to how much pent-up interest, potential, and enterprise demand there is for a cross-industry open standard for distributed ledgers.” Linux Foundation members of the Hyperledger Project are moving blockchain technology forward at remarkable speed. IBM has been an early and sizeable contributor of code to the project. It contributed 44,000 lines of code as a founding member.

That blockchain is catching on so quickly in the banking and financial services sector shouldn’t be a surprise either. What blockchain enables is highly secure and unalterable distributed transaction tracking at every stage of the transaction. Said Likhit Wagle, Global Industry General Manager, IBM Banking and Financial Markets, ticking off blockchain advantages: “To start, first movers are setting business standards and creating new models that will be used by future adopters of blockchain technology. We’re also finding that these early adopters are better able to anticipate disruption and fight off new competitors along the way.”
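
The unalterable part rests on simple cryptography. The Java sketch below shows the core idea: each block’s hash covers the hash of its predecessor, so tampering with any historical transaction invalidates every hash that follows. This illustrates the concept only; Hyperledger’s actual data structures, consensus, and membership services are far richer.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.List;

    // Conceptual hash-linked ledger; real Hyperledger adds consensus,
    // membership services, and smart contracts on top of this idea.
    public class MiniLedger {
        static String sha256(String s) throws Exception {
            byte[] h = MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : h) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            List<String> chain = new ArrayList<>();
            String prev = "genesis";
            for (String tx : new String[]{"pay A->B 100", "pay B->C 50"}) {
                // Each block hash covers the previous hash, chaining the records:
                // altering any earlier transaction breaks every later hash.
                prev = sha256(prev + "|" + tx);
                chain.add(prev);
            }
            chain.forEach(System.out::println);
        }
    }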

It is the larger banks leading the charge to embrace blockchain technology, with early adopters twice as likely to be large institutions with more than a hundred thousand employees. Additionally, 77 percent of these larger banks are retail banking organizations.

As the IBV surveys found, trailblazers expect the benefits from blockchain technology to impact several business areas, including reference data (83 percent), retail payments (80 percent) and consumer lending (79 percent). When asked which blockchain-based new business models could emerge, 80 percent of banks surveyed identified trade finance, corporate lending, and reference data as having the greatest potential.

IBM is making it easy to tap blockchain by making it available through Docker containers, as a signed and certified distribution of IBM’s code submission to Hyperledger, and through Bluemix services. As noted above, blockchain is a natural fit for the z and LinuxONE. To that end, Bluemix Blockchain Services and a fully integrated DevOps tool are System z- and IoT-enabled.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?


Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM’s systems for what it calls the hybrid cloud era are three technologies with which we should already be generally familiar:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications.  In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform. An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications, allowing detailed, easy management of data copies. Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.

About the only thing missing above is LinuxONE but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee, Hadoop already runs native on the z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat’s portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. RHEL also enables IBM Power Systems as a featured component of Red Hat’s hybrid cloud strategy spanning platform infrastructure located both on and off an enterprise’s premises.

There is enough openness to enable you to do what you need so DancingDinosaur has no real complaints with IBM’s choices above. Just wish it was more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM’s Latest Flash Announcements Target Dell/EMC

August 26, 2016

The newest IBM storage, announced here earlier this week, aims to provide small, midsize, and global enterprises with virtualized SDS for primary storage and for cloud or cognitive applications and workloads. Central to the effort is IBM Spectrum Virtualize, which automates Storwize all-flash solutions intended to reduce the cost and complexity of data center and cloud environments. Entry pricing for the new storage starts at $19,000, which IBM describes as cost-conscious.

IBM All-Flash for the midrange

In addition, IBM announced Flash In, a no-cost storage migration program targeting Dell/EMC customers that IBM hopes will bail out of the merged operation.

SDS in the form of IBM Spectrum Virtualize is central to making IBM’s latest all-flash offerings work for the broad set of use cases IBM envisions.  As IBM puts it: organizations today are embracing all-flash storage to deliver speed and response times necessary to support growing data workloads across public, private, and hybrid cloud environments, as well as the emerging demands of cognitive applications and workloads.

IBM Spectrum Virtualize promises to improve storage efficiency through features such as real-time data compression, thin provisioning, and snapshotting across nearly 400 different storage arrays from a multitude of vendors. That means organizations can leverage, even repurpose, physical storage capacity they already have as they scramble to meet the storage needs of new workloads.
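
Thin provisioning is the easiest of those features to picture in code. Here is a conceptual Java sketch, not IBM’s implementation: the volume advertises a large virtual capacity but consumes physical space only for blocks that have actually been written.

    import java.util.HashMap;
    import java.util.Map;

    // Conceptual thin-provisioning sketch, not IBM's implementation.
    public class ThinVolume {
        private static final int BLOCK_SIZE = 4096;
        private final long virtualBlocks;                    // advertised capacity
        private final Map<Long, byte[]> allocated = new HashMap<>();

        ThinVolume(long virtualBlocks) { this.virtualBlocks = virtualBlocks; }

        // Physical space is allocated lazily, on first write to a block
        void write(long blockNo, byte[] data) {              // data <= BLOCK_SIZE bytes
            if (blockNo >= virtualBlocks) throw new IllegalArgumentException("beyond volume");
            byte[] block = allocated.computeIfAbsent(blockNo, b -> new byte[BLOCK_SIZE]);
            System.arraycopy(data, 0, block, 0, data.length);
        }

        long physicalBytesUsed() { return (long) allocated.size() * BLOCK_SIZE; }
    }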

Spectrum Virtualize also optimizes data security, reliability, and operational costs. For example, the software automatically tiers and migrates data from one storage array to another, provides secure data-at-rest encryption, and remotely replicates data for disaster recovery and business continuity.

The announcement centers on two products, the enterprise-class IBM Storwize V7000F and the midsize IBM Storwize V5030F, which promise enterprise-class availability and function in mid-range and entry-level all-flash storage arrays. At the same time, both offer greater performance and require less time to provision and optimize. Coincidentally, IBM has just been recognized, for the third year in a row, as a leader for flash storage in the Gartner Magic Quadrant for Solid-State Arrays (SSA).

Specifically, the all-flash IBM Storwize V7000F improves performance by up to 45 percent and supports four times the clustering for scale-out and scale-up growth to help organizations manage rapidly growing datasets. The midrange and entry-level all-flash IBM Storwize V5030F offers high performance and availability at a discounted entry point (noted above) to help clients control costs.

The all-flash Storwize V7000F and Storwize V5030F are also built to manage a variety of primary storage workloads, from database management systems, such as SQL Server and MySQL, to digital media sources that include broadcast, real-time streaming, and video surveillance. The new technology can also handle huge data volumes, such as IoT data.

Given the product line confusion that typically characterizes big technology platform mergers, IBM is looking to entice some Dell or, more likely, EMC storage customers to the new Storwize offerings. To that end, IBM is offering what it describes as a no-cost migration initiative for organizations that are not current IBM customers and seeking a smooth transition path from their EMC or Dell storage to the IBM family of all-flash arrays. BTW: EMC is a leading provider of z System storage.

While too early to spot any Dell or EMC customer response, one long time IBM customer, Royal Caribbean Cruises Ltd, has joined the flash storage party. “With ever increasing volumes of customer and operational information, flexible and secure data storage is crucial to keeping our operation afloat (hope the pun was intended) as our company expands to hundreds of destinations worldwide,” said Leonardo Irastorza, Technology Revitalization & Global Shared Services Manager. The cruise line is counting on IBM flash storage to play a critical role, especially when it comes to ensuring exceptional guest experiences across its brands.

And more is coming: IBM released the following statement of direction: IBM intends to enhance IBM Spectrum Virtualize with additional capabilities for flash drive optimization and management. These capabilities are intended to help increase the service life and usability of flash drives, particularly read-intensive flash drives. The planned capabilities will likely include:

  • Data deduplication for workloads and use cases where it complements IBM’s existing industry leading compression technology
  • Improved flash memory management (mainly for garbage collection)
  • Additional flash drive wear management and reporting.

By implementing these capabilities in IBM Spectrum Virtualize, they will be available for the IBM Storwize family, FlashSystem V9000, and SAN Volume Controller offerings as well as VersaStack (the IBM/Cisco collaboration) and IBM PurePower systems.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads. In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions processed by named Public cloud application transactions identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, and more.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first billing eligible under this program starts Dec. 1, 2016.
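
As a back-of-the-envelope illustration of the mechanics, assume a reporting hour with 500 MSUs of total utilization, 100 of them WLM-classified as public cloud-eligible. Applying the reported 60 percent reduction to just the eligible portion looks like the sketch below; the real invoice math involves rolling four-hour averages and per-product terms, so treat the numbers as purely illustrative.

    // Back-of-the-envelope model of the zWPC 60% reduction; real sub-capacity
    // billing uses rolling four-hour averages and per-product terms.
    public class ZwpcEstimate {
        public static void main(String[] args) {
            double totalMsu = 500.0;         // assumed hourly utilization
            double cloudEligibleMsu = 100.0; // assumed WLM-classified cloud work
            double adjusted = totalMsu - 0.60 * cloudEligibleMsu;
            // The 100 eligible MSUs count as only 40, so the hour reports 440
            System.out.printf("Adjusted Sub-Capacity value: %.0f MSUs%n", adjusted);
        }
    }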

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions, and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode. The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, the variable WLC (VWLC), AWLC (Advanced), and EWLC (Entry), align with most of the z machines introduced over the past couple of years. The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques. Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount. Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly. BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
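
To see how Defined Capacity interacts with the billing metric, consider a toy calculation: bill on the lesser of the four-hour rolling MSU average and the defined capacity. The Java sketch below uses made-up hourly figures; real systems derive these values from RMF/SMF interval data.

    // Toy soft-capping illustration: billable MSUs are the lesser of the
    // four-hour rolling average and the defined capacity. Real systems
    // derive these figures from RMF/SMF interval data.
    public class SoftCapDemo {
        public static void main(String[] args) {
            double definedCapacityMsu = 300.0;          // assumed LPAR soft cap
            double[] hourlyMsu = {250, 280, 340, 410};  // assumed last four hours

            double sum = 0;
            for (double h : hourlyMsu) sum += h;
            double fourHourAvg = sum / hourlyMsu.length;          // 320 MSUs

            double billable = Math.min(fourHourAvg, definedCapacityMsu);
            System.out.printf("4-hour avg %.0f, billable %.0f MSUs%n",
                    fourHourAvg, billable);             // billable capped at 300
        }
    }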

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999%) availability and real-time compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Redbook titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need to transfer data across platforms. When it comes to the DeepFlash 150, IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining, and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters, consider how many gigabytes are in the terabytes or petabytes you will want to install; you can do the math below. You will also need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum of 7PB of flash in a single rack enclosure.
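
Here is that math as a rough sketch, assuming a standard 42U rack and decimal units (1PB = 1,000,000GB); actual configurations and street pricing will differ.

    // Rough DeepFlash 150 capacity-and-cost arithmetic; the 42U rack and
    // decimal units are assumptions, and street pricing will differ.
    public class DeepFlashMath {
        public static void main(String[] args) {
            double tbPerRackUnit = 170.0;  // the quoted density figure
            int rackUnits = 42;            // assumed full-height rack
            double totalPb = tbPerRackUnit * rackUnits / 1000.0;  // ~7.1PB
            double totalCost = totalPb * 1_000_000 * 1.0;         // at $1/GB
            System.out.printf("%.1f PB per rack, ~$%.1fM at $1/GB%n",
                    totalPb, totalCost / 1_000_000);              // ~7.1PB, ~$7.1M
        }
    }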

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object, and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud, IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.
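
Conceptually, transparent tiering comes down to a placement decision per file. The Java sketch below assumes a simple age-based policy, where files untouched for 90 days migrate to the cloud object tier; Spectrum Scale’s real policy engine is declarative and far more capable, so this only shows the shape of the decision.

    import java.time.Duration;
    import java.time.Instant;

    // Conceptual age-based tiering decision; IBM Spectrum Scale's actual
    // policy engine is declarative, not hand-coded like this.
    public class TieringPolicy {
        enum Tier { FLASH, CLOUD_OBJECT }

        // Files idle longer than 90 days migrate to the cloud object tier
        static Tier placement(Instant lastAccess, Instant now) {
            long idleDays = Duration.between(lastAccess, now).toDays();
            return idleDays > 90 ? Tier.CLOUD_OBJECT : Tier.FLASH;
        }

        public static void main(String[] args) {
            Instant now = Instant.now();
            System.out.println(placement(now.minus(Duration.ofDays(120)), now)); // CLOUD_OBJECT
            System.out.println(placement(now.minus(Duration.ofDays(5)), now));   // FLASH
        }
    }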

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
