Posts Tagged ‘zEnterprise’

Z Open Terminal Emulation

September 25, 2020

You can spend a lot of time working with the Z and not find much new in terminal emulation. But there actually are a few new things, mainly because times change and people work differently, using different devices and doing new things. Sure, it all goes back to the mainframe, but it is a new world.

Terminal emulator screen

Rocket Software’s latest wrinkle in terminal emulation is BlueZone Web, which promises to simplify using the mainframe by enabling users to access host-based applications anywhere and on any type of device. It is part of a broader initiative Rocket calls Open AppDev for Z. From DancingDinosaur’s perspective, its strength lies in being compliant with Zowe, an open source development framework from the Open Mainframe Project. This makes IBM Z a valuable open platform for an enterprise DevOps infrastructure.

Zowe is the first open source framework for z/OS. It enables DevOps teams to securely manage, control, script, and develop on the mainframe as they would on any other cloud platform. It was launched as a collaboration among initial contributors IBM, CA Technologies, and Rocket Software, and is supported by the Open Mainframe Project. The goal is to cultivate the next generation of mainframe developers, whether or not they have Z experience. Zowe promotes a faster team on-ramp to productivity, collaboration, knowledge sharing, and communication.

This is the critical thing about Zowe: you don’t need Z platform experience. Open source developers and programmers can use a wide range of popular open source tools, languages, and technologies–the tools they already know. Sure, it would be nice to find an experienced z/OS developer, but that is increasingly unlikely, making Zowe a much better bet.

According to the Open Mainframe Project, IBM’s initial contribution to Zowe was an extensible z/OS framework that provides REST-based services and APIs that allow even inexperienced developers to rapidly use new technology, tools, languages, and modern workflows with z/OS.
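
Those REST-based services make z/OS approachable from any language a developer already knows. As a hedged sketch (the host name and credentials are hypothetical placeholders, and the endpoint shown follows the z/OSMF jobs REST interface that Zowe builds on), listing a user’s batch jobs from plain Python might look like:

```python
import base64
import json
import urllib.request

def build_jobs_request(host: str, user: str, password: str) -> urllib.request.Request:
    """Build a GET against a z/OSMF-style jobs REST API, the kind of
    service Zowe exposes. Host and credentials are placeholders."""
    url = f"https://{host}/zosmf/restjobs/jobs?owner={user.upper()}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Basic {token}",
                 "X-CSRF-ZOSMF-HEADER": ""},  # z/OSMF expects this header
        method="GET")

def job_names(response_body: str) -> list:
    """Pull job names out of the JSON array the API returns."""
    return [job["jobname"] for job in json.loads(response_body)]

req = build_jobs_request("mainframe.example.com", "ibmuser", "secret")
# urllib.request.urlopen(req) would issue the call against a real host.
sample = '[{"jobname": "HELLO", "jobid": "JOB00123", "status": "OUTPUT"}]'
```

No 3270 data streams, no ISPF panels: just HTTP, JSON, and whatever tooling the developer already uses, which is exactly the on-ramp Zowe is after.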

IBM continues to invest in the open source environment through Zowe and other open source initiatives. Zowe also has help from Rocket Software, which provides a web user interface, and CA, which handles the command line interface. You can find more about Zowe here.

IBM introduced Linux, a leading open source technology, to the Z over 20 years ago. Since then, it has expanded the range of the Z through open source tools that can be combined with products developed by different communities. This can create unintentional regulatory and security risks. Rocket Open AppDev for Z helps mitigate these risks, offering a solution that provides developers with a package of the open tools and languages they want, along with the security, easy management, and support IBM Z customers require.

“We wanted to solve three common customer challenges that have prevented enterprises from leveraging the flexibility and agility of open software within their mainframe environment: user and system programmer experience, security, and version latency,” said Peter Fandel, Rocket’s Product Director of Open Software for Z. “With Rocket Open AppDev for Z, we believe we have provided an innovative, secure path forward for our customers. Businesses can now extend the mainframe’s capabilities through the adoption of open source software, making IBM Z a valuable platform for their DevOps infrastructure.”

But there is an even bigger question here, one that Rocket turned to IDC to answer: should businesses that run mission-critical workloads on IBM Z or IBM i remain on these platforms and modernize them by leveraging the innovative tools that exist today, or replatform by moving to an alternative on-premises solution, typically x86, or to the cloud?

IDC surveyed more than 440 businesses that have either modernized their IBM Z or IBM i platforms or replatformed. The results: modernizers incurred lower costs for their modernization initiatives than replatformers; they were more satisfied with the new capabilities of their modernized platform; and they achieved a new baseline in which they paid less for hardware, software, and staffing. There is much more of interest in this study, which DancingDinosaur will explore in the weeks or months ahead.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

IBM Wazi Cloud-Native DevOps for Z

June 12, 2020

In this rapidly evolving world of hybrid and multicloud systems, organizations must quickly evolve their processes and tooling to address business needs. Foremost among those are the development environments of organizations that include IBM Z as part of their hybrid solution, says Sanjay Chandru, Director, IBM Z DevOps.

IBM’s goal, then, is to provide a cloud-native developer experience for IBM Z that is consistent and familiar to all developers. That requires cross-platform consistency in tooling for application programmers on Z, who will need to deliver innovation faster and without the backlogs that have been expected in the past.

Wazi, along with OpenShift, is another dividend from IBM’s purchase of Red Hat. Here is where IBM Wazi for Red Hat CodeReady Workspaces comes in: an add-on to IBM Cloud Pak for Applications. It allows developers to use an industry-standard integrated development environment (IDE), such as Microsoft Visual Studio Code (VS Code) or Eclipse, to develop and test IBM z/OS applications in a containerized, virtual z/OS environment on Red Hat OpenShift running on x86 hardware. The container creates a sandbox.

The combination goes beyond what Zowe, the Open Mainframe Project’s open source framework for z/OS, offers, enabling Z development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Developers who are not used to z/OS and IBM Z, which is most developers, can now become productive faster in a familiar and accessible working environment, effectively improving DevOps adoption across the enterprise.

As IBM explained: Wazi integrates seamlessly into a standard, Git-based open toolchain to enable continuous integration and continuous delivery (CI/CD) as part of a fully hybrid DevOps process encompassing distributed and Z systems.

IBM continues: Wazi is offered with deployment choices so that organizations can flexibly rebalance entitlement over time based on their business needs. In short, an organization can protect and leverage its IBM Z investments with robust, standard development capabilities that encompass IBM Z and multicloud platforms.

The payoff: developers new to z/OS and IBM Z, which is most of the developer world, become productive faster in a familiar working environment while the organization protects and leverages its Z investments across multicloud platforms.

As one large IBM customer put it: “We want to make the mainframe accessible. Use whatever tool you are comfortable with – Eclipse, IDz, Visual Studio Code. All of these things we are interested in to accelerate our innovation on the mainframe.”

An IT service provider added in IBM’s Wazi announcement: “Our colleagues in software development have been screaming for years for a dedicated testing environment that can be created and destroyed rapidly.” Well, now they have it in Wazi.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

IBM Brings Red Hat Ansible to Z

March 23, 2020

From the day IBM announced its $34 billion acquisition of Red Hat last October, DancingDinosaur had two questions: 1) how could the company recoup its investment in the open source software company, and 2) what did it imply for the future of the z?


With about a billion dollars in open source revenue, Red Hat was the leading open source software player, but to get from a billion dollars to $34 billion is a big leap. In February, IBM announced Red Hat’s OpenShift middleware would work with the z and LinuxONE. OpenShift is a DevOps play for hybrid cloud environments, a big interest of IBM.

Along with the availability of OpenShift for z, IBM also announced that Cloud Pak for Applications is available for the z and LinuxONE. In effect, this supports the modernization of existing apps and the building of new cloud-native apps. It will be further enhanced by the delivery of the new Cloud Paks for the z and LinuxONE that IBM announced last summer. Clearly the z is not being abandoned.

Last week, IBM announced the availability of Red Hat Ansible Certified Content for IBM Z, enabling Ansible users to automate IBM Z applications and IT infrastructure. This means that no matter what mix of infrastructure or clients you are working with, IBM is bringing automation for the z, helping you manage and integrate it across the hybrid environment through a single control panel.

Ansible functionality for z/OS, according to IBM,  will empower z clients to simplify configuration and access resources, leverage existing automation, and streamline automation of operations using the same technology stack that they can use across their entire enterprise. Delivered as a fully supported enterprise-grade solution via Content Collections, Red Hat Ansible Certified Content for z provides easy automation building blocks to accelerate the automation of z/OS and z/OS-based software. These initial core collections include connection plugins, action plugin modules, and a sample playbook to automate tasks for z/OS such as creating data sets, retrieving job output, and submitting jobs.
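
IBM’s announcement does not reproduce that sample playbook, but a sketch of what those building blocks look like follows, using module names from the ibm_zos_core collection the Certified Content ships; the data set names and JCL member are hypothetical, and option names should be checked against the collection’s documentation:

```yaml
---
- name: Automate common z/OS tasks (illustrative)
  hosts: zos_host
  collections:
    - ibm.ibm_zos_core
  tasks:
    - name: Create a partitioned data set
      zos_data_set:
        name: HLQ.DEMO.PDS        # hypothetical data set name
        type: pds

    - name: Submit a batch job from a cataloged data set
      zos_job_submit:
        src: HLQ.DEMO.JCL(HELLO)  # hypothetical JCL member
        location: DATA_SET
      register: job

    - name: Retrieve the job output
      zos_job_output:
        job_id: "{{ job.jobs[0].job_id }}"
```

The point of the Certified Content is that this is ordinary Ansible: the same YAML, inventory, and playbook mechanics a team already uses for Linux or cloud hosts, now driving z/OS.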

For those not familiar with Ansible: as Wikipedia explains, Ansible is an open source software provisioning, configuration management, and application-deployment tool. For more on Ansible, see https://en.wikipedia.org/wiki/Ansible_(software).

IBM needed to modify Ansible to work with the z and hybrid clouds; the result is Red Hat Ansible Certified Content for IBM Z. The Certified Content will be available in Automation Hub, with an upstream open source version offered on Ansible Galaxy.

Over the last several months, IBM improved the z developer experience by bringing DevOps and industry-standard tools like Git and Jenkins to the z. For instance, it announced IBM Z Open Editor, IBM Developer for z/OS V14.2.1, and Zowe, an open source framework for z/OS, which DancingDinosaur covered in August 2018. In February IBM announced the availability of Red Hat OpenShift on IBM Z, which enables developers to run, build, manage, and modernize cloud-native workloads on their choice of architecture.

Now, Ansible allows developers and operations to break down traditional internal and historical technology silos to centralize automation — while leveraging the performance, scale, control and security provided by the z. 

What more goodies for z will IBM pull from its Red Hat acquisition? Stockholders should hope it is worth at least $34 billion.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Compuware Expedites DevOps on Z

July 13, 2018

Compuware continues its quarterly introduction of new mainframe capabilities, a process that has been going on for several years now. The latest advance, Topaz for Enterprise Data, promises to expedite the way DevOps teams access the data they need while reducing complexity, labor, and risk through extraction, masking, and visualization of mainframe data. The result: the ability to leverage all available data sources to deliver high-value apps and analytics fast.

Topaz for Enterprise Data expedites data access for DevOps

The days when mainframe shops could take a methodical and deliberate approach—painstakingly slow—to accessing enterprise data have long passed. Your DevOps teams need to dig the value out of that data and put it into the hands of managers and LOB teams fast, in hours, maybe just minutes so they can jump on even the most fleeting opportunities.

Fast, streamlined access to high-value data has become an urgent concern as businesses seek competitive advantages in a digital economy while fulfilling increasingly stringent compliance requirements. Topaz for Enterprise Data enables developers, QA staff, operations teams, and data scientists at all skill and experience levels to ensure they have immediate, secure access to the data they need, when they need it, in any format required.

It starts with data masking, which in just the last few months has become a critical concern with the rollout of GDPR across the EU. GDPR grants considerable protections and options to the people whose data your systems have been collecting. Now you need to protect personally identifiable information (PII) and comply with regulatory mandates like GDPR and whatever similar regs will come here.

Regs like these don’t apply just to your primary transaction data. You need data masking with all your data, especially when large, diverse datasets of high business value residing on the mainframe contain sensitive business or personal information.

This isn’t going to go away anytime soon so large enterprises must start transferring responsibility for the stewardship of this data to the next generation of DevOps folks who will be stuck with it. You can bet somebody will surely step forward and say “you have to change every instance of my data that contains this or that.” Even the most expensive lawyers will not be able to blunt such requests. Better to have the tools in place to respond to this quickly and easily.

The newest tool, according to Compuware, is Topaz for Enterprise Data. It will enable even a mainframe-inexperienced DevOps team to:

  • Readily understand relationships between data even when they lack direct familiarity with specific data types or applications, to ensure data integrity and resulting code quality.
  • Quickly generate data for testing, training, or business analytics purposes that properly and accurately represents actual production data.
  • Ensure that any sensitive business or personal data extracted from production is properly masked for privacy and compliance purposes, while preserving essential data relationships and characteristics.
  • Convert file types as required.
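
Compuware does not publish Topaz’s masking algorithm, but the third bullet’s requirement (mask sensitive values while preserving essential data relationships) can be sketched with deterministic masking: the same input always produces the same masked output, so joins across extracts still line up. The key and field names below are illustrative only.

```python
import hashlib
import hmac

SECRET = b"masking-key"  # in practice, a securely managed secret

def mask_id(value: str, digits: int = 8) -> str:
    """Deterministically mask an identifier. Identical inputs yield
    identical masked values, so referential integrity survives."""
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return str(int(digest, 16) % 10**digits).zfill(digits)

# The same customer number masks identically in both extracts,
# so test data still joins correctly without exposing real values.
accounts = [{"cust": "123456"}, {"cust": "654321"}]
orders = [{"cust": "123456"}]
masked_accounts = [{"cust": mask_id(r["cust"])} for r in accounts]
masked_orders = [{"cust": mask_id(r["cust"])} for r in orders]
```

A production tool adds much more (format preservation, cross-platform consistency, audit trails), but the core idea is the same: masked data that behaves like the original for testing purposes.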

Topaz users can access all these capabilities from within Topaz’s familiar Eclipse development environment, eliminating the need to learn yet another new and complicated tool.

Those who experience it apparently like what they find. Noted Lynn Farley, Manager of Data Management at TCF Bank: “Testing with production-like obfuscated data helps us develop and deliver better quality applications, as well as remain compliant with data privacy requirements, and Topaz provides our developers with a way to implement data privacy rules to mask multiple data types across platforms and with consistent results.”

Rich Ptak, principal of IT analyst firm Ptak Associates similarly observed: “Leveraging a modern interface for fast, simple access to data for testing and other purposes is critical to digital agility,” adding it “resolves the long-standing challenge of rapidly getting value from the reams of data in disparate sources and formats that are critical to DevOps and continuous improvement.”

“The wealth of data that should give large enterprises a major competitive advantage in the digital economy often instead becomes a hindrance due to the complexity of sourcing across platforms, databases, and formats,” said Chris O’Malley, CEO of Compuware. As DancingDinosaur sees it, by removing such obstacles Compuware reduces the friction between enterprise data and business advantage.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

High Cost of Ignoring Z’s Pervasive Encryption

May 17, 2018

That cost was spelled out at IBM’s Think conference this past spring. David Bruce, who leads IBM’s strategies for security on IBM Z and LinuxONE, writes that data breaches are expensive, costing $3.6 million on average, and that hoping to avoid one by doing business as usual is a bad bet. Bruce reports breaches are increasingly likely: an organization has a 28 percent chance of being breached in the next 24 months. You can find Bruce’s comments on security and pervasive encryption here.

9 million data records were compromised in 2015

Were any of those 9 million records from your organization? Did you end up on the front page of the newspaper? To stay out of the data breach headlines, organizations require security solutions that protect enterprise and customer data at minimal cost and effort, Bruce observes.

Encryption is the preferred solution, but it is costly, cumbersome, labor-intensive, and hit-or-miss. It is hit-or-miss because the overhead involved forces organizations to choose what to encrypt and what to skip. You have to painstakingly classify the data in terms of risk, which takes time and only adds to the costs. Outside of critical revenue transactions or key intellectual property—no-brainers—you will invariably choose wrong and miss something you will regret when it shows up on the front page of the New York Times.

Adding to the cost is the compliance runaround. Auditors are scheduled to visit or maybe they aren’t even scheduled and just drop in; you now have to drop whatever your staff was hoping to do and gather the necessary documentation to prove your data is safe and secure.  Do you really need this? Life is too short as it is.

You really want to put an end to the entire security compliance runaround and all the headaches it entails. But more than that, you want protected, secure data; all data, all the time.  When someone from a ransomware operation calls asking for hundreds or thousands of dollars to get your data back you can laugh and hang up the phone. That’s what Bruce means when he talks about pervasive encryption. All your data is safely encrypted with its keys protected from the moment it is created until the moment it is destroyed by you. And you don’t have to lift a finger; the Z does it all.

That embarrassing news item about a data breach? It won’t happen to you either. Most important of all, customers will never see it and get upset.

In fact, at Think, Forrester discussed the customer-obsessed approach that leading organizations are adopting to spur growth. To obsess over customers, explained Bruce, means taking great care to protect the customer’s sensitive data, which forms the cornerstone of Forrester’s customer-obsessed zero trust security framework. The framework includes, among other security elements, encryption of all data across the enterprise. By enabling the Z’s built-in pervasive encryption and automatic key protection, you can ignore the rest of Forrester’s framework.

Pervasive encryption, unique to Z, addresses the security challenges while helping you thrive in this age of the customer. At Think, Michael Jordan, IBM Distinguished Engineer for IBM Z Security, detailed how pervasive encryption represents a paradigm shift in security, reported Bruce. Previously, selective field-level encryption was the only feasible way to secure data, but it was time-, cost-, and resource-intensive – and it left large portions of data unsecured.

Pervasive encryption, however, offers a solution capable of encrypting data in bulk, making it possible and practical to encrypt all data associated with an application, database, and cloud service – whether on premises or in the cloud, at-rest or in-flight. This approach also simplifies compliance by eliminating the need to demonstrate compliance at the field level. Multiple layers of encryption – from disk and tape up through applications – provide the strongest possible defense against security breaches. The high levels of security enabled by pervasive encryption help you promote customer confidence by protecting their data and privacy.
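
The paradigm shift is easiest to see side by side. The sketch below contrasts the two approaches with a toy keystream cipher (illustration only; real pervasive encryption uses hardware-accelerated AES with protected keys, and the record and key here are invented):

```python
import json
from hashlib import sha256

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR keystream cipher, for illustration only -- applying it
    # twice with the same key recovers the original bytes.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

record = {"name": "A. Customer", "card": "4111-1111", "note": "renewal"}

# Field-level approach: someone must classify which fields are
# sensitive, and anything left off the list stays in the clear.
field_level = {k: (toy_encrypt(v.encode(), b"k") if k == "card" else v)
               for k, v in record.items()}

# Bulk (pervasive) approach: the whole record is encrypted.
# No classification step, so nothing gets missed.
bulk = toy_encrypt(json.dumps(record).encode(), b"k")
```

In the field-level version the unclassified "note" field remains readable, which is exactly the kind of miss the bulk approach eliminates.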

If you have a Z and have not enabled pervasive encryption, you are putting your customers and your organization at risk. I am curious; please drop me a note explaining why.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Launches New IoT Collaborative Initiative

February 23, 2017

Collaboration partners can pull hundreds of millions of dollars in new revenue from IoT, according to IBM’s recent IoT announcement. Having reached what it describes as a tipping point in IoT innovation, the company now boasts over 6,000 clients and partners around the world, many of whom want to join its new global Watson IoT center to co-innovate. Already Avnet, BNP Paribas, Capgemini, and Tech Mahindra will collocate development teams at the IBM Munich center to work on IoT collaborations.

IBM Opens New Global Center for Watson IoT

The IBM center also will act as an innovation space for the European IoT standards organization EEBus.  The plan, according to Harriet Green, General Manager, IBM Watson IoT, Cognitive Engagement and Education (pictured above left), calls for building a new global IoT innovation ecosystem that will explore how cognitive and IoT technologies will transform industries and our daily lives.

IoT and, more recently, cognitive are naturals for the z System, and POWER Systems have been the platform for natural language processing and cognitive computing since Watson won Jeopardy. With the latest enhancements IBM has brought to the z in the form of on-premises cognitive and machine learning, the z should assume an important role as it gathers, stores, and processes IoT data for cognitive analysis. DancingDinosaur first reported on this late in 2014 and again just last week. As IoT and cognitive workloads ramp up on z, don’t be surprised to see monthly workload charges rise.

Late last year IBM announced that car maker BMW will collocate part of its research and development operations at IBM’s new Watson IoT center to help reimagine the driving experience. Now, IBM is announcing four more companies that have signed up to join its special industry “collaboratories” where clients and partners work together with 1,000 Munich-based IBM IoT experts to tap into the latest design thinking and push the boundaries of the possible with IoT.

Let’s look at the four newest participants, starting with Avnet. According to IBM, Avnet, an IT distributor and global IBM partner, will open a new joint IoT Lab within IBM’s Watson IoT HQ to develop, build, demonstrate, and sell IoT solutions powered by IBM Watson. Working closely with IBM’s leading technologists and IoT experts, Avnet also plans to enhance its IoT technical expertise through hands-on training and on-the-job learning. Avnet’s team of IoT and analytics experts will also partner with IBM on joint business development opportunities across multiple industries, including smart buildings, smart homes, industry, transportation, medical, and consumer.

As reported by BNP Paribas, Consorsbank, its retail digital bank in Germany, will partner with IBM’s new Watson IoT center. The company will collocate a team of solution architects, developers, and business development personnel at the Watson facility. Together with IBM’s experts, they will explore how IoT and cognitive technologies can drive transformation in the banking industry and help innovate new financial products and services, such as investment advice.

Similarly, global IT consulting and technology services provider Capgemini will collocate a team of cognitive IoT experts at the Watson center. Together they will help customers maximize the potential of Industry 4.0 and develop and take to market sector-specific cognitive IoT solutions. Capgemini plans a close link between its Munich Applied Innovation Exchange and IBM’s new Customer Experience zones to collaborate with clients in an interactive environment.

Finally, the Indian multinational provider of enterprise and communications IT and networking technology Tech Mahindra, is one of IBM’s Global System Integrators with over 3,000 specialists focused on IBM technology around the world. The company will locate a team of six developers and engineers within the Watson IoT HQ to help deliver on Tech Mahindra’s vision of generating substantial new revenue based on IBM’s Watson IoT platform. Tech Mahindra will use the center to co-create and showcase new solutions based on IBM’s Watson IoT platform for Industry 4.0 and Manufacturing, Precision Farming, Healthcare, Insurance and Banking, and automotive.

To facilitate connecting the z to IoT, IBM offers a simple recipe requiring four basic ingredients and four steps: Texas Instruments’ SensorTag, a Bluemix account, IBM z/OS Connect Enterprise Edition, and a back-end service like CICS. Start by exposing an existing z Systems application as a RESTful API; this is where z/OS Connect comes in. Then enable your SensorTag device for Watson IoT Quick Start. From there, connect the cloud to your on-premises hybrid cloud. Finally, enable the published IoT data to trigger a RESTful API. It sounds straightforward, but—full disclosure—DancingDinosaur has not tried it for lack of the necessary pieces. If you try it, please tell DancingDinosaur how it works (info@radding.net). Good luck.
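
The final step of that recipe, sensor data triggering a REST API on the z, reduces to an ordinary HTTP POST. A minimal sketch follows; the host, service path, and payload fields are hypothetical placeholders, not values from IBM’s recipe:

```python
import json
import urllib.request

def build_iot_post(host: str, reading: dict) -> urllib.request.Request:
    """Build the REST call that delivers published sensor data to a
    z/OS Connect exposed back-end service. Host and path are
    placeholders -- substitute your own z/OS Connect EE endpoint."""
    url = f"https://{host}/zosConnect/services/tempAlert/invoke"
    body = json.dumps(reading).encode()
    return urllib.request.Request(
        url, data=body,
        headers={"Content-Type": "application/json"}, method="POST")

req = build_iot_post("zosconnect.example.com",
                     {"sensorId": "sensortag-01", "tempC": 41.7})
# urllib.request.urlopen(req) would deliver the reading to the
# back-end CICS service that z/OS Connect exposes as a REST API.
```

The z application never sees Bluetooth or MQTT; z/OS Connect presents it as just another JSON-over-HTTP service, which is the whole point of the recipe.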

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

AI and IBM Watson Fuel Interest in App Dev among Mainframe Shops

December 1, 2016

BMC’s 2016 mainframe survey, covered by DancingDinosaur here, both directly and indirectly pointed to increased activity around data center applications. Mainly this took the form of increased interest in Java on the z as a platform for new applications. Specifically, 72% of overall respondents reported using Java today, while 88% reported plans to increase their use of Java. At the same time, the use of Linux on the z has been growing steadily year over year: 41% in 2014, 48% in 2015, and 52% in 2016. Both trends point to heightened interest in application development, management, and change.

IBM’s Project DataWorks uses Watson Analytics to create complex visualizations with one line of code

IBM has been feeding this kind of AppDev interest with its continued enhancement of Bluemix and the rollout of the Bluemix Garage method. More recently, it announced a partnership with Topcoder, a global software development community of more than one million designers, developers, data scientists, and competitive programmers, with the aim of stimulating developers looking to harness the power of Watson to create the next generation of AI apps, APIs, and solutions.

According to Forrester VP and Principal Analyst JP Gownder in the IBM announcement, by 2019 automation will change every job category by at least 25%. Additionally, IDC predicts that 75% of developer teams will include cognitive/AI functionality in one or more applications by 2018. The industry is driving toward a new level of computing potential not witnessed since the introduction of big data.

To further drive the cultivation of this new style of developer, IBM is encouraging participation in Topcoder-run hackathons and coding competitions. Here developers can easily access a range of Watson services – such as Conversation, Sentiment Analysis, or speech APIs – to build powerful new tools with the help of cognitive computing and artificial intelligence. Topcoder hosts 7,000 code challenges a year and has awarded $80 million to its community. In addition, now developers will have the opportunity to showcase and monetize their solutions on the IBM Marketplace, while businesses will be able to access a new pipeline of talent experienced with Watson and AI.

In addition to a variety of academic partnerships, IBM recently announced the introduction of an AI Nano degree program with Udacity to help developers establish a foundational understanding of artificial intelligence.  Plus, IBM offers the IBM Learning Lab, which features more than 100 curated online courses and cognitive uses cases from providers like Codeacademy, Coursera, Big Data University, and Udacity. Don’t forget, IBM DeveloperWorks, which offers how-to tutorials and courses on IBM tools and open standard technologies for all phases of the app dev lifecycle.

To keep the AI development push going, recently IBM unveiled the experimental release of Project Intu, a new system-agnostic platform designed to enable embodied cognition. The new platform allows developers to embed Watson functions into various end-user devices, offering a next generation architecture for building cognitive-enabled experiences.

Project Intu is accessible via the Watson Developer Cloud and also available on Intu Gateway and GitHub. The initiative simplifies the process for developers wanting to create cognitive experiences in various form factors such as spaces, avatars, robots, or IoT devices. In effect, it extends cognitive technology into the physical world. The platform enables devices to interact more naturally with users, triggering different emotions and behaviors and creating more meaningful and immersive experiences for users.

Developers can simplify and integrate Watson services, such as Conversation, Language, and Visual Recognition with the capabilities of the device to act out the interaction with the user. Instead of a developer needing to program each individual movement of a device or avatar, Project Intu makes it easy to combine movements that are appropriate for performing specific tasks like assisting a customer in a retail setting or greeting a visitor in a hotel in a way that is natural for the visitor.

Project Intu is changing how developers make architectural decisions about integrating different cognitive services into an end-user experience – such as what actions the systems will take and what will trigger a device’s particular functionality. Project Intu offers developers a ready-made environment on which to build cognitive experiences running on a wide variety of operating systems – from Raspberry PI to MacOS, Windows to Linux machines.

With initiatives like these, the growth of cognitive-enabled applications will likely accelerate. As IBM reports, IDC estimates that “by 2018, 75% of developer teams will include Cognitive/AI functionality in one or more applications/services.” This is a noticeable jump from last year’s prediction that 50% of developers would leverage cognitive/AI functionality by 2018.

For those z data centers surveyed by BMC that worried about keeping up with Java and big data, AI adds yet an entirely new level of complexity. Fortunately, the tools to work with it are rapidly falling into place.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads. In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ, and WebSphere Application Server (WAS). An eligible transaction is one classified as public-cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service. Public cloud workloads are defined as transactions identified as originating from a recognized public cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, and IBM Bluemix.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? zWPC reportedly reduces eligible hourly values by 60 percent, producing an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice, once all the calculations and fine print are considered, amounts to a guess at this point. But at least you’ll save something. The first billing period eligible under this program starts Dec. 1, 2016.
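The shape of that discount is simple enough to sketch. The 60 percent reduction of eligible hourly values comes from IBM's announcement; everything else here (the function name, the sample MSU figures, the assumption that the discount applies only to the cloud-eligible portion of the hour) is illustrative, not IBM's actual billing formula.

```python
# Back-of-the-envelope sketch of a zWPC-style hourly adjustment.
# Only the cloud-eligible portion of the hour's MSU is discounted.

DISCOUNT = 0.60  # 60 percent reduction of eligible hourly values

def adjusted_hourly_msu(total_msu: float, cloud_msu: float) -> float:
    """Return the adjusted Sub-Capacity value for one reporting hour."""
    return total_msu - cloud_msu * DISCOUNT

# A hypothetical hour: 500 MSU total, 200 of them from public-cloud
# transactions classified via WLM.
print(adjusted_hourly_msu(500, 200))  # 380.0
```

Since monthly Sub-Capacity charges key off the rolling 4-hour average peak, shaving cloud-driven MSU out of the peak hours is where the real savings would show up.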

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode. The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Variations on WLC, including variable WLC (VWLC), Advanced WLC (AWLC), and Entry WLC (EWLC), align with most of the z machines introduced over the past couple of years. The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques. Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount. Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly. BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini-industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
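That GCL downside is easy to see with a toy model. This is not WLM's actual capping algorithm (which distributes capacity over time using weights); it is a deliberately simplified first-come-first-served sketch, with made-up LPAR names and MSU figures, showing why a group cap bounds the combined total rather than any individual member.

```python
# Toy illustration (not IBM WLM's algorithm) of a Group Capacity Limit:
# the cap bounds the group's combined MSU, so one greedy LPAR can leave
# nothing for its siblings.

def serve_group(demands: dict, group_cap: int) -> dict:
    """Grant MSU demand in arrival order until the group cap is exhausted."""
    granted, remaining = {}, group_cap
    for lpar, demand in demands.items():
        granted[lpar] = min(demand, remaining)
        remaining -= granted[lpar]
    return granted

# A rogue transaction drives LPAR_A's demand to 900 MSU against a
# 400 MSU group cap, starving LPAR_B and LPAR_C entirely.
print(serve_group({"LPAR_A": 900, "LPAR_B": 50, "LPAR_C": 50}, 400))
# {'LPAR_A': 400, 'LPAR_B': 0, 'LPAR_C': 0}
```

Pairing a per-LPAR Defined Capacity with the group limit is the usual defense: the individual cap stops any single member from monopolizing the shared pool.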

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.



IBM’s DeepFlash 150 Completes Its Flash Lineup for Now

July 29, 2016

Two years ago DancingDinosaur wrote about new IBM Flash storage for the mainframe. That was about the DS8870, featuring 6-nines (99.9999) of availability and real-time-compression. Then this past May DancingDinosaur reported on another new IBM all-flash initiative, including the all-flash IBM DS8888 for the z, which also boasts 6-nines availability. Just this week IBM announced it is completing its flash lineup with the IBM DeepFlash 150, intended as a building block for SDS (software defined storage) infrastructures.

IBM DeepFlash 150, courtesy of IBM

As IBM reports, the DeepFlash 150 does not use conventional solid-state drives (SSD). Instead, it relies on a systems-level approach that enables organizations to manage much larger data sets without having to manage individual SSD or disk drives. DeepFlash 150 comes complete with all the hardware necessary for enterprise and hyper-scale storage, including up to 64 purpose-engineered flash cards in a 3U chassis and 12-Gbps SAS connectors for up to eight host servers. The wide range of IBM Spectrum Storage and other SDS solutions available for DeepFlash 150 provides flash-optimized scale out and management along with large capacity for block, file and object storage.

The complication for z System shops is that you access the DeepFlash 150 through IBM Spectrum Scale. Apparently you can’t just plug the DeepFlash 150 into the z the way you would plug in the all flash DS8888. IBM Spectrum Scale works with Linux on z Systems servers or IBM LinuxONE systems running RHEL or SLES. Check out the documentation here.

As IBM explains in the Red Book titled IBM Spectrum Scale (GPFS) for Linux on z Systems: IBM Spectrum Scale provides a highly available clustering solution that augments the strengths of Linux on z by helping the z data center control costs and achieve higher levels of quality of service. Spectrum Scale, based on IBM General Parallel File System (GPFS) technology, is a high performance shared-disk file management solution that provides fast, reliable access to data from multiple nodes in a cluster environment. Spectrum Scale also allows data sharing in a mixed platform environment, which can provide benefits in cloud or analytics environments by eliminating the need to transfer data across platforms. When it comes to the DeepFlash 150, IBM is thinking about hyperscale data centers.

Hyperscale data centers can’t absorb the costs of constructing, managing, maintaining and cooling massive hyperscale environments that use conventional mechanical storage, according to IBM. Those costs are driving the search for storage with a smaller physical footprint, lower costs, greater density, and, of course, much higher performance.

Enter DeepFlash 150, which introduces what IBM considers breakthrough economics for active data sets. The basic DeepFlash 150 hardware platform is priced under $1/GB. For big data deployments IBM recommends IBM Spectrum Scale with DeepFlash 150, providing customers with the overlying storage services and functionality critical for optimization of their big data workloads.

But even at $1/GB DeepFlash 150 isn’t going to come cheap. For starters consider how many gigabytes are in the terabytes or petabytes you will want to install. You can do the math. Even at $1/GB this is going to cost. Then you will need IBM Spectrum Scale. With DeepFlash 150 IBM did achieve extreme density of up to 170TB per rack unit, which adds up to a maximum 7PB of flash in a single rack enclosure.
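Doing that math takes three lines. The $1/GB price, 170 TB per rack unit, and 7 PB per rack figures are IBM's stated numbers from above; the arithmetic below uses decimal (base-10) units, as storage vendors do, and deliberately ignores Spectrum Scale licensing, support, and everything else that lands on the invoice.

```python
# List-price arithmetic only: capacity times roughly $1/GB.
GB_PER_TB = 1_000
TB_PER_PB = 1_000

def flash_cost_dollars(capacity_tb: float, price_per_gb: float = 1.0) -> float:
    """Raw flash cost at a given $/GB, ignoring software and support."""
    return capacity_tb * GB_PER_TB * price_per_gb

# One fully loaded rack at the stated 7 PB maximum:
print(flash_cost_dollars(7 * TB_PER_PB))  # 7000000.0, i.e., $7 million

# A more modest 170 TB (one rack unit's worth):
print(flash_cost_dollars(170))  # 170000.0
```

So even before Spectrum Scale enters the picture, a petabyte-class DeepFlash 150 deployment is a seven-figure proposition.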

IBM Spectrum Scale and the DeepFlash 150 are intended to support a wide range of file, object and Hadoop Distributed File System (HDFS) analytics workloads. According to IBM, as a true SDS solution IBM Spectrum Scale can utilize any appropriate hardware and is designed specifically to maximize the benefits of hyperscale storage systems like DeepFlash 150. Using a scale-out architecture, IBM Spectrum Scale can add servers or multiple storage types and incorporate them automatically into a single managed resource to maximize performance, efficiency, and data protection.

Although DeepFlash 150 can be used with a private cloud IBM seems to be thinking more in terms of hybrid clouds. To address today’s need for seamlessly integrating high-performance enterprise storage such as DeepFlash 150 with the nearly unlimited resources and capabilities of the cloud, IBM Spectrum Scale offers transparent cloud tiering to place data on cloud-based object storage or in a public cloud service. As IBM explains, the transparent cloud tiering feature of IBM Spectrum Scale can connect on-premises storage such as DeepFlash 150 directly to object storage or a commodity-priced cloud service. This allows enterprises to simultaneously leverage the economic, collaboration, and scale benefits of both on-premises and cloud storage while providing a single, powerful view of all data assets.

A Tech Target report on enterprise flash storage profiled 15 flash storage product lines. In general, the products claim maximum read IOPS ranging from 200,000 to 9 million, peak read throughput from 2.4 GBps to 46 GBps, and read latencies from 50 microseconds to 1 millisecond. The guide comes packed with a ton of caveats. And that’s why DancingDinosaur doesn’t think the DeepFlash 150 is the end of IBM’s flash efforts. Stay tuned.

