Posts Tagged ‘OpenStack’

IBM Wazi cloud-native devops for Z

June 12, 2020

In this rapidly evolving world of hybrid and multicloud systems, organizations must quickly evolve their processes and tooling to address business needs. Organizations whose development environments include IBM Z as part of their hybrid solution face this challenge most acutely, says Sanjay Chandru, Director, IBM Z DevOps.

IBM's goal, then, is to provide a cloud-native developer experience for IBM Z that is consistent and familiar to all developers. That requires cross-platform consistency in tooling for application programmers on Z, who will need to deliver innovation faster and without the backlogs that have been expected in the past.

Wazi, along with OpenShift, is another dividend from IBM's purchase of Red Hat. Here is where IBM Wazi for Red Hat CodeReady Workspaces comes in: an add-on to IBM Cloud Pak for Applications that allows developers to use an industry-standard integrated development environment (IDE), such as Microsoft Visual Studio Code (VS Code) or Eclipse, to develop and test IBM z/OS applications in a containerized, virtual z/OS environment on Red Hat OpenShift running on x86 hardware. The container creates a sandbox.

The combination with IBM Cloud Pak for Applications goes beyond what Zowe, the open source framework for z/OS from the Open Mainframe Project, offers to enable Z development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Developers who are not used to z/OS and IBM Z, which is most developers, now can become productive faster in a familiar and accessible working environment, effectively improving DevOps adoption across the enterprise.

As IBM explained: Wazi integrates seamlessly into a standard, Git-based open tool chain to enable continuous integration and continuous delivery (CI/CD) as part of a fully hybrid DevOps process encompassing distributed and Z systems.

IBM continues: Wazi is offered with deployment choices so that organizations can flexibly rebalance entitlement over time based on their business needs. In short, the organization can protect and leverage its IBM Z investments with robust and standard development capabilities that encompass IBM Z and multicloud platforms.


As one large IBM customer put it: “We want to make the mainframe accessible. Use whatever tool you are comfortable with – Eclipse / IDz / Visual Studio Code. All of these things we are interested in to accelerate our innovation on the mainframe.”

An IT service provider added in IBM’s Wazi announcement: “Our colleagues in software development have been screaming for years for a dedicated testing environment that can be created and destroyed rapidly.” Well, now they have it in Wazi.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

12 Ingredients for App Modernization

January 8, 2019

It is no surprise that IBM has become so enamored with the hybrid cloud. The worldwide public cloud services market is projected to grow 21.4 percent in 2018 to total $186.4 billion, up from $153.5 billion in 2017, according to Gartner.

The fastest-growing segment of the market is cloud system infrastructure services (IaaS), which is forecast to grow 35.9 percent in 2018 to reach $40.8 billion. Gartner expects the top 10 providers, often referred to as hyperscalers, to account for nearly 70 percent of the IaaS market by 2021, up from 50 percent in 2016.

Cloud computing is poised to become a “turbocharged engine powering digital transformation around the world,” states a recent Forrester report, Predictions 2019: Cloud Computing. Overall, the global cloud computing market, including cloud platforms, business services, and SaaS, will exceed $200 billion this year, expanding at more than 20%, the research firm predicts.

Venkat's recipe for app modernization; courtesy of IBM

Hybrid clouds, which include two or more cloud providers or platforms, are emerging as the preferred approach for enterprises.  Notes IBM: The digital economy is forcing organizations to a multi-cloud environment. Three of every four enterprises have already implemented more than one cloud. The growth of cloud portfolios in enterprises demands an agnostic cloud management platform — one that not only provides automation, provisioning and orchestration, but also monitors trends and usage to prevent outages. No surprise here; IBM just happens to offer hybrid cloud management.

At the start of 2019, the top seven cloud providers are AWS, Azure, Google Cloud, IBM Cloud, VMware Cloud on AWS, Oracle Cloud, and Alibaba Cloud. These top players shifted positions throughout 2018; expect more shifting this year and probably for years to come.

Driving this interest, at least for the next couple of years, is application modernization. Clients are discovering that the real value of cloud comes in a hybrid, multicloud world, says Meenagi Venkat, Vice President of Technical Sales & Solutioning at IBM Cloud. In this model, legacy applications are modernized with a real microservices architecture and with AI embedded in the application. He does not fully explain where the AI comes from and how it is embedded; maybe I missed something. Venkat wrote what he calls a 12-ingredient recipe for application modernization here. DancingDinosaur will highlight a couple of the ingredients below. Click the preceding link to see them all.

To begin, when you modernize a large portfolio of several thousand applications in a large enterprise, you need some common approaches. At the same time, the effort must allow teams to evolve to a microservices-based organization where each microservice is designed and delivered with great independence.

Start by fostering a startup culture. Fostering a startup culture that allows for fast failure is one of the most critical ingredients when approaching a large modernization program. The modernization will involve sunsetting some applications, breaking some down, and using partner services in others. A startup culture based on methods such as IBM Garage Method and Design Thinking will help bring the how-to of the culture shift.

Then, innovate via product design, Venkat continues. A team heavy with developers and no product folks is likely to focus on technical coolness rather than product innovation. Hence, these teams should be led by the product specialists who deliver the business case for new services or client experience.

And don't neglect security. Secure DevOps requires embedding security skills in the scrum teams with a product owner leading each team. Focusing on the product and designing in security (and compliance with various regimes) at the start allows microservices to scale and engenders trust in the data and AI layers. Venkat put this after design and the startup culture; in truth, it should be a key part of the startup culture.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Pushes Hybrid Cloud

December 14, 2018

Between quantum computing, blockchain, and hybrid cloud, IBM is pursuing a pretty ambitious agenda. Of the three, hybrid promises the most immediate payback. Cloud computing is poised to become a “turbocharged engine powering digital transformation around the world,” states a new Forrester report, Predictions 2019: Cloud Computing.

Of course, IBM didn't wait until 2019. It announced its purchase of Red Hat at the end of October 2018; DancingDinosaur covered it here a few days later. At that time IBM Chairman Ginni Rometty called the acquisition of Red Hat a game-changer. “It changes everything about the cloud market,” she noted. At a cost of $34 billion, 10x Red Hat's gross revenue, it had better be a game changer.

Forrester continues, predicting that in 2019 the cloud will reach its more interesting young adult years, bringing innovative development services to enterprise apps rather than just serving up cheaper, temporary servers and storage, which is how it has primarily grown over the past decade. Who hasn't turned to one or another cloud provider to augment its IT resources as needed, whether for backup, server capacity, or network?

As Forrester puts it: The six largest hyperscale cloud leaders — Alibaba, Amazon Web Services [AWS], Google, IBM, Microsoft Azure, and Oracle — will all grow larger in 2019, as service catalogs and global regions expand. Meanwhile, the global cloud computing market, including cloud platforms, business services, and SaaS, will exceed $200 billion in 2019, expanding at more than 20%, the research firm predicts.

Hybrid clouds, which combine two or more cloud providers or platforms, are emerging as the preferred way for enterprises to go. Notes IBM: The digital economy is forcing organizations to a multi-cloud environment. Three of every four enterprises have already implemented more than one cloud. The growth of cloud portfolios in enterprises demands an agnostic cloud management platform — one that not only provides automation, provisioning and orchestration, but that also monitors trends and usage to prevent outages.

Of course, IBM also offers a solution for this; the company’s Multicloud Manager runs on its IBM Cloud Private platform, which is based on Kubernetes container orchestration technology, described as an open-source approach for ‘wrapping’ apps in containers, and thereby making them easier and cheaper to manage across different cloud environments – from on-premises systems to the public cloud.

Along with hybrid clouds, containers are huge in Forrester's view. Powered by cloud-native open source components and tools, companies will start rolling out their own digital application platforms that will span clouds, include serverless and event-driven services, and form the foundation for modernizing core business apps for the next decade, the researchers observed. Next year's hottest trend, according to Forrester, will be making containers easier to deploy, secure, monitor, scale, and upgrade. “Enterprise-ready container platforms from Docker, IBM, Mesosphere, Pivotal, Rancher, Red Hat, VMware, and others are poised to grow rapidly,” the researchers noted.

This may not be as straightforward as the researchers imply. Each organization must select for itself which private cloud strategy is most appropriate, they note, and they anticipate greater private cloud structure emerging in 2019. Organizations face three basic private cloud paths: building internally using vSphere sprinkled with developer-focused tools and software-defined infrastructure; having the cloud environment custom-built with converged or hyperconverged software stacks to minimize the tech burden; or, lastly, building the cloud infrastructure internally with OpenStack, relying on the hard work of their own tech-savvy team. I'm sure there are any number of consultants, contractors, and vendors eager to step in and do this for you.

If you aren’t sure, IBM is offering a number of free trials that you can play with.

As Forrester puts it: Buckle up; for 2019 expect the cloud ride to accelerate.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft/Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware. But one thing seems clear: whoever wins will involve open source. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM's $34 billion bet.

What’s needed is something that promotes data portability and applications across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line.  Both believe they will be well positioned to address these issues to accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap their leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source, collaborating with Red Hat to help grow enterprise-class Linux.  More recently the two companies worked to bring enterprise Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM’s $19 billion hybrid cloud business.

The initial announcement made the point that Red Hat will join IBM's Hybrid Cloud team as a distinct unit, as IBM described, preserving the independence and neutrality of Red Hat's open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Also, Red Hat will continue to be led by Jim Whitehurst and Red Hat's current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose over different answers on relatively trivial points. Are you surprised? Let's be clear: nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat's OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM's interest in Red Hat,” he observes.

A few years ago, he continued, the pendulum seemed to shift from companies deploying more-traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, we've seen most mission-critical apps inside companies continue to run on a private cloud, but modernized by agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combo of private plus one or more public clouds (hybrid cloud) is here to stay, especially for enterprises. Red Hat's OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.


Can Zowe Bring Young Developers to the Z?

August 31, 2018

Are you ever frustrated by the Z? As powerful as they get, mainframes remain a difficult nut to crack, particularly for newcomers who have grown up with easier technologies. Even Linux on Z is not as simple or straightforward as on other platforms. This poses a problem for Z-based shops that are scrambling to replace retiring mainframers.

Shopping via smartphone (IBM – Jon Simon/Feature Photo Service)

Certainly other organizations, mainly mainframe ISVs like Compuware and Syncsort, have succeeded in extending the GUI deeper into the Z but that alone is not enough. It remains too difficult for newcomers to take their newly acquired computer talents and readily apply them to the mainframe. Maybe Zowe can change this.

And here's how: Recent surveys show that flexibility, agility, and speed are key. Single platforms are out; multi-platforms and multi-clouds are in. IBM's reply: let's bring things together with the announcement of Zowe, pronounced like Joey starting with a z. Zowe represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Launched with partners CA Technologies and Rocket Software along with the support of the Open Mainframe Project, the goal is to drive innovation for the community of next-generation mainframe developers and enable interoperability and scalability between products. Zowe promotes a faster team on-ramp to mainframe productivity, collaboration, knowledge sharing, and communication.

In short, IBM and partners are enabling users to access z/OS using a new open-source framework. Zowe, more than anything before, brings together generations of systems that were not designed to handle global networks of sensors and devices. Now, almost two decades since IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Zowe has four components:

  1. Zowe APIs: z/OS has a set of Representational State Transfer (REST) operating system APIs. These are made available by the z/OS Management Facility (z/OSMF). Zowe uses these REST APIs to submit jobs, work with the Job Entry Subsystem (JES) queue, and manipulate data sets; a minimal sketch of calling one of these services follows the list. Zowe Explorers are visual representations of these APIs that are wrapped in the Zowe web UI application. Zowe Explorers create an extensible z/OS framework that provides new z/OS REST services to enterprise tools and DevOps processes.
  2. Zowe API Mediation Layer: This layer has several key components, including an API Gateway, built using Netflix Zuul and Spring Boot technology, that forwards API requests to the appropriate corresponding service through the micro-service endpoint UI, and a REST API Catalog that publishes APIs and their associated documentation in a service catalog. There also is a Discovery Service, built on Eureka and Spring Boot technology, acting as the central point for the API Gateway. It accepts announcements of REST services while providing a repository for active services.
  3. Zowe Web UI: Named zLUX, the web UI modernizes and simplifies working on the mainframe and allows the user to create modern applications. This is what will enable non-mainframers to work productively on the mainframe. The UI works with the underlying REST APIs for data, jobs, and subsystems, and presents the information in a full-screen mode compared to the command-line interface.
  4. Zowe Command Line Interface (CLI): Allows users to interact with z/OS from a variety of other platforms, such as cloud or distributed systems, submit jobs, issue Time Sharing Option (TSO) and z/OS console commands, integrate z/OS actions into scripts, and produce responses as JSON documents. With this extensible and scriptable interface, you can tie in mainframes to the latest distributed DevOps pipelines and build in automation.
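To make item 1 concrete, here is a minimal sketch of driving the same z/OSMF REST jobs interface that Zowe builds on, using plain Python. The host name, credentials, and data set member are placeholders, and it assumes z/OSMF's REST services are enabled; real Zowe tooling wraps calls like these for you.

```python
import requests

# Placeholders; point these at your own z/OSMF instance.
ZOSMF = "https://zosmf.example.com/zosmf"
AUTH = ("ibmuser", "secret")
# z/OSMF requires this header on REST calls as a CSRF guard.
HEADERS = {"X-CSRF-ZOSMF-HEADER": ""}

# Submit a job from a cataloged data set member (hypothetical name).
resp = requests.put(
    f"{ZOSMF}/restjobs/jobs",
    json={"file": "//'IBMUSER.JCL(HELLO)'"},
    headers=HEADERS,
    auth=AUTH,
)
resp.raise_for_status()
job = resp.json()
print(job["jobname"], job["jobid"], job["status"])

# Poll the same interface for the job's status and return code.
status = requests.get(
    f"{ZOSMF}/restjobs/jobs/{job['jobname']}/{job['jobid']}",
    headers=HEADERS,
    auth=AUTH,
).json()
print(status["status"], status.get("retcode"))
```

The Zowe CLI in item 4 is essentially a packaged, scriptable front end to calls like these, which is what lets distributed DevOps pipelines treat z/OS like any other endpoint.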

The point of all this is to enable any developer to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services too.

The mainframe may be older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next generation applications and for mainframe shops desperately needing the new mission-critical applications customers are clamoring for. This should radically reduce the learning curve for the next generation while making experienced professionals more efficient. Start your free Zowe trial here. BTW, Zowe's code will be made available under the open-source Eclipse Public License 2.0.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


Compuware Brings Multi-Platform DevOps to the Z

January 19, 2018

The rush to DevOps for Z has started. IBM jumped on the bandwagon with an updated release of IBM Developer for z Systems (IDz) V14.1.1, which delivers new capabilities and product maintenance to users sooner than the traditional release models IBM previously used.

Even more recently, Compuware, which describes DevOps and the mainframe as the ultimate win-win, announced a program to advance DevOps on the mainframe with integrated COBOL code coverage metrics for multi-platform DevOps. This will make it possible for all developers in the organization to fluidly handle multi-platform code, including mainframe code, in a fast-delivery DevOps approach.

SonarSource-Compuware DevOps Dashboard

The new Compuware-SonarSource integrations are expected to ease the work of enterprise DevOps teams trying to track and validate code coverage of COBOL application testing, letting them do it with the same ease and the same processes they use with Java and other more mainstream code. This ability to automate code coverage tracking across platforms is yet another example of empowering enterprise IT to apply the same proven and essential Agile, DevOps, and Continuous Integration/Continuous Delivery (CI/CD) disciplines to core systems-of-record (mainframe) as well as systems-of-engagement (mostly distributed systems).

Code coverage metrics promise insight into the degree to which source code is executed during a test. They identify which lines of code have been executed and what percentage of an application has been tested. These measurements allow IT teams to understand the scope and effectiveness of their testing as code moves toward production.

DevOps has become increasingly critical to mainframe shops that risk becoming irrelevant and even replaceable if they cannot turn around code improvements fast enough. The mainframe continues to be valued as the secure repository of the organization’s critical data but that won’t hold off those who feel the mainframe is a costly extravagance, especially when mainframe shops can’t turn out code updates and enhancements as fast as systems regarded as more inherently agile.

As Compuware puts it, the latest integrations automatically feed code coverage results captured by its Topaz for Total Test into SonarSource’s SonarQube. This gives DevOps teams an accurate, unified view of quality metrics and milestones across platforms enterprise-wide.
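Once those results land in SonarQube, they are reachable through SonarQube's ordinary web service API, so teams can script their own quality checks. Below is a minimal sketch of reading coverage back out with Python; the server URL, token, and the project key mainframe-cobol-app are placeholders, and this illustrates generic SonarQube usage rather than Compuware's own integration.

```python
import requests

SONARQUBE = "https://sonarqube.example.com"  # placeholder server
TOKEN = "squ_0123456789abcdef"               # placeholder user token

# Query SonarQube's measures API for coverage metrics on a
# (hypothetical) project that receives Topaz for Total Test results.
resp = requests.get(
    f"{SONARQUBE}/api/measures/component",
    params={
        "component": "mainframe-cobol-app",
        "metricKeys": "coverage,line_coverage,uncovered_lines",
    },
    auth=(TOKEN, ""),  # token goes in the user field, password stays empty
)
resp.raise_for_status()

for measure in resp.json()["component"]["measures"]:
    print(measure["metric"], "=", measure["value"])
```

A pipeline step like this could fail a build when COBOL coverage drops below a threshold, which is exactly the kind of cross-platform governance the integration is aimed at.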

For z shops specifically, such continuous code quality management across platforms promises high value to large enterprises, enabling them to bring new digital deliverables to market, which increasingly is contingent on simultaneously updating code across both back-end mainframe systems-of-record and front-end mobile/web and distributed systems-of-engagement.

Specifically, notes Compuware, integration between Topaz for Total Test and SonarQube enables DevOps teams to:

  • Gain insight into the coverage of code being promoted for all application components across all platforms
  • Improve the rigor of digital governance with strong enforcement of mainframe QA policies for coding errors, data leakage, credential vulnerabilities, and more
  • Shorten feedback loops to speed time-to-benefit and more promptly address shortfalls in COBOL skills and bottlenecks in mainframe DevOps processes

Topaz for Total Test captures code coverage metrics directly from the source code itself, rather than from a source listing, as is the case with outdated mainframe tools. This direct capture is more accurate and eliminates extra development work, Compuware reported.

The new integration actually encompasses a range of tools and capabilities. For instance:

From within a Compuware Xpediter debug session, a developer can kick off a Compuware Topaz for Total Test automated unit test and set it up to collect code coverage info as it runs. Code coverage metrics then can be automatically fed into SonarSource’s SonarQube where they can be displayed in a dashboard along with other quality metrics, such as lines going to subprograms.

It also integrates with Jenkins as a Continuous Integration (CI) platform. Jenkins acts as a process orchestrator and interacts with an SCM tool, such as Compuware ISPW, which, among other things, automates software quality checks and pushes metrics onto SonarQube. ISPW also is where code gets promoted to the various stages within the lifecycle and ultimately deployed. Finally, Topaz is Compuware's Eclipse-based IDE from which developers drive all these activities.
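Since Jenkins is the orchestrator in that flow, the whole chain can be kicked off remotely through Jenkins' standard remote-build endpoint. A hedged sketch follows, assuming a Jenkins server at jenkins.example.com, a pipeline job named mainframe-ci, and a job parameter ISPW_ASSIGNMENT, all hypothetical; the Compuware plugin steps themselves would be configured inside the job.

```python
import requests

JENKINS = "https://jenkins.example.com"   # placeholder server
AUTH = ("ci-user", "jenkins-api-token")   # placeholder user / API token

# Queue the (hypothetical) mainframe-ci pipeline, passing the ISPW
# assignment whose code should be built, tested, and measured.
resp = requests.post(
    f"{JENKINS}/job/mainframe-ci/buildWithParameters",
    params={"ISPW_ASSIGNMENT": "PLAY000001"},
    auth=AUTH,
)
resp.raise_for_status()

# Jenkins answers 201 Created with the queue item in the Location header.
print("Queued at:", resp.headers.get("Location"))
```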

The Compuware announcement further delivers on its promise to mainstream the mainframe; that is, provide a familiar, modern, and intuitive multi-platform mainframe development environment—integrated with state-of-the-art DevOps tools for veteran mainframe developers and, more importantly, for newcomers arriving from the distributed world. In short, this is how you keep your Z relevant and invaluable going forward.

** Special note regarding last week's DancingDinosaur reporting on chip problems here: don't count on an immediate solution coming from the vendors anytime soon; not Google, IBM, Intel, AMD, ARM, or others. The word among chip geeks is that the dependencies are too complex to be fully fixed with a patch. This probably requires new chip designs and fabrication. DancingDinosaur will keep you posted.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Puts Open DBaaS on IBM OpenPOWER LC Servers

June 15, 2017

Sometimes IBM seems to be thrashing around looking for anything hot that's selling, and the various NoSQL databases definitely are hot. The interest is driven by DevOps, cloud, and the demand for apps delivered fast.

A month or so ago the company took its Power LC server platform to the OpenPOWER Developer Conference in San Francisco, where it pitched Database-as-a-Service (DBaaS) and a price-performance guarantee: OpenPOWER LC servers designed specifically for Big Data, guaranteed to deliver a 2.0x price-performance advantage over x86 for MongoDB and 1.8x for EDB PostgreSQL 9.5. With organizations seeking any performance advantage, these gains matter.

There are enough caveats that IBM will almost never be called to deliver on the guarantee. So, don’t expect to cash in on this very quickly. As IBM says in the miles of fine print: the company will provide additional performance optimization and tuning services consistent with IBM Best Practices, at no charge.  But the guarantee sounds intriguing. If you try it, please let DancingDinosaur know how it works out.

IBM Power System S822LC for Big Data

BTW, IBM published the price for the S822LC for big data as starting at $6,399.00 USD. Price includes shipping. Linux OS, however, comes for an additional charge.

Surprisingly, IBM is not aiming this primarily at the IBM Cloud. Rather, the company is targeting the private cloud, the on-premises local version. Its Open DBaaS toolkit, according to IBM, provides enterprise clients with a turnkey private cloud solution: it pre-integrates an open source DB image library, an OpenStack-based private cloud, and DBaaS software packages with hardware (servers/storage/network switches/rack) and a single source of support, enabling a DBaaS self-service portal for enterprise developers and LOB users to provision MongoDB, Postgres, and others in minutes. But since it is built on OpenStack, it also supports hybrid cloud integration with IBM Cloud offerings via OpenStack APIs.
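Because the toolkit is OpenStack underneath, the provisioning the self-service portal performs amounts to ordinary OpenStack API calls. Here is a minimal sketch using the standard openstacksdk Python client; the endpoint, credentials, image, flavor, and network names are all placeholders, and the real portal wraps far more (storage, networking, database configuration) than this.

```python
import openstack

# Connect to the on-premises OpenStack private cloud (placeholders).
conn = openstack.connect(
    auth_url="https://openstack.example.com:5000/v3",
    project_name="dbaas",
    username="developer",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Provision a server from a (hypothetical) MongoDB image in the
# toolkit's image library.
image = conn.compute.find_image("mongodb-ppc64le")
flavor = conn.compute.find_flavor("db.medium")
network = conn.network.find_network("dbaas-net")

server = conn.compute.create_server(
    name="mongo-dev-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Wait until the instance is ACTIVE, then show its status.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```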

In terms of cost it seems remarkably reasonable. It comes in four reference configurations. The Starter configuration is ~$80k (US list price) and includes 3 Power 822LC servers, a pair of network switches, a rack, the DBaaS Toolkit software, and IBM Lab Services. The Entry, Cloud Scale, and Performance configurations add compute, storage, and OpenStack control plane nodes along with high-capacity JBOD storage drawers. To make this even easier, each configuration can be customized to meet user requirements. Organizations also can provide their own racks and/or network switches.

Furthermore, the Power 822LC and Power 821LC form the key building blocks for the compute, storage and OpenStack control plane nodes. As a bonus, however, IBM includes the new 11-core Power 822LC, which provides an additional 10-15% performance boost over the 10-core Power 822LC for the same price.

This is a package deal, at least if you want the best price and to deploy it fast. “As the need for new applications to be delivered faster than ever increases in a digital world, developers are turning to modern software development models including DevOps, as-a-Service, and self-service to increase the volume, velocity and variety of business applications,” said Terri Virnig, VP, Power Ecosystem and Strategy at IBM, in the announcement. The Open Platform for DBaaS on IBM Power Systems package includes:

  • A self-service portal for end users to deploy their choice of the most popular open source community databases including MongoDB, PostgreSQL, MySQL, MariaDB, Redis, Neo4j and Apache Cassandra deployable in minutes
  • An elastic cloud infrastructure for a highly scalable, automated, economical, and reliable open platform for on-premises, private cloud delivery of DBaaS
  • A disk image builder tool for organizations that want to build and deploy their own custom databases to the database image library

An open source, cloud-oriented operations manager with dashboards and tools will help you visualize, control, monitor, and analyze the physical and virtual resources. A turnkey, engineered solution comprising compute, block and archive storage servers, JBOD disk drawers, OpenStack control plane nodes, and network switches, pre-integrated with the open source DBaaS toolkit, is available through GitHub here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM z System and Power Fuel Hybrid Cloud

September 30, 2016

DancingDinosaur has been cheerleading the z as a cloud player since 2011 and even before, mainly as an ideal vehicle for private clouds. So it should be no surprise to readers that last week IBM announced z and Power cloud-ready systems, services, and solutions for the hybrid cloud environment. My only question: What took them so long?


Power and z accelerate transformation to hybrid cloud

The world, indeed, is changing fast, and IT data centers especially have to scramble to keep up as they try to blend public cloud, private cloud, and traditional IT data centers and integrate it all seamlessly. “Today’s business environment is very dynamic and filled with disruption. A hybrid cloud model enables clients to continuously adapt while also optimizing on-premises investments and delivering the flexibility clients need across IBM Systems and the cloud,” according to Tom Rosamilia, senior vice president, IBM Systems.

At the heart of IBM's systems for what it calls the hybrid cloud era are three technologies with which we should already be generally familiar:

  • z Systems for cloud. IBM z Systems Operational Insights is a new SaaS-based offering that provides analytic insights on cloud operations for new levels of efficiency and application performance. It allows users to make better business and application decisions based on trends and embedded expertise on performance data through a GUI dashboard. This accompanies IBM OMEGAMON Application Performance Management, which provides quick identification of problem components through end-to-end visibility for clients’ hybrid workloads that include z Systems applications.  In addition, the newly available IBM Common Data Provider enables z Systems operational data to be efficiently consumed in near real time by the clients’ cloud or local enterprise operational analytics platform. An OK start, but you can quibble with OMEGAMON as IBM’s performance analysis and management choice. At its core, this is old technology. DancingDinosaur would prefer, say, Watson.
  • Power Systems for cloud. With integrated OpenStack-based cloud management and elastic consumption models, according to IBM, these new enterprise-class IBM Power Systems enable organizations to transform their IT infrastructure to a local cloud for AIX, IBM i and Linux workloads and extend them with rapid access to compute services in the IBM Cloud. DancingDinosaur covered the new LC here.
  • IBM Spectrum Copy Data Management and Protect. This brings a new solution that drives operational and development agility and efficiency across new and traditional applications, allowing detailed, easy management of data copies. Additionally, IBM Spectrum Protect has expanded its extensive hybrid cloud solution integration with cloud object storage options for use in hybrid cloud deployments.

About the only thing missing above is LinuxONE but that will come up below when IBM gets to openness, which is critical to hybrid clouds. In its announcement, IBM also promised a series of new and expanded collaborations with IBM Systems for hybrid cloud environments, including:

  • Canonical: Canonical and IBM are extending their ongoing alliance to make Ubuntu OpenStack available today on LinuxONE, z Systems, Power Systems, and OpenPOWER-based systems, including the new line of LC servers. This enables organizations to leverage Canonical’s portfolio across the three platforms with simplified and automated OpenStack management.
  • Hortonworks: IBM and Hortonworks, a Hadoop platform provider, are jointly entering the marketplace to make the Hortonworks Hadoop distribution available on POWER. Whoopee, Hadoop already runs native on z.
  • Mirantis: Mirantis and IBM are collaborating to develop reference architectures enabling Mirantis OpenStack to manage compute nodes hosted on IBM Power Systems servers, and to validate a host of core applications to run its OpenStack private cloud. With this integration, Mirantis will now bring its OpenStack based private cloud management to the POWER platform. This enables organizations to leverage the efficiency of IBM Power Systems for data-driven workloads in a seamless and compatible way for their data center through Mirantis’ OpenStack cloud management.
  • NGINX: NGINX’s application delivery platform now supports servers based on IBM’s POWER architecture with the latest release of its commercial load balancer and web accelerator software, NGINX Plus R10. The combination of NGINX Plus and POWER brings new agility to enterprises, allowing them to scale their infrastructure and application delivery solutions across any environment – public, private, and hybrid cloud; containers; and bare metal – providing a consistent user experience.
  • Red Hat: Red Hat and IBM are expanding their long-standing alliance to better help organizations embrace hybrid cloud. Through joint engineering and deeper product collaboration, the two companies plan to deliver solutions built on key components of Red Hat's portfolio of open source products, including Red Hat Enterprise Linux (RHEL). By the way, RHEL is the #2 Linux distro on the z. RHEL also enables IBM Power Systems as a featured component of Red Hat's hybrid cloud strategy spanning platform infrastructure located both on and off an enterprise's premises.

There is enough openness to enable you to do what you need so DancingDinosaur has no real complaints with IBM’s choices above. Just wish it was more z- and LinuxONE-centric.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and mainframe bigot. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM’s Strategic Initiatives Gain New All-Flash Storage

May 6, 2016

Flash storage must be the latest rage among enterprise storage vendors. Last week IBM introduced three new all-flash storage arrays, driving down latency and price/gigabyte to unheard-of levels (minimum latency of 250μs, all-flash storage as low as $1.50 per gigabyte). Earlier this week EMC announced new all-flash arrays for its Unity series at prices under $18,000 (under $10,000 for hybrid arrays). Flash storage has long beaten hard disk in terms of cost per IOPS, but now it is rivaling hard disk in terms of cost per gigabyte.


IBM A9000 All-Flash Array

OK, it looks a little—uh—boxy to say the least. But the new FlashSystem A9000 is packed with storage goodies. It comes fully configured, which helps drive down the cost of implementing an all-flash environment. Its sister, the FlashSystem A9000R, brings a grid architecture that provides for easy scaling up to the petabyte range. Both FlashSystems incorporate data reduction features, including pattern removal, deduplication and real-time compression, as well as IBM FlashCore technology to deliver consistent low latency performance. As noted above, they are priced as low as $1.50 per gigabyte.

Driving IBM's latest interest in flash storage are its strategic initiatives, starting with cloud computing. Consumers today, notes IBM, are demanding cloud-based applications that are fast, easy, and intelligent. That means minimal latency. Cloud users are demanding sub-second response times, especially when accessing critical data. They also are demanding cloud providers deliver a unique, personalized, and positive customer experience.

To deliver this, IBM is turning to hardware innovation, specifically its MicroLatency technology, to transfer data within the flash array instead of adding another layer of software. MicroLatency technology inserts FPGAs (hardware) that connect and communicate directly with the flash and RAID controllers, eliminating the latency of software and even firmware. Instead, the FlashSystems let hardware talk directly with hardware.

In addition, IBM is packing the new FlashSystem arrays with features designed to solve cloud requirements such as quality-of-service (QoS) to prevent the noisy neighbor problems with application performance. The new arrays also feature secure multi-tenancy, thresholding, and easy-to-deploy grid scale-out capabilities.

The z System platform is not being ignored in all of this. IBM is including a new DS model, the all-flash IBM DS8888, optimized for enterprise-class servers. With the all-flash DS8888, customer databases and data-intensive applications are accelerated, resulting in improved business performance and customer satisfaction.

Specifically, the DS8888 brings faster decision making and improved customer serviceability, with 4x the performance of previous generations and accelerated response time for mission critical applications. The flash storage delivers up to 2.5 million IOPS, the result of having been built on the Power8 processor. It also enables organizations to streamline operations through an all-flash architected solution aligned to provide the deepest integration with System z environments. For instance, IBM promises the most robust FICON connectivity through an architecture optimized for the mainframe's 4K cache segments.

In addition, the DS8888 promises 24×7 access to data and applications through superior business continuity on high demand transaction processing workloads while delivering top operations performance through its all-flash architecture. It goes beyond the usual high-end 5-nines availability to deliver 6-nines availability, which translates into a mere 2.59 seconds of downtime per month. Other availability features include flexible replication (IBM FlashCopy, Metro Mirror, Global Mirror, Metro/Global Mirror, Global Copy, and Multiple Target Peer-to-Peer Remote Copy). In the early years of flash, reliability and availability were a concern. With the DS8888 and 6-nines availability, they aren't any more.
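The 6-nines arithmetic checks out: 99.9999 percent availability leaves a downtime fraction of one one-millionth, which over a 30-day month comes to roughly 2.59 seconds.

```python
# Downtime per 30-day month at "six nines" (99.9999%) availability.
seconds_per_month = 30 * 24 * 60 * 60          # 2,592,000 seconds
downtime = (1 - 0.999999) * seconds_per_month  # one millionth of the month
print(f"{downtime:.2f} seconds")               # -> 2.59 seconds
```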

Finally, it comes with a smorgasbord of security and efficiency goodies, including self-encrypted flash drives, the Key Management Interoperability Protocol (KMIP), the syslog protocol, an intuitive GUI (IBM has learned a few tricks from Apple), innovative storage software licensing, RESTful and OpenStack APIs to connect workloads between private and public clouds, and thin provisioning for maximum utilization and reclamation of capacity from deleted data.

All-flash solutions announced last week complement IBM’s existing all-flash portfolio including FlashSystem 900 and V9000 that also leverage IBM’s FlashCore technology. IBM’s midrange all-flash solutions consist of all-flash versions of IBM’s Storwize family, which offers the performance needed for real-time insights from business data combined with advanced management functions. IBM’s Big Data all-flash solution delivers high-density multi-petabyte scale and a low-cost flash option ideal for industries such as media, genomics, and life sciences.

DancingDinosaur used to be hired to write papers on the enterprise cost-performance tradeoffs between hard disk and SSD/flash. No matter how expensive flash was at any given point, the cost per IOPS always favored flash and the cost per gigabyte always favored hard disk. That's no longer an analysis worth even making today.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Drives Platforms to the Cloud

April 29, 2016

IBM hasn’t been shy about its shift of focus from platforms and systems to cloud, mobile, analytics, and cognitive computing. But it didn’t hit home until last week’s release of 1Q2016 financials, which mentioned the z System just once. For the quarter IBM systems hardware and operating systems software revenues (lumped into one category, almost an after-thought) rang up $1.7 billion, down 21.8 percent.

This is ugly, and DancingDinosaur isn't even a financial analyst. After the z System showed attractive revenue growth through all of 2015, suddenly it's part of a loss. You can't even find the actual numbers for z or Power in the new report format. As IBM notes: the company has revised its financial reporting structure to reflect the transformation of the business and provide investors with increased visibility into the company's operating model by disclosing additional information on its strategic imperatives revenue by segment. BTW, IBM did introduce new advanced storage this week, which was part of the Systems Hardware loss too. DancingDinosaur will take up the storage story here next week.


But the 1Q2016 report was last week. To further emphasize its shift, IBM this week announced that it was boosting support of OpenStack's RefStack project, which is intended to advance common language between clouds and facilitate interoperability across clouds. DancingDinosaur applauds that, but if you are a z data center manager you better take note that the z, along with all the IBM platforms, mainly Power and storage, is being pushed to the back of the bus behind IBM's strategic imperatives.

DancingDinosaur supports the strategic initiatives, and you can throw blockchain and IoT in with them too. These initiatives will ultimately save the mainframe data center. All the transactions and data swirling around and through these initiatives eventually need to land in a safe, secure, utterly reliable place where they can be processed in massive volume, kept accessible, highly available, and protected for subsequent use, for compliance, and for a variety of other purposes. That place most likely will be the z data center. It might be on premise or in the cloud, but if organizations need rock solid transaction performance, security, availability, scalability, and such they will want the z, which will do it better and be highly price competitive. In short, the z data center provides the ideal back end for all the various activities going on through IBM's strategic initiatives.

The z also has a clear connection to OpenStack. Two years ago IBM announced it was expanding its support of open technologies by providing advanced OpenStack integration and cloud virtualization and management capabilities across IBM's entire server portfolio through IBM Cloud Manager with OpenStack. According to IBM, Cloud Manager with OpenStack would provide support for the latest OpenStack release, dubbed Icehouse at that time, and full access to the complete core OpenStack API set to help organizations ensure application portability and avoid vendor lock-in. It also extended cloud management support to the z, in addition to Power Systems, PureFlex/Flex Systems, System x (which was still around then), and any other x86 environment, and added support for IBM z/VM on the z and PowerVC for PowerVM on Power Systems to bring more scalability and security to Linux environments.

At the same time IBM also announced it was beta testing a dynamic, hybrid cloud solution on the IBM Cloud Manager with OpenStack platform. That would allow workloads requiring additional infrastructure resources to expand from an on premise cloud to remote cloud infrastructure.  Since that announcement, IBM has only gotten more deeply enamored with hybrid clouds.  Again, the z data center should have a big role as the on premise anchor for hybrid clouds.
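That hybrid expansion is less exotic than it sounds, because the same OpenStack client code drives both ends. A minimal sketch, with both endpoints and all credentials as placeholders: the identical openstacksdk calls inventory an on-premises private cloud and a remote OpenStack-compatible cloud, which is the portability the OpenStack APIs promise.

```python
import openstack

# Two clouds, one code path; endpoints and credentials are placeholders.
clouds = {
    "on-prem": dict(auth_url="https://openstack.internal:5000/v3",
                    project_name="prod", username="ops",
                    password="secret", user_domain_name="Default",
                    project_domain_name="Default"),
    "remote": dict(auth_url="https://cloud.example.com:5000/v3",
                   project_name="burst", username="ops",
                   password="secret", user_domain_name="Default",
                   project_domain_name="Default"),
}

for name, auth in clouds.items():
    conn = openstack.connect(**auth)
    servers = list(conn.compute.servers())
    print(f"{name}: {len(servers)} running instances")
```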

With the more recent announcement, RefStack, officially launched last year and with IBM as the lead contributor, becomes a critical pillar of IBM's commitment to ensuring an open cloud – helping to advance the company's long-term vision of mitigating vendor lock-in and enabling developers to use the best combination of cloud services and APIs for their needs. The new functionality includes improved usability, stability, and other upgrades, ensuring better cohesion and integration of any cloud workloads running on OpenStack.

RefStack testing ensures core operability across the OpenStack ecosystem, and passing RefStack is a prerequisite for all OpenStack certified cloud platforms. By working on cloud platforms that are OpenStack certified, developers will know their workloads are portable across IBM Cloud and the OpenStack community. For now RefStack acts as the primary resource for cloud providers to test OpenStack compatibility. RefStack also maintains a central repository and API for test data, allowing community members visibility into interoperability across OpenStack platforms.

One way or another, your z data center will have to coexist with hybrid clouds and the rest of IBM’s strategic imperatives or face being displaced. With RefStack and the other OpenStack tools this should not be too hard. In the meantime, prepare your z data center for new incoming traffic from the strategic imperatives, Blockchain, IoT, Cognitive Computing, and whatever else IBM deems strategic next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

