Posts Tagged ‘hybrid computing’

2016 State of OpenStack Adoption Shows Continued Progress

March 10, 2016

Sixty-one percent of over 600 survey respondents are adopting OpenStack to combat the expense of public cloud alternatives, reports Talligent, a provider of cost and capacity management solutions for OpenStack and hybrid clouds, which conducted the most recent study of OpenStack adoption. Almost as many respondents, 59%, have opted for OpenStack to improve the responsiveness of IT service delivery.


OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. As OpenStack puts it: “A key part of the OpenStack Foundation mission is to inform, and with the ever-expanding ecosystem, we felt it was a good time to cut through the noise to give our members the facts needed to make sound decisions.”

In that spirit, make the OpenStack Marketplace one of your first steps in planning an OpenStack effort. There you will find the technology broken down into digestible chunks with details like which components are included, the versions used, and the APIs exposed. The community has also implemented interoperability testing to validate products displaying OpenStack logos. The results are now available in the Marketplace for public clouds, hosted private clouds, distributions & appliances.

DancingDinosaur has covered OpenStack numerous times, for example here and here. IBM is fully committed to OpenStack. Late last spring it announced an expanded suite of OpenStack services that allow organizations to integrate applications and data across hybrid clouds, including public, dedicated, and local cloud environments, without the fear of vendor lock-in or costly customization.

IBM may be a bit in front of the market on this. The Talligent survey found private clouds will not be replaced by public clouds very soon, with 54% of respondents still expecting their cloud use to be all or mostly private five years from now.

But whether this occurs in two years or five, developers and enterprises using IBM Cloud OpenStack Services will be able to launch applications on local, on-premises installations and on public clouds hosted on the SoftLayer infrastructure, VMware, or the IBM Cloud. This can all be done without changing code or configurations. As a result, developers can build and test an application in a public cloud and use the interoperability of OpenStack to seamlessly deploy that same application and data across any combination of clouds: public, dedicated, and local/private.

The Talligent survey also found OpenStack deployments, once in place, are expected to expand quickly beyond development environments, growing from 43% to 89% within 12 months. For QA/Test the expected growth will be a tad stronger, from 47% to 91% within 12 months.

Other interesting tidbits from the survey: the top three workloads currently delivered on OpenStack are new greenfield applications (69%), containers (61%), and web applications (58%). No surprise there. Also, as noted above, private clouds should continue to thrive, as OpenStack users expect high levels of private cloud use within the next five years. Fourteen percent, however, expect to deploy across a balanced mix of private and public clouds. At the same time, the survey suggests that PaaS, containers, and privately managed OpenStack are expected to grow in use, while proprietary public clouds and legacy virtualization are likely to decline.

Finally, the survey respondents voiced their opinions on the OpenStack providers. Although industry vendors like VMware, IBM, HPE, Cisco and more are exploring ways to support customers in a hybrid cloud mix, the respondents, as previously noted, are not quite ready to move to a hybrid model. Still, the respondents voiced a clear desire for more operational tools.

Similarly, a majority of respondents currently using OpenStack are still prepared to maintain most of their environment on-premises, with 54% saying they will continue to be more than 80% private over the next 5 years. This may reflect ongoing concerns of corporate management about security in the public cloud. The survey, however, picked up some ambivalence on this point: 30% of the respondents using OpenStack report planning to move more than 80% of their environments to the public cloud over the next 5 years. Could this be a signal that security concerns may be fading?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

New IBM z13s Brings Built-in Encrypted Security to Entry Level

February 19, 2016

Earlier this week IBM introduced the z13s, what it calls the world’s most secure server, built for hybrid cloud and sized for mid-sized organizations. The z13s promises better business outcomes, faster decision making, less regulatory exposure, greater scale, and better fraud protection. And at the low end it is accessible to smaller enterprises, maybe those that have never tried a z before.

z13s features embedded cryptography that brings the benefits of the mainframe to mid-sized organizations. Courtesy of IBM

A machine like the low-end z13s used to be referred to as a business class (BC) mainframe. IBM declined to quote a price, except to say the z13s will go “for about the same price as previous generations for the equivalent capacity.” OK, back in July 2013 IBM published the base price of the zEC12 BC machine at $75,000. IBM made a big deal of that pricing at the time.

The key weasel phrase in IBM’s statement is “for the equivalent capacity.” Two and a half years ago the $75k zEC12 BC offered significantly more power than its predecessor. Figuring out equivalent capacity today, given all the goodies IBM is packing into the new machine, like built-in chip-based cryptography and more, is anybody’s guess. However, given the plummeting costs of IT components over the past two years, you should get it at a base price of $100k or less. If not, call Intel. IBM adds that the infrastructure costs of the z13s are comparable to public cloud infrastructure costs with enterprise support, and that significant software savings result from core consolidation on the z13s.

But the z13s is not just about price. As digital business becomes standard practice and transaction volumes increase, especially mobile transaction volumes, the need for increased security becomes paramount. Cybercrime today has shifted: rather than stealing data, criminals are compromising data accuracy and reliability. This is where the z13s’ bolstered built-in security and access to APIs and microservices in a hybrid cloud setting can pay off, by keeping data integrity intact.

IBM’s z13s, described as the new entry point to the z Systems portfolio for enterprises of all sizes, is packed with a number of security innovations. (DancingDinosaur considered the IBM LinuxONE Rockhopper the current z entry point, but it is a Linux-only machine.) For z/OS, the z13s will be the entry point. The security innovations include:

  • The ability to encrypt sensitive data without compromising transactional throughput and response time, through updated tamper-resistant, hardware-accelerated cryptographic coprocessor cards with faster processors and more memory. In short: encryption at twice the speed equates to processing twice as many online or mobile device purchases in the same time, effectively helping to lower the cost per transaction.
  • Access to the z Systems Cyber Security Analytics offering, which delivers an advanced level of threat monitoring based on behavior analytics. Also part of the package, IBM® Security QRadar® security software correlates data from more than 500 sources to help organizations determine whether security-related events are simply anomalies or potential threats. The z Systems Cyber Security Analytics service will be available at no charge as a beta offering for z13 and z13s customers.
  • IBM Multi-Factor Authentication (MFA) for z/OS, now available. The solution adds another layer of security by requiring privileged users to enter a second form of identification, such as a PIN or randomly generated token, to gain access to the system. This is the first time MFA has been tightly integrated in the operating system, rather than through an add-on software solution. This level of integration is expected to deliver more streamlined configuration and better stability and performance.
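The general mechanics of a randomly generated second-factor token can be sketched in a few lines. This is a generic RFC 6238-style TOTP sketch, not IBM’s MFA implementation; it only illustrates why such a token is hard to forge without the shared secret:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238-style time-based one-time password."""
    counter = unix_time // step             # index of the current time window
    msg = struct.pack(">Q", counter)        # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code depends on both a shared secret and the current time window, a stolen password alone is not enough to get in, which is the point of the second factor.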

Hybrid computing and hybrid cloud also play a big part in IBM’s latest thinking around z Systems. As IBM explains, hybrid cloud infrastructure offers advantages in flexibility but can also present new vulnerabilities. When paired with z Systems, IBM’s new security solutions can allow clients to establish end-to-end security in their hybrid cloud environment.

Specifically, IBM Security Identity Governance and Intelligence can help prevent inadvertent or malicious internal data loss by governing and auditing access based on known policies while granting access to those who have been cleared as need-to-know users. IBM Security Guardium uses analytics to help ensure data integrity by providing intelligent data monitoring, which tracks users as they access specific data and help to identify threat sources quickly in the event of a breach. IBM Security zSecure and QRadar use real-time alerts to focus on the identified critical security threats that matter the most.

Conventional z System data centers should have no difficulty migrating to the z13 or even the z13s. IBM told DancingDinosaur it will continue to protect a client’s investment in technology with serial number preservation on the IBM z13s. The company also is offering upgrades from the zEnterprise BC12 (zBC12) and from the zEnterprise 114 (z114) to the z13s. Of course, it supports upgradeability within the IBM z13 family; a z13s N20 model can be upgraded to the z13 N30 model. And once the z13s is installed, it allows on-demand offerings to access temporary or permanent capacity as needed.


IBM Gets Serious about Linux on z Systems

February 12, 2016

 

It has taken the cloud, open source, and mobile for IBM, after more than a decade of Linux on z, to finally turn it into the agile development machine it should have been all along. Maybe z data centers weren’t ready back then, maybe they aren’t all that ready now, but it is starting to happen.


LinuxONE Rockhopper, Refreshed for Hybrid Cloud Innovation

In March, IBM will make its IBM Open Platform (IOP) portfolio available for the IBM LinuxONE at no cost. IOP includes a broad set of industry-standard Apache-based capabilities for analytics and big data. The components supported include Apache Spark, Apache HBase, and Apache Hadoop 2.7.1, among others. Continuing its commitment to contributing back to the open source community, IBM has optimized the Open Managed Runtime project (OMR) for LinuxONE. Now IBM innovations in virtual machine technology for new dynamic scripting languages will be brought to enterprise-grade strength.

It doesn’t stop there. IBM has ported the Go programming language to LinuxONE too. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. IBM expects to begin contributing code to the Go community this summer.

Back in December IBM brought Apple’s Swift programming language to the party, first via the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, including the Watson Dialog, Language Translation, Natural Language Classifier, Personality Insights, Speech to Text, Text to Speech, Alchemy Language, and Alchemy Vision services, all of which are available today and can now be integrated with just a few lines of code.

Following Apple’s introduction of Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This will be closely tied to Canonical’s Ubuntu port to the z, expected this summer.

Also, through new work by SUSE to collaborate on technologies in the OpenStack space, SUSE tools will be employed to manage public, private, and hybrid clouds running on LinuxONE.  Open source, OpenStack, open-just-about-everything appears to be the way IBM is pushing the z.

At a presentation last August on Open Source & ISV Ecosystem Enablement for LinuxONE and IBM z, Dale Hoffman, Program Director of IBM’s Linux SW Ecosystem & Innovation Lab, introduced the three ages of mainframe development, with our current stage being the third:

  1. Traditional mainframe data center (1964–2014): batch, general ledger, transaction systems, client databases, accounts payable/receivable, inventory, CRM, ERP, Linux, and Java
  2. Internet Age (1999–2014): server consolidation, Oracle consolidation, early private clouds, email, Java, web, and eCommerce
  3. Cloud/Mobile/Analytics (CAMSS) Age (2015–2020): on/off-premise and hybrid cloud, big data and analytics, enterprise mobile apps, security solutions, and open source LinuxONE and IBM z ecosystem enablement

Hoffman didn’t suggest what comes after 2020 but we can probably imagine: Cognitive Computing, Internet of Things, Blockchain. At least those are trends starting to ramp up now.

He does, however, draw a picture of the state of Linux on the mainframe today:

  • 27% of total installed capacity runs Linux
  • Linux core capacity increased 16% from 2Q14 to 2Q15
  • 40% of customers have Linux cores
  • 80% of the top 100 customers (in terms of installed MIPS) run Linux on the mainframe
  • 67% of new accounts run Linux

To DancingDinosaur, this last point about the high percentage of new z accounts running Linux speaks to where the future of the z is heading.

Maybe as telling are the following:

  • 64% of companies participate in Open Source projects
  • 78% of companies run on open source
  • 88% of companies expect to increase open source contributions in the next 2–3 years
  • 47% plan to release internal tools & projects as OSS
  • 53% expect to reduce barriers to employee participation in open source
  • 50% report that more than half of their engineers are working on open source projects
  • 66% of companies build software on open source

Remember when open source and Linux first appeared for the z? Data center managers were shocked at the very concept. It was anti-capitalist at the very least, maybe even socialist or communist. Look at the above percentages; open source has gotten about as mainstream as it gets.

It will be interesting to see how quickly developers move to LinuxONE for their CAMSS projects. IBM hasn’t said anything about the pricing of the refreshed Rockhopper model or about the look and feel of the tools. Until the developers know, DancingDinosaur expects they will continue to work on the familiar x86 tools they are using now.


Exploiting the IBM z13 for Maximum Price/Performance Advantage

February 4, 2016

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.


IBM System z13

This really becomes a software configuration challenge. By tapping approximately 600 internal processors, IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISV.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report a 15% on average reduction of CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers with applications optimized for them is the best way to ensure your workloads are achieving top performance in the most cost-effective way.
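The arithmetic behind those numbers is simple enough to sketch. The figures below are illustrative only; actual charges depend on the pricing model, and the sketch assumes software charges scale linearly with CPU time:

```python
# Illustrative only: estimate monthly savings from a CPU-time reduction
# on a usage-based software bill, assuming charges scale with CPU time.
def monthly_savings(monthly_bill: float, cpu_reduction: float) -> float:
    """Savings if software charges scale linearly with CPU time."""
    return monthly_bill * cpu_reduction

# A 15% CPU-time reduction on a $1M monthly bill:
print(monthly_savings(1_000_000, 0.15))  # 150000.0
```

Organizations reporting larger percentage reductions would see proportionally larger savings under the same assumption.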

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on the z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on the zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will provide mobile applications easy access to data and the processing they need from business critical production applications written in COBOL.
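To illustrate the kind of interchange that JSON support enables, here is a hypothetical payload a mobile app might exchange with a COBOL-backed service. The field names are invented for illustration and are not an IBM API:

```python
import json

# Hypothetical request a mobile app might send to a COBOL-backed
# balance-inquiry service exposed over JSON (field names invented).
request = {"accountId": "0012345678", "action": "BALANCE_INQUIRY"}
wire = json.dumps(request)  # what actually travels over the network

# A COBOL program with JSON support can consume the request and
# produce a JSON reply the mobile client parses directly:
reply = json.loads('{"accountId": "0012345678", "balance": "1524.75"}')
print(reply["balance"])  # 1524.75
```

The point is that the mobile client and the production COBOL application agree only on a JSON document, not on platform-specific data formats.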

The z13 and its sibling, the latest LinuxONE dedicated Linux models, were designed and optimized from the start for cloud, mobile, and analytics. They were intended to run alongside traditional mainframe workloads, with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP.  ISVs like Compuware should be able to help with this.  In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity.  Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.


 

Docker on IBM z System

January 7, 2016

“If you want Docker on z, you can do it in next to 30 seconds,” says Dale Hoffman, Program Director, Linux SW Ecosystem & Innovation Lab. At least if you’re running Linux on z, and preferably on a LinuxONE z. With all the work Hoffman’s team has done laying the groundwork for Docker on the z, you barely have to do anything yourself.


Containers are ideal for cloud computing or, more importantly, for hybrid clouds, defined as the connection of one or more clouds to other clouds. Hybrid clouds are where IBM sees the industry and the z going, and containers, particularly Docker containers, have emerged as the vehicle to get enterprises there. Click here for an FAQ on Docker with z.

z System shops can get there fast using tools Hoffman’s group has already built for the z. To get started, just click here. Or, simply go to IBM Bluemix, from which you can build and deploy Docker containers for the z and other platforms. Back in June IBM introduced enterprise class containers that make it easier for developers to deliver production applications across their hybrid environments.

IBM also offers its own IBM branded containers that allow organizations to deploy, manage, and run application components on the IBM Bluemix development platform by leveraging the open-source Docker container technology. IBM Bluemix now offers three infrastructure compute technology choices to deploy applications – Docker containers, OpenStack virtual machines, or Cloud Foundry apps. Designed for enterprise production workloads, IBM Containers can be securely deployed with integrated scalability and reliability, which enterprise customers rely upon.

In keeping with IBM’s policy of not going it alone, the company also has become a founding member of a coalition of partners and users behind the Open Container Initiative (OCI), which aims to ensure containers are interoperable. Features of IBM Containers include integrated tools such as log analytics, performance monitoring, and a delivery pipeline; elastic scaling; zero-downtime deployments; automated image security/vulnerability scanning; and access to Bluemix’s catalog of over 100 cloud services, including Watson, Analytics, IoT, and Mobile.

Enterprise z shops want containers because they need to be as fast and agile as the born-in-the-cloud upstarts challenging them. Think survival. Containers like Docker provide ease of use, portability, and fast deployment almost anywhere to get new applications into production fast. Docker basically puts its engine/runtime on top of the OS and provides the virtual containers into which software is deployed. The appeal is easy portability of the application/software to any Docker container anywhere, plus fast deployment.

Specifically, the Docker technology provides application portability by utilizing open source, standardized, lightweight, and self-sufficient container capabilities. IBM’s implementation of the Docker technology with enterprise capabilities further strengthens IBM’s support for hybrid cloud environments. Of course, not every application at every stage in its lifecycle will run in the public cloud (many, if not most, never will), but IBM Containers enables developers to determine when to run containers on premises and when to deploy to the public cloud on IBM Bluemix with full Internet connectivity. Image files created within IBM Containers support portability and can be instantiated as containers on any infrastructure that runs Docker.
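That portability boils down to the fact that a container image is defined once and can then be built and run on any host with a Docker engine, an x86 laptop or a Linux on z system alike. A minimal sketch of such a build definition (image and file names are illustrative, and a matching base image must exist for the host architecture):

```dockerfile
# Minimal illustrative Dockerfile; the same definition builds on an
# x86_64 host or an s390x (Linux on z) host, provided the base image
# is available for that architecture.
FROM alpine:3.3

# Install only what the application needs inside the container.
RUN apk add --no-cache python

# Copy the application in and declare how to start it.
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Because the application and its dependencies travel inside the image, deployment to a new host is just a pull and a run, which is where the speed comes from.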

Through the use of containers on z you can shape your environment using system virtualization and container elements according to your landscape and requirements, with hardly any performance constraints. In addition, Docker on z provides greater business agility to go to market quicker and solve business problems effectively through DevOps agility via Docker containers and microservices. Then add hybrid cloud portability, by which you move the same application across multiple clouds. In short, you can define your IT structures according to your needs, not your system constraints.

Finally, there is nothing threatening about Docker containers on z. Docker is Docker is Docker, even on z, says Hoffman; it relies on the same container technology of Linux, which has been available on z for many years. So get started with containers on z and let DancingDinosaur know when you have success deploying your z containers.


Making the IBM Mainframe Agile and Swift

December 7, 2015

Do you remember what the mainframe was like when you started on the mainframe career path? Today IBM blurs distinctions between the mainframe and distributed environments through Linux and Java, as well as cloud and mobile delivery models. Heck, you can run Windows natively on x86 cards in a zBX cabinet managed from a console on the z itself. Maybe it’s not the most efficient way to do it, and expect better ways coming from IBM, but it is doable now.

DevOps in the SDLC, courtesy of Seasoft

More than just interact, the z and distributed environment must productively and seamlessly integrate and interoperate to produce a streamlined development, test, and deployment process. Compounding the challenge: they must do it fast. Organizations can no longer wait for six-month or nine-month release cycles to introduce new capabilities. If capabilities cannot be introduced in just a few weeks max, opportunities and revenue can be lost.  Agile and batch teams have no choice; they must work together.

This calls for data center adoption of DevOps, a combination of development, testing, and operations. Already IBM has instituted DevOps on the z System. The idea of bringing agile and batch together—it almost sounds like an oxymoron or the punchline from a bad computer systems joke—no longer is farfetched. Welcome to the world of hybrid computing where what was once considered disparate and incompatible systems are being brought together, often on the same platform.

The latest generations of the mainframes have been fully hybrid-capable platforms, starting with the z10. They are capable of running mixed workloads concurrently, some of which previously belonged in the distributed platform world only. Today, a mainframe shop with the latest z13 can run traditional z/OS COBOL workloads right alongside Java and Linux workloads. Those with a zBX extension cabinet can even run Windows workloads too under the same unified mainframe management console.

If that sounds a little too kludgy for you, just jump into the cloud. From Bluemix in the cloud you can get to DevOps and find just about everything you need already there, including IBM’s StrongLoop acquisition for API management and microservices.

So now the idea of bringing batch and agile computing together on the mainframe platform doesn’t seem so farfetched.  And it won’t stop there. IBM has been doing its enterprise thing with Apple for about a year. Expect more coming.

That said, an agile mainframe/distributed DevOps environment will become increasingly necessary. How often do you release software? Previously, if an IT organization released new software every year or even every 18 months, customers were satisfied. Not anymore. Today you can’t wait six months before the organization risks falling behind. LOB managers and customers won’t wait. There are too many competitors waiting for any chance to seize an advantage. Slow system refreshes and software updates just play into these competitors’ hands.

DevOps also is essential to the organization’s mobile strategy. Companies in every industry segment are deploying new mobile apps as fast as they can and then almost immediately updating them. For many of these mobile apps the mainframe is the back end, if not the middleware too. Each mobile request for information or to make a purchase or to schedule something triggers numerous back end processes that quickly make their way to the mainframe. It had gotten to the point where IBM had to discount mobile processing on the z or it would hinder mobile growth. DancingDinosaur covered it here.

Helping to drive mobile on the z, of course, is IBM’s relationship with Apple. Over the past year the two companies have been bringing out combined enterprise-mobile applications. Now Apple just announced that it is making its popular programming language, Swift, open source. It shouldn’t take much to get it onto Bluemix. Back in Sept. 2014  IBM announced it already had a preliminary version working through Bluemix.

Although Swift is known mainly for mobile client development, today it is described as combining the performance and efficiency of compiled languages with the simplicity and interactivity of popular scripting languages. Apple’s Swift strategy seems to come right out of IBM’s recent playbook of embracing open source communities. You can get started at the Swift website, here.


IBM Enhances the DS8000 Storage Family for New Challenges

October 30, 2015

Earlier this month IBM introduced a family of business-critical hybrid data storage systems that span a wide range of price points. The family is powered by the next generation of IBM’s proven DS8000 storage platform and delivers critical application acceleration, six-nines (99.9999%) availability, and industry-leading capabilities, like integrated high performance flash. And coming along in November and December will be new tape storage products.


DS8880, courtesy of IBM (click to enlarge)

The company sees demand for the new storage being driven by cloud, mobile, analytics, and security. As IBM continues to encourage data centers to expand into new workloads, it is introducing a new family of business-critical hybrid flash data systems primarily to support the latest requirements of z System- and Power-based data centers. If your shop hasn’t started to experience a ramp up of new workloads it likely will soon enough.

The new storage family, all based on POWER8 and the DS8000 software stack, currently consists of three models:

  1. The entry model, the DS8884, delivers fast hybrid flash starting at under $50K. It offers up to 12 cores, 256 GB total system memory, 64 16GB FCP/FICON ports, and 768 HDD/SSD + 120 Flash cards in a 19”, 40u rack.
  2. The DS8886 brings a 2x performance boost, up to 48 cores, 2 TB total system memory, 128 16GB FCP/FICON ports, and 1536 HDD/SSD’s + 240 Flash cards packed into a 19”, 46u rack.
  3. The high-end DS8888, according to IBM, is the industry’s fastest T1 subsystem. It offers all-flash storage with up to 96 cores, 2 TB total system memory, 128 16GB FCP/FICON ports, and 480 Flash cards packed in a 19”, 40u rack. It won’t be available until spring 2016.

Because it is built on the DS8000 software stack, the new storage brings unparalleled integration with IBM z System. The systems are especially tuned for insight and cloud environments. They also deliver top efficiency and maximum utilization of resources: better staff productivity and space utilization, lower cost through streamlined operations, and a 30% reduction in footprint versus 33”–34” racks.

The DS8880 family comes with two license options. The base function license provides logical configuration support for fixed block (FB) volumes, Original Equipment License (OEL), IBM Database Protection, Thin Provisioning, Encryption Authorization, Easy Tier, and I/O Priority Manager. The z Synergy Services function license adds PAV and HyperPAV, FICON and High Performance FICON (zHPF), IBM z/OS Distributed Data Backup, and a range of copy services functions including FlashCopy, Metro Mirror, Global Mirror, Metro/Global Mirror, z/OS Global Mirror, z/OS Global Mirror Resync, and Multi-Target PPRC.

The DS8880 family also provides 99.9999% uptime, an increase over the typical industry benchmark of 99.999%. That extra nine cuts expected downtime from about five minutes a year to roughly 32 seconds. Even the most mission-critical application can probably live with that.
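The arithmetic behind the nines is easy to verify. Here is a minimal sketch in plain Python; nothing in it is DS8880-specific:

```python
# Convert an availability percentage into expected downtime per year.

SECONDS_PER_YEAR = 365.2425 * 24 * 3600  # average Gregorian year

def downtime_per_year(availability_pct):
    """Return (seconds, minutes) of allowed downtime per year."""
    seconds = SECONDS_PER_YEAR * (1 - availability_pct / 100)
    return seconds, seconds / 60

for pct in (99.999, 99.9999):
    secs, mins = downtime_per_year(pct)
    print(f"{pct}% availability -> {secs:,.1f} s (~{mins:.2f} min) down per year")
```

Five nines works out to roughly 315 seconds (about 5.3 minutes) of downtime a year; six nines shrinks that to about 32 seconds.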

The High-Performance Flash Enclosure for the DS8880 family redefines what IBM considers a true enterprise hybrid flash data system should be, especially in terms of performance for critical applications. Hybrid systems usually combine flash and traditional spinning drives for deployment across a variety of mixed workloads in private or public clouds, while more costly all-flash storage is reserved for the few applications that require the most extreme performance. Now IBM recommends hybrid configurations for consolidating virtually all workloads, since the DS8880 preserves the flexibility to deliver flash performance exactly where and when it is needed. Easy Tier does this automatically, dynamically optimizing application performance across any DS8880 configuration without requiring administrators to manually tune and retune applications and storage.

The DS8880 also supports a wide variety of enterprise server and virtual server platforms, but not all are created equal. It includes special integration with z Systems and IBM Power Systems, thanks to advanced microcode that has been developed and enhanced in lockstep with the mainframe’s I/O architecture over the past several decades. For Power shops, the DS8880 copy services are tightly integrated with IBM PowerHA SystemMirror for AIX and IBM i, adding another level of assurance for users who need 24×7 business continuity for their critical Power systems.

For shops dealing with VMware, the DS8880 includes interoperability with VMware vStorage APIs for Array Integration, VMware vCenter Site Recovery Manager, and a VMware vCenter plug-in that allows users to offload storage management operations in VMware environments to the DS8880. Should you prefer to go the other direction, the DS8880 supports IBM Storage Management Console for VMware vCenter to help VMware administrators independently monitor and control their storage resources from the VMware vSphere Client GUI.

If you didn’t notice, there have been a series of interesting announcements coming out of IBM Insight, which wrapped up yesterday in Las Vegas. DancingDinosaur intends to recap some of the most interesting announcements in case you missed them.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM LinuxONE and Open Mainframe Project Expand the z System

August 20, 2015

Meet the new IBM z System, called LinuxONE Emperor (named after the emperor penguin). It is a z13 running only Linux. Check out the full announcement here.


Courtesy of IBM, LinuxONE Emperor, the newest z System

DancingDinosaur is excited by several aspects of this announcement:  IBM is establishing, in conjunction with the Linux Foundation, an Open Mainframe Project; the company is breaking with its traditional mainframe pricing model; it also is putting KVM and Ubuntu on the machine; and it is offering a smorgasbord of app-dev options, including some of the sexiest in the industry today. DancingDinosaur never believed it would refer to a mainframe as sexy (must be time to retire).

Along with LinuxONE Emperor, IBM announced an entry dedicated Linux machine, the LinuxONE Rockhopper. (BTW, notice the new playfulness in IBM’s product naming.) Rockhopper appears to be very similar to what IBM used to call a Business Class z, although IBM has stepped away from that designation; the closest you may get to a z13 business class machine may be LinuxONE Rockhopper. Rockhopper, according to IBM, is designed for clients and emerging markets seeking the speed, security, and availability of the mainframe in a smaller package.

The biggest long term potential impact from the announcement may come out of the Open Mainframe Project. Like many of IBM’s community project initiatives, IBM is starting by seeding the open community with z code, in effect creating the beginning of an open z System machine.  IBM describes this as the largest single contribution of mainframe code from IBM to the open source community. A key part of the mainframe code contributions will be the z’s IT predictive analytics that constantly monitor for unusual system behavior and help prevent issues from turning into failures. In effect, IBM is handing over zAware to the open source community. It had already announced intentions to port zAware to Linux on z early this year so it might as well make it fully open. The code, notes IBM, can be used by developers to build similar sense-and-respond resiliency capabilities for other systems.

The Open Mainframe Project, being formed with the Linux Foundation, will involve a collaboration of nearly a dozen organizations across academia, government, and corporate sectors to advance development and adoption of Linux on the mainframe. It appears that most of the big mainframe ISVs have already signed on. DancingDinosaur, however, expressed concern that this approach brings the possibility of branching the underlying functionality between z and Linux versions. IBM insists that won’t happen since the innovations would be implemented at the software level, safely insulated from the hardware. Furthermore, should an innovation emerge that makes sense for the z System, perhaps around the zAware capabilities, the company is prepared to bring it back to the core z.

The newly announced pricing should also present an interesting opportunity for shops running Linux on z. As IBM notes, new financing models for the LinuxONE portfolio provide flexibility in pricing and resources that allow enterprises to pay for what they use and scale up quickly when their business grows. Specifically, for IBM hardware and software, the company is offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores; in that case you order what you need and can decrease licenses or cancel on 30 days’ notice. Or you can rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace the machine. Having spent hours attending mainframe pricing sessions at numerous IBM conferences, this seems refreshingly straightforward. IBM has not yet provided any prices to analysts, so whether this actually is a bargain remains to be seen. But at least you have pricing option flexibility you never had before.
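As a back-of-the-envelope way to compare the three payment options, consider the sketch below. Since IBM has not yet provided prices, every dollar figure here is an invented placeholder, not an IBM quote:

```python
# Hypothetical comparison of the three LinuxONE payment options described
# above. All dollar figures are illustrative placeholders only.

def pay_per_use(base_monthly, used_cores, rate_per_core, months):
    # fixed monthly payment plus usage that scales up or down
    return months * (base_monthly + used_cores * rate_per_core)

def per_core_license(cores, license_per_core, months):
    # software licenses for designated cores; cancellable on 30 days' notice
    return months * cores * license_per_core

def monthly_rental(rent, months):
    # no upfront payment; return, buy, or replace at the end of 36 months
    return months * rent

# Example: one year of each option, with placeholder dollar amounts
print(pay_per_use(base_monthly=10_000, used_cores=8, rate_per_core=1_500, months=12))
print(per_core_license(cores=16, license_per_core=2_000, months=12))
print(monthly_rental(rent=40_000, months=12))
```

The point is not the totals but the shapes of the curves: pay-per-use tracks actual consumption, per-core licensing tracks provisioned capacity, and rental is flat regardless of use.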

The introduction of support for both KVM and Ubuntu on the z platform opens intriguing possibilities. Full disclosure: DancingDinosaur was an early Fedora adopter because he could get it to run on a memory-challenged, antiquated laptop. With the LinuxONE announcement, Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux that incorporates Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make either LinuxONE machine very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs. Depending on how IBM ultimately prices things, this could turn into an unexpected bargain for Linux on z data centers that want to save money by consolidating x86 Linux servers, reducing the data center footprint and cutting energy costs. LinuxONE Emperor can handle up to 8,000 virtual servers in a single system, or tens of thousands of containers.

Finally, LinuxONE can run the sexiest app-dev tools using any of the hottest open technologies, specifically:

  • Distributions: Red Hat, SUSE, and Ubuntu
  • Hypervisors: PR/SM, z/VM, and KVM
  • Languages: Python, Perl, Ruby, Rails, Erlang, Java, Node.js
  • Management: WAVE, IBM Cloud Manager, UrbanCode, OpenStack, Docker, Chef, Puppet, VMware vRealize Automation
  • Databases: Oracle, DB2 LUW, MariaDB, MongoDB, PostgreSQL
  • Analytics: Hadoop, BigInsights, DB2 BLU, and Spark

And run the results however you want: single platform, multi-platform, on-prem and off-prem, or multiple mixed cloud environments with a common toolset. Could a combination of LinuxONE alongside a conventional z13 be the mainframe data center you really want going forward?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

API Economy Comes to the IBM z System

June 11, 2015

What comes to mind when you hear (or read) about a RESTful IBM z System? Hint: it is not a mainframe that is loafing. To the contrary, a RESTful mainframe probably is busier than it has ever been, now running a slew of new apps, most likely mobile or social apps with REST APIs connecting to z/OS-based web services plus its usual workloads. Remember web services when SOA first came to the mainframe? They continue today behind the new mobile, cloud, social, and analytical workloads that are putting the spotlight on the mainframe.


Courtesy of IBM: travel fuels mobile activity (click to enlarge)

A variety of Edge2015 sessions, given by Asit Dan, chief architect, z Service API Management and Glenn Anderson, IBM Lab Services and Training, put what the industry refers to as the emerging API economy in perspective. The z, it should come as no surprise, lies at the heart of this burgeoning API economy, not only handling transactions but also providing governance and management to the API phenomenon that is exploding. Check out IBM’s APIs for Dummies.

The difference between first generation SOA and today’s API economy lies in the new workloads—especially mobile and cloud—fueling the surging interest. The mobile device certainly is the fastest growing platform and will likely become the largest platform soon if it is not already, surpassing desktop and laptop systems.

SOA efforts initially focused on the capabilities of the providers of services, noted Dan, particularly the development, run-time invocation, and management of services. The API economy, on the other hand, focuses on the consumption of these services. It really aims to facilitate the efforts of application developers (internal developers and external business partners) who must code their apps for access to existing and new API-enabled services.

One goal of an enterprise API effort is to access already deployed services, such as z-based CICS services or those of a partner. Maybe a more important goal, especially where the z is involved, is to drive use of mainframe software assets by customers, particularly mobile customers. The API effort not only improves customer service and satisfaction but could also drive added revenue. (Have you ever fantasized about the z as a direct revenue generator?)

This calls, however, for a new set of interfaces. As Dan notes in a recent piece, APIs for accessing these assets, defined using well-known standards such as web services and Representational State Transfer (REST) with JSON (JavaScript Object Notation) and published via an easily accessible catalog, make it efficient to subscribe to APIs, obtain permissions, and build new applications. Access to the APIs can now be controlled and tracked during run-time invocations (and even metered where revenue generation is the goal).
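To make the catalog-and-subscribe flow concrete, here is a minimal sketch of what invoking such an API might look like. The endpoint, API-key header, payload fields, and sample response are all hypothetical illustrations, not any published IBM contract:

```python
# Sketch of a REST call to a hypothetical z-hosted, API-enabled service.
# Nothing is sent over the network here; the point is the shape of the
# request and the JSON response, per the REST + JSON pattern described above.
import json
from urllib.request import Request

ENDPOINT = "https://api.example.com/cics/accounts/balance"  # hypothetical

payload = json.dumps({"accountId": "1234567890"}).encode("utf-8")
req = Request(
    ENDPOINT,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "X-API-Key": "subscriber-key-from-catalog",  # permission via subscription
    },
    method="POST",
)

# A JSON response from such a service might look like this:
sample_response = '{"accountId": "1234567890", "balance": 2543.17, "currency": "USD"}'
parsed = json.loads(sample_response)
print(parsed["balance"], parsed["currency"])
```

The API-key header is where run-time control, tracking, and metering hook in: the gateway that fronts the z-based service can count, throttle, or bill every invocation carrying that key.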

Now the API economy can morph into a commercial exchange of business functions, capabilities, and competencies as services using web APIs, noted Glenn Anderson at Edge2015. In-house business functions running on the z can evolve into an API as-a-service delivery vehicle, which amounts to another revenue stream for the mainframe data center.

The API economy often is associated with the concept of containers. Container technology provides a simplified way to make applications more mobile in a hybrid cloud, Anderson explained, and brings some distinct advantages. Specifically, containers are much smaller in size than virtual machines and provide more freedom in the placement of workloads in a cloud (private, public, hybrid) environment. Container technology is being integrated into OpenStack, which is supported on the z through IBM Cloud Manager. Docker is the best known container technology and it works with Linux on z.

With the combination of SOA, web services, REST, JSON, OpenStack, and Docker all z capable, a mainframe data center can fully participate in the mobile, apps, and cloud API economy. BTW, POWER servers can play the API, OpenStack, and Docker game too. Even Watson can participate in the API economy through IBM’s early March acquisition of AlchemyAPI, a provider of scalable cognitive computing API services. The acquisition will drive the API economy into cognitive computing as well. Welcome to the mainframe API economy.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Legacy Storage vs. Software Defined Storage at IBM Edge2015

May 21, 2015

At Edge2015 software defined storage (SDS) primarily meant IBM Spectrum Storage, the new storage software portfolio designed to address data storage inefficiencies by separating storage functionality from the underlying hardware through an intelligent software layer. To see what DancingDinosaur posted on Spectrum Storage in February when it was unveiled click here. Spectrum became the subject of dozens of sessions at the conference. Check out a general sampling of Edge2015 sessions here.

Jon Toigo, a respected storage consultant and infuriating iconoclast to some, jumped into the discussion of legacy storage vs. SDS at a session provocatively titled 50 Shades of Grey. He started by declaring “true SANs never reached the market.” On the other hand, SDS promises the world—storage flexibility, efficiency, avoidance of vendor lock-in, and on and on.


Courtesy Jon Toigo (click to enlarge)

What the industry actually did as far as storage sharing, Toigo explained, was provide serial SCSI over a physical layer fabric and use a physical layer switch to make and break server-storage connections at high speed. Although network-like, there was no management layer (which should be part of any true network model, he believes). Furthermore, the result was limited by the Fibre Channel Protocol and standards designed so that “two vendors could implement switch products that conformed to the letter of the standard…with absolute certainty that they would NOT work together,” said Toigo. iSCSI later enabled storage fabrics over TCP/IP, which made the fabric easier to deploy since organizations already had TCP/IP networks in place for other purposes.

Toigo’s key requirement: unified storage management, which means managing the diversity and heterogeneity of the arrays comprising the SAN. The culprits preventing this, as he sees it, are the so-called value-add services on array controllers that create islands of storage. You know these services: thin provisioning, on-array tiering, mirroring, replication, dedupe, and more. The same value-add services drive the high cost of storage: “Storage hardware components are commoditized, but value-add software sustains pricing.”

IBM Spectrum Storage incorporates more than 700 patents and is designed to help organizations transform to a hybrid cloud business model by managing massive amounts of data where they want it and how they want it, quickly and easily from a single dashboard. The software helps clients move data to the right location at the right time: to flash storage for fast access, or to tape and cloud for the lowest cost.

This apparently works for Toigo, with only a few quibbles: vendors make money by adding more software, and inefficiency is added when they implement non-standard commands. IBM, however, is mostly in agreement with Toigo. According to IBM, a new approach is needed to help organizations address [storage] cost and complexity driven by tremendous data growth.  Traditional storage is inefficient in today’s world. However, Spectrum Storage software, IBM continued, helps organizations to more efficiently leverage their hardware investments to extract the full business value of data. Listen closely and you might even hear Toigo mutter Amen.

SDS may or may not be the solution. Toigo titled his session 50 Shades of Grey because the vendors can’t even agree on a definition of what constitutes SDS. Yet it is being presented as a panacea for everything that is wrong with legacy storage.

The key differentiator for Toigo is where a vendor’s storage intelligence resides: on the array controller, in the server hypervisor, or in the software stack. As it turns out, some solutions are hypervisor dedicated or hypervisor dependent. VMware’s Virtual SAN, for instance, only works with its hypervisor. Microsoft’s Clustered Storage Spaces is proprietary to Microsoft, though it promises to share its storage with VMware: simple as pie, just convert your VMware workload into Microsoft VHD format and import it into Hyper-V so you can share the Microsoft SDS infrastructure.

IBM Spectrum passes Toigo’s 50 Shades test. It promises simple, efficient storage without the cost or complexity of dedicated hardware. IBM managers at Edge2015 confirmed Spectrum could run on generic servers and with generic disk arrays. With SDS you want everything agnostic for maximum flexibility.

Toigo’s preferred approach: virtualized SDS with virtual storage pools and centralized select value-add services that can be readily allocated to any workload regardless of the hypervisor. DancingDinosaur will drill down into other interesting Edge2015 sessions in subsequent posts.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

