Posts Tagged ‘SoftLayer’

Revamped IBM Power Systems LC Takes on x86

September 9, 2016

To hear IBM tell it, its revamped and refreshed Power Systems LC lineup will undermine x86 (Intel), HPE, Dell/EMC, and any other purveyor of x86-based systems. Backed by accelerators provided by OpenPOWER community members, IBM appears ready to extend the x86 battle to on-premises, cloud, and hybrid cloud deployments. It promises to deliver better performance at lower cost for all the hot workloads too: artificial intelligence, deep learning, high performance data analytics, and compute-heavy workloads.

ibm-power-systems-s821lc

Two POWER8 processors, 1U config, priced 30% less than an x86 server

Almost a year ago, in October 2015, DancingDinosaur covered IBM’s previous Power Systems LC announcement here. The LC designation stands for Linux Community, and the company is tapping accelerators and more from the OpenPOWER community, just as it did with its recent announcement of POWER9, expected in 2017, here.

The new Power LC systems feature a set of community-delivered technologies IBM has dubbed POWERAccel, a family of I/O technologies designed to deliver composable system performance enabled by accelerators. For GPU acceleration, NVIDIA NVLink delivers nearly 5x better integration between POWER processors and NVIDIA GPUs. For FPGA acceleration, IBM tapped its own CAPI architecture to integrate accelerators that run natively as part of the application.

This week’s Power Systems LC announcement features three new machines:

  • S821LC (pictured above)—includes 2 POWER8 sockets in a 1U enclosure and is intended for environments requiring dense computing.
  • S822LC for Big Data—brings 2 POWER8 sockets for big data workloads and adds big data acceleration through CAPI and GPUs.
  • S822LC for High Performance Computing—incorporates the new POWER8 processor with NVIDIA NVLink to deliver 2.8x the bandwidth to GPU accelerators and up to 4 integrated NVIDIA Pascal GPUs.

POWER8 with NVLink delivers 2.8x the bandwidth of a PCIe data pipe. According to figures provided by IBM comparing the price-performance of the Power S822LC for HPC (20-core, 256 GB, 4x Pascal) with a Dell C4130 (20-core, 256 GB, 4x K80), measured in total queries per hour (qph), the Power System delivered 2.1x better price-performance. The Power Systems server cost more ($66,612 vs. $57,615 for the Dell), but the Power System delivered 444 qph vs. Dell’s 185 qph.
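
The arithmetic behind that 2.1x figure is easy to check. A quick back-of-the-envelope calculation using only the numbers IBM supplied (queries per hour divided by system cost) reproduces the claim:

    # Recompute the price-performance ratio from the figures IBM provided.
    power = {"cost_usd": 66612, "qph": 444}  # Power S822LC for HPC (20-core, 256 GB, 4x Pascal)
    dell = {"cost_usd": 57615, "qph": 185}   # Dell C4130 (20-core, 256 GB, 4x K80)

    power_value = power["qph"] / power["cost_usd"]  # qph per dollar
    dell_value = dell["qph"] / dell["cost_usd"]     # qph per dollar

    print("Power: {:.6f} qph per dollar".format(power_value))
    print("Dell:  {:.6f} qph per dollar".format(dell_value))
    print("Power advantage: {:.1f}x".format(power_value / dell_value))  # roughly 2.1x

In other words, the Power box costs about 16% more but delivers about 2.4x the throughput, which nets out to roughly 2.1x better throughput per dollar.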

The story plays out similarly for big data workloads running MongoDB on the IBM Power S822LC for Big Data (20-core, 128 GB) vs. an HP DL380 (20-core, 128 GB). Here the system cost (server, OS, MongoDB annual subscription) came to $24,870 for the IBM Power and $29,915 for the HP. Power provided 40% more performance at a 31% lower hardware/maintenance cost.

When it comes to the cloud, the new IBM Power Systems LC offerings get even more interesting from a buyer’s standpoint. IBM declared the cloud a strategic imperative about two years ago and needs to demonstrate adoption that can rival the current cloud leaders: AWS, Google, and Microsoft (Azure). To that end, IBM has started to tack on free cloud usage.

For example, during the industry analyst launch briefing IBM declared: Modernize your Power infrastructure for the cloud, get access to IBM Cloud for free, and cut your current operating costs by 50%. Whether you’re talking an on-premises cloud or a hybrid infrastructure, the freebies just keep coming. The free built-in cloud deployment service options include:

  • Cloud Provisioning and Automation
  • Infrastructure as a Service
  • Cloud Capacity Pools across Data Centers
  • Hybrid Cloud with Bluemix
  • Automation for DevOps
  • Database as a Service

These cover both on-premises infrastructure, where you can transform your traditional infrastructure with automation, self-service, and elastic consumption models, and hybrid infrastructure, where you can securely extend to the public cloud with rapid access to compute services and API integration. Other freebies include open source automation, installation and configuration recipes, cross data center inventory, performance monitoring via the IBM Cloud, optional DR as a service for Power, and free access and capacity flexibility with SoftLayer (12-month starter pack).

Will the new LC line and its various cloud freebies get the low cost x86 monkey off IBM’s back? That’s the hope in Armonk. The new LC servers can be acquired at a lower price and can deliver 80% more performance per dollar spent over x86-based systems, according to IBM. This efficiency enables businesses and cloud service providers to lower costs and combat data center sprawl.

DancingDinosaur has developed TCO and ROI analyses comparing mainframe and Power systems to x86 for a decade, maybe more.  A few managers get it, but most, or their staff, have embedded bias and will never accept non-x86 machines. To them, any x86 system always is cheaper regardless of the specs and the math. Not sure even free will change their minds.

The new Power Systems LC lineup is price-advantaged over comparably configured Intel x86-based servers, costing 30% less in some configurations. Online LC pricing begins at $5,999. Additional models with smaller configurations sport lower pricing through IBM Business Partners. All but the HPC machine are available immediately; the HPC machine will ship Sept. 26.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

2016 State of OpenStack Adoption Shows Continued Progress

March 10, 2016

Sixty-one percent of over 600 survey respondents are adopting OpenStack to combat the expense of public cloud alternatives, reports Talligent, a provider of cost and capacity management solutions for OpenStack and hybrid clouds, which conducted the most recent study of OpenStack adoption. Almost as many respondents, 59%, have opted for OpenStack to improve the responsiveness of IT service delivery.

openstack-logo

OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. As OpenStack puts it: A key part of the OpenStack Foundation mission is to inform, and with the ever expanding ecosystem, we felt it was a good time to cut through the noise to give our members the facts needed to make sound decisions.
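
For a sense of what that user self-service looks like once OpenStack is in place, here is a minimal provisioning sketch using the openstacksdk Python client; the cloud name, image, flavor, and network are placeholders for whatever your deployment actually exposes, so treat this as an illustration rather than a recipe:

    # Minimal OpenStack provisioning sketch using openstacksdk.
    # "mycloud" and the image, flavor, and network names are hypothetical placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")  # credentials resolved from clouds.yaml

    image = conn.compute.find_image("ubuntu-16.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    server = conn.compute.create_server(
        name="demo-server",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)  # block until the instance is ACTIVE
    print(server.name)

The same few lines work against any OpenStack cloud the SDK can reach, which is the interoperability point the Foundation keeps making.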

In that spirit, make the OpenStack Marketplace one of your first steps in planning an OpenStack effort. There you will find the technology broken down into digestible chunks with details like which components are included, the versions used, and the APIs exposed. The community has also implemented interoperability testing to validate products displaying OpenStack logos. The results are now available in the Marketplace for public clouds, hosted private clouds, distributions & appliances.

DancingDinosaur has covered OpenStack numerous times, for example here and here. IBM is fully committed to OpenStack. Late last spring it announced an expanded suite of OpenStack services that allow organizations to integrate applications and data across hybrid clouds, including public, dedicated, and local cloud environments, without the fear of vendor lock-in or costly customization.

IBM may be a bit in front of the market on this. The Talligent survey found private clouds will not be replaced by public clouds very soon, with 54% of respondents still expecting their cloud use to be ALL or mostly private five years from now.

But whether this occurs in two years or five, developers and enterprises using IBM Cloud OpenStack Services will be able to launch applications on local, on-premises installations and on public clouds hosted on the SoftLayer infrastructure, VMware, or the IBM Cloud. This can all be done without changing code or configurations. As a result, developers can build and test an application in a public cloud and use the interoperability of OpenStack to seamlessly deploy that same application and data across any combination of clouds: public, dedicated, and local/private.

The Talligent survey also found OpenStack deployments, once in place, are expected to expand quickly beyond development environments, growing from 43% to 89% within 12 months. For QA/Test the expected growth will be a tad stronger, from 47% to 91% within 12 months.

Other interesting tidbits from the survey: the top three workloads currently delivered on OpenStack are new greenfield applications (69%), containers (61%), and web applications (58%). No surprise there. Also, as noted above, private clouds should continue to thrive as OpenStack users expect high levels of private cloud use within the next 5 years. Fourteen percent, however, expect to deploy across a balanced mix of private and public clouds. At the same time, the survey suggests that PaaS, containers, and privately managed OpenStack are expected to grow in use while proprietary public clouds and legacy virtualization are likely to decline.

Finally, the survey respondents voiced their opinions on the OpenStack providers. Although industry vendors like VMware, IBM, HPE, Cisco and more are exploring ways to support customers in a hybrid cloud mix, the respondents, as previously noted, are not quite ready to move to a hybrid model. Still, the respondents voiced a clear desire for more operational tools.

Similarly, a majority of respondents currently using OpenStack are still prepared to maintain most of their environment on-premises, with 54% saying they will continue to be more than 80% private over the next 5 years. This may reflect ongoing concerns of corporate management about security in the public cloud. The survey, however, picked up some ambivalence on this point: 30% of the respondents using OpenStack report planning to move more than 80% of their environments to the public cloud over the next 5 years. Could this be a signal that security concerns may be fading?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Expands Spectrum Storage in the Cloud with Spectrum Protect

September 18, 2015

IBM is targeting storage for hybrid clouds with Spectrum Protect. Specifically, it brings new cloud backup and a new management dashboard aimed at helping businesses back up data to on-premises object storage or the cloud without the expense of cloud-gateway appliances. It also enables advanced data placement across all storage types to maximize performance, availability, and cost efficiency. Spectrum Protect represents the latest part of the IBM Spectrum Storage family, which provides advanced software defined storage (SDS) capabilities and flexible storage delivered as software, an appliance, or a cloud service. IBM announced Spectrum Protect at the end of August.

ibm Spectrum Protect Dashboard dino

Courtesy IBM: Spectrum Protect dashboard (click to enlarge)

Introduced early this year, IBM Spectrum brings a family of optimized SDS solutions designed to work together. It offers SDS file, object, and block storage with common management and a consistent user and administrator experience. Although it is based on IBM’s existing storage hardware products like XIV, Storwize, IBM FlashSystem, and SVC, you can deploy it as software on some non-IBM hardware too. It also offers support for VMware environments and includes VMware API support for VASA, VAAI, and VMware SRM. With Spectrum, IBM appears to have come up with a winner; over the last six months, IBM reports, more than 1,000 new clients have chosen products from the IBM Spectrum Storage portfolio.

Specifically, IBM Spectrum Protect supports IBM Cloud infrastructure today, with plans to expand to other public clouds in the future. IBM Spectrum Accelerate (XIV block storage) also can be accessed as a service by IBM Cloud customers via the SoftLayer cloud infrastructure. There it allows companies to deploy block storage on SoftLayer without having to buy new storage hardware or manage an appliance farm.

In competitive analysis, IBM found that a single IBM Spectrum Protect server performs the work of up to 15 CommVault servers. This means that large enterprises can consolidate backup servers to reduce cost and complexity while managing data growth from mobile, social, and Internet of Things environments.  Furthermore, SMBs can eliminate the need for a slew of infrastructure devices, including additional backup servers, media servers, and deduplication appliances, thereby reducing complexity and cost. Cost analysis with several beta customers, reports IBM, indicates that the enhanced IBM Spectrum Protect software can help clients reduce backup infrastructure costs on average by up to 53 percent.

IBM reports that the Spectrum Storage portfolio can centrally manage more than 300 different storage devices and yottabytes (1 yottabyte = 10^24 bytes) of data. Its device interoperability is the broadest in the industry, incorporating both IBM and non-IBM hardware and tape systems. IBM Spectrum Storage can help reduce storage costs up to 90 percent in certain environments by automatically moving data onto the most economical storage device – either from IBM or non-IBM flash, disk, and tape systems.

IBM Spectrum Storage portfolio packages key storage software from conventional IBM storage products. These include IBM Spectrum Accelerate (IBM XIV), Spectrum Virtualize (IBM SAN Volume Controller along with IBM Storwize), Spectrum Scale (IBM General Parallel File System or GPFS technology, previously referred to as Elastic Storage), Spectrum Control (IBM Virtual Storage Center and IBM Storage Insights), Spectrum Protect (Tivoli Storage Manager family) and Spectrum Archive (various IBM tape backup products).

The portfolio is presented as a software-only product and, presumably, you can run it on IBM and some non-IBM storage hardware if you choose. You will have to compare the cost of the software license with the cost of the IBM and non-IBM hardware to decide which gets you the best deal. It may turn out that running Spectrum Accelerate (XIV) on low-cost generic disks, rather than buying a rack of XIV disks to go with it, is the lowest-price option. But keep in mind that the lowest cost generic disk may not meet your performance or reliability specifications.

IBM reports it also is enhancing the software-only version of IBM Spectrum Accelerate to reduce costs by consolidating storage and compute resources on the same servers. In effect, IBM is making XIV software available with portable licensing across XIV systems, on-premises servers, and cloud environments to offer greater operational flexibility. Bottom line: possibly a good deal, but be prepared to do some detailed comparative cost analysis to identify the best mix of SDS, cloud storage, and hardware at the best price for your particular needs.

In general, however, DancingDinosaur favors almost anything that increases data center configuration and pricing flexibility. With that in mind consider the IBM Spectrum options the next time you plan storage changes. (BTW, DancingDinosaur also does storage and server cost assessments should you want help.)

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Ranked #1 in Midrange Servers and Enterprise Network Storage

August 13, 2015

Although the financial markets may be beating up IBM, the technology world continues to acclaim IBM technology and products. Most recently, IBM ranked on top in the CRN Annual Report Card (ARC) Survey recognizing best-in-class vendors in the categories of partnership, support, and product innovation. But the accolades don’t stop there.

Mobile Security Infographic

Courtesy of IBM (click to enlarge)

IBM was named a leader in four key cloud services categories—hosting, overall cloud professional services, cloud consulting services, and systems integration—by the independent technology market research firm Technology Business Research, Inc. (TBR).  This summer Gartner also named IBM as a leader in Security Information and Event Management (SIEM) in the latest Gartner Magic Quadrant for SIEM, this for the seventh consecutive year. Gartner also named IBM as a Leader in the 2015 Magic Quadrant for Mobile Application Development Platforms, specifically calling out the IBM MobileFirst Platform.

The CRN award addresses the technology channel. According to IBM, the company and its business partners are engaging with clients in new ways to work, building the infrastructure, and deploying innovative solutions for the digital era. This should come as no surprise to anyone reading this blog; the z13 was designed expressly to be a digital platform for the cloud, mobile, and big data era. IBM’s z and Power Systems servers and storage solutions were designed specifically to address the challenges these areas present.

Along the same lines, IBM’s commitment to open alliances has continued unabated this year, starting with its focus on innovation platforms designed for big data and superior cloud economics, which continue to be the cornerstone of IBM Power Systems. The company also plays a leading role in the OpenPOWER Foundation and the Linux Foundation, as well as ramping up communities around the Internet of Things (developerWorks Recipes) and the open cloud (developerWorks Open). The last two were topics DancingDinosaur tackled recently, here and here.

The TBR report, entitled Hosted Private & Professional Services Cloud Benchmark, provides a market synopsis and growth estimates for 29 cloud providers in the first quarter of 2015. In that report, TBR cited IBM as:

  • The undisputed growth leader in overall professional cloud services
  • The leader in hosted private cloud and managed cloud services
  • A leader in OpenStack vendor acquisitions and OpenStack cloud initiatives
  • A growth leader in cloud consulting services, bridging the gap between technology and strategy consulting
  • A growth leader in cloud systems integration services

According to the report: IBM’s leading position across all categories remains unchallenged as the company’s established SoftLayer and Bluemix portfolios, coupled with in-house cloud and solutions integration expertise, provide enterprises with end-to-end solutions.

Wall Street analysts and pundits clearly look at IBM differently than IT analysts.  The folks who look at IBM’s technology, strategy, and services, like those at Gartner, TBR, and the CRN report card, tell a different story. Who do you think has it right?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

z Systems and Cloud Lead IBM 2Q Results

July 24, 2015

DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.

Infographic - IBM Q2 2015 Earnings - Cloud - July 20 2015 - Final

Strong IBM cloud performance, Q2 2015 (click to enlarge)

As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency).  Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent.  Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).

It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the OpenPOWER Foundation, a good move in theory, which DancingDinosaur applauds. Still, much depends on the Foundation gaining increased momentum and individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising, but how fast products will arrive is unclear. There also is potential for the commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue also is not clear.

Cloud revenue was up more than 70 percent, adjusting for currency and divested businesses; up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM this is an encouraging development. The company’s cloud strategy is starting to bear fruit.

The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud in terms of cost and pricing for IT-using organizations is what amounts to a race to the bottom. With pricing immediately apparent and lower pricing just a click away, it has become a feast for the bottom grazers to whom the lowest price is all that matters. For companies like IBM and Oracle, which also has declared cloud a strategic initiative, and other large legacy enterprise platform providers, the challenge is to be competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings in Watson, Bluemix, and others, but can they deliver enough revenue fast enough to offset the reduction in legacy platform revenue? Remember, x86 is off IBM’s menu.

Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin, he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive—which they can be—and will do a better job for many of those workloads, correct again.

Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which seems to be doing a slow-motion platform migration off the mainframe that seemingly may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”

As noted above, how much revenue Power can generate for IBM depends on how fast OpenPOWER Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint. There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take and whether the Rackspace initiative will catch on elsewhere.

In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur is expecting a business class z13 in the coming months and more.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

OpenPOWER Starts Delivering the Goods

March 13, 2015

Are you leery of multi-vendor consortiums? DancingDinosaur as a rule is skeptical of the grand promises they make until they actually start delivering results. That was the case with OpenPOWER last spring when you read here that the OpenPOWER Foundation was introduced and almost immediately forgotten.

 power8 cpu blocks

IBM POWER8 processor, courtesy of IBM (click to enlarge)

But then last fall DancingDinosaur reported on NVIDIA and its new GPU accelerator integrated directly into the server here. This too was an OpenPOWER Foundation-based initiative. Suddenly, DancingDinosaur is thinking the OpenPOWER Foundation might actually produce results.

For example, IBM introduced a new range of systems capable of handling massive amounts of computational data faster, at nearly 20 percent better price/performance than comparable Intel Xeon v3 processor-based systems. The result: a superior alternative to closed, commodity-based data center servers. Better performance at a lower price. What’s not to like?

The first place you probably want to apply this improved price/performance is to big data, which generates 2.5 quintillion bytes of data across the planet every day. Even the minuscule portion of this amount that you actually generate will very quickly challenge your organization to build a sufficiently powerful technology infrastructure to gain actionable insights from this data fast enough and at a price you can afford.

The commodity x86 servers used today by most organizations are built on proprietary Intel processor technology and are increasingly stretched to their limits by workloads related to big data, cloud, and mobile. By contrast, IBM is designing a new data-centric approach to systems that leverages the building blocks of the OpenPOWER Foundation.

This is plausible given the success of NVIDIA with its GPU accelerator. And just this past week Altera demonstrated its OpenPOWER-based FPGA, now being used by several other Foundation members who are collaborating to develop high-performance compute solutions that integrate IBM POWER chips with Altera’s FPGA-based acceleration technologies.

Formed in late 2013, the OpenPOWER Foundation has grown quickly from 5 founders to over 100 today. All are collaborating in various ways to leverage the IBM POWER processor’s open architecture for broad industry innovation.

IBM is looking to offer the POWER8 core and other future cores under the OpenPOWER initiative, but it is also making previous designs available for licensing. Partners are required to contribute intellectual property to the OpenPOWER Foundation to gain high-level status. The earliest successes have been around accelerators and such, some based on POWER8’s CAPI (Coherent Accelerator Processor Interface) expansion bus, built specifically to integrate easily with external coprocessors like GPUs, ASICs, and FPGAs. DancingDinosaur will know the OpenPOWER Foundation is truly on the path to acceptance when a member introduces a non-IBM POWER8 server. Have been told that may happen in 2015.

In the meantime, IBM itself is capitalizing on the OpenPOWER Foundation. Its new IBM Power S824L servers are built on IBM’s POWER8 processor and tightly integrate other OpenPOWER technologies, including NVIDIA’s GPU accelerator. Built on the OpenPOWER stack, the Power S824L provides organizations the ability to run data-intensive tasks on the POWER8 processor while offloading other compute-intensive workloads to GPU accelerators, which are capable of running millions of data computations in parallel and are designed to significantly speed up compute-intensive applications.

Further leveraging the OpenPOWER Foundation, IBM announced at the start of March that SoftLayer will offer OpenPOWER servers as part of its portfolio of cloud services. Organizations will then be able to select OpenPOWER bare metal servers when configuring their cloud-based IT infrastructure from SoftLayer, an IBM company. The servers were developed to help organizations better manage data-intensive workloads on public and private clouds, effectively extending their existing infrastructure inexpensively and quickly. This is possible because OpenPOWER servers leverage IBM’s licensable POWER processor technology and feature innovations resulting from open collaboration among OpenPOWER Foundation members.

Due in the second quarter, the SoftLayer bare metal servers run Linux applications and are based on the IBM POWER8 architecture. The offering, according to IBM, also will leverage the rapidly expanding community of developers contributing to the POWER ecosystem as well as independent software vendors that support Linux on Power and are migrating applications from x86 to the POWER architecture. Built on open technology standards that begin at the chip level, the new bare metal servers are built to assist a wide range of businesses interested in building custom hybrid, private, and public cloud solutions based on open technology.

BTW, it is time to register for IBM Edge2015 in Las Vegas, May 10-15. Edge2015 combines all of IBM’s infrastructure products with both a technical track and an executive track. You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer.

Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

IBM Brings Cloud DevOps to the Mainframe

December 3, 2014

Is your organization ready for DevOps?  It should be coming to System z data centers almost any day now, riding in on newly announced IBM cloud-based DevOps services, software, and infrastructure designed to help large organizations develop and deliver quality software faster.

IBM Launches Bluemix Garage at London's Level39

Launch of the Bluemix Garage in London

DevOps streamlines enterprise workflow by truncating the development, testing, and deployment process. It entails collaborative communications around the end-to-end enterprise workflow and incorporates continuous feedback to expedite the process. DevOps evolved out of Agile methodologies over a decade ago.

Agile was intended to streamline the traditional waterfall IT development process by putting developers, business unit people, and the deployment folks together to build, test, and deploy new applications fast. Agile teams would deliver agreed-upon and tested functionality within a month. Each deliverable was short, addressing only a subset of the total functionality, and each was followed by the next, containing yet more functionality. In the process, previously delivered functionality might be modified or replaced by a new deliverable.

IBM is streamlining the process further by tapping into the collaborative power of the company’s cloud portfolio and business transformation experience to speed the delivery of software that supports new models of engagement. To be clear, IBM definitely is not talking about using DevOps with the organization’s systems of record—the core transaction systems that are the hallmark of the z and the heartbeat of the enterprise. The most likely candidates will be systems of engagement, systems of innovation, and analytics systems. These are systems that need to be delivered fast and will change frequently.

According to IBM, software-driven innovation has emerged as a primary way businesses create and deliver new value to customers. A survey of 400 business and IT executives by the IBM Institute for Business Value showed that businesses that are more effective at software delivery are also more profitable than their peers nearly 70 percent of the time. DevOps provides a way for businesses to remain competitive, applying lean and agile principles to software development to speed the delivery of software that meets new market requirements.

Agile represented a radical departure from the waterfall process, which called for developers to take a full set of business requirements, disappear for two years, and return with a finished application that worked right. Except that it often took longer for the developers to return with the code, and the application didn’t work as promised. By then the application was well over budget and late. System z shops know this well.

DevOps today establishes a continuous, iterative process flow between the development team and the deployment group and incorporates many Agile concepts, including the active involvement of the business people, frequent testing, and quick release cycles. As the IBM survey noted, DevOps was spurred by the rise of smartphones and mobile computing. Mobile users demand working functionality fast and expect frequent updates. Two-year release cycles were unacceptable; competitors would be out with newer and better apps long before. Even six-month release cycles seemed unresponsive. This is one of the realities DevOps addresses. Another reality is extreme scaling, something z data centers understand.

According to IBM, the company’s new DevOps Innovation Services help address the challenge of scaling development, enabling enterprises to shorten their software delivery lifecycle. The hybrid cloud services combine IBM’s industry expertise from hundreds of organizational change and application development projects with the industry’s leading application development portfolio, especially Bluemix, IBM’s open DIY cloud PaaS platform. They also apply the flexibility of IBM’s enterprise-grade, hybrid cloud portfolio, which was recently ranked by Synergy Research Group as the leading hybrid and private cloud for the enterprise. These services are based on SoftLayer, IBM’s cloud infrastructure platform.

In a second DevOps-related announcement last month IBM described an initiative to bring a greater level of control, security and flexibility to cloud-based application development and delivery with a single-tenant version of Bluemix. The new initiative enables developers to build applications around their most sensitive data and deploy them in a dedicated cloud environment to help them capture the benefits of cloud while avoiding the compliance, regulatory and performance issues that are presented with public clouds. System z shops can appreciate this.

Major enterprise system vendors like IBM, EMC, Cisco, and Oracle are making noises about DevOps. As far as solid initiatives go, IBM appears far ahead, especially with the two November announcements.

DancingDinosaur is Alan Radding, an independent IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. Find more of his IT writing at Technologywriter.com and here.

New IBM Initiatives Speed System z to Hybrid Cloud and IoT

November 20, 2014

Cloud computing, especially hybrid cloud computing, is going mainstream. The same is happening with the Internet of Things (IoT). For mainframe shops unsure of how to get there, IBM promises to speed the journey with two recent initiatives.

Let’s start with hybrid clouds and the z. As IBM describes it, enterprises will continue to derive value from their existing investments in IT infrastructure while looking to the cloud to bolster business agility. The upshot: organizations increasingly are turning to hybrid clouds to obtain the best of both worlds by linking on-premises IT infrastructure to the public cloud.

To that end, IBM has designed and tested various use cases around enterprise hybrid architecture involving System z and SoftLayer. These use cases focus on the relevant issues of security, application performance, and potential business cost.

One scenario introduces the cloud as an opportunity to enrich enterprise business services running on the z with external functionality delivered from the cloud.

hybrid use case

Here a retail payment system [click graphic to enlarge] is enriched with global functionality from a loyalty program that allows the consumer to accumulate points. It involves the z and its payment system, a cloud-based loyalty program, and the consumer using a mobile phone.
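
As a rough sketch of that integration pattern (the loyalty endpoint and payload here are invented purely for illustration), the payment application on the z might notify the cloud-hosted loyalty service over REST once each transaction commits:

    # Hypothetical sketch: the z-side payment application notifies a cloud-hosted
    # loyalty service after a completed purchase. Endpoint and payload are invented.
    import requests

    LOYALTY_URL = "https://loyalty.example.com/api/points"  # hypothetical cloud service

    def award_points(customer_id, amount):
        """Credit loyalty points for a completed purchase (1 point per dollar)."""
        payload = {"customer": customer_id, "points": int(amount)}
        resp = requests.post(LOYALTY_URL, json=payload, timeout=10)
        resp.raise_for_status()

    # Called once the core payment transaction commits on the z.
    award_points("C-1001", 42.50)

The core transaction never leaves the mainframe; only the enrichment call crosses into the cloud, which is what keeps the critical data and SLAs on the z.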

The hybrid cloud allows the z data center to maintain control of key applications and data in order to meet critical business service level agreements and compliance requirements while tapping the public cloud for new capabilities, business agility, or rapid innovation and shifting expenditure from CAPEX to OPEX.

Since the z serves as the data backbone for many critical applications, it makes sense to connect on-premises System z infrastructure with an off-premises cloud environment. In its paper IBM suggests the hybrid architecture should be designed in a way that gives businesses the flexibility to put their workloads and data where they make the most sense, mixing the right blend of public and private cloud services. And, of course, it also must ensure data security and performance. That’s why you want the z there.

To get started, check out the use cases IBM provides, like the one above. Already a number of organizations are trying the IBM hybrid cloud: Macy’s, Whirlpool, Daimler, and Sicoss Group. Overall, nearly half of IBM’s top 100 strategic outsourcing clients are already implementing cloud solutions with IBM as they transition to a hybrid cloud model.

And if hybrid cloud isn’t enough to keep you busy, it also is time to start thinking about the IoT. To make it easier, last month the company announced the IBM Internet of Things Foundation, an extension of Bluemix. Like Bluemix, this is a cloud service that, as IBM describes it, makes it possible for a developer to quickly extend an Internet-connected device such as a sensor or controller into the cloud, build an application alongside the device to collect the data, and send real-time insights back to the developer’s business. That data can be analyzed on the z too, using Hadoop on zLinux, which you read about here a few weeks ago.
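
To make that concrete, here is a rough sketch of a device publishing a reading to the IoT Foundation over MQTT with the paho-mqtt client. The organization, device type, device ID, and token are placeholders, and the broker host and topic layout follow the conventions IBM documented for the service, so check the current docs before relying on them:

    # Hypothetical device publishing a temperature reading to the IoT Foundation.
    # Organization, device type/ID, and token are placeholders.
    import json
    import paho.mqtt.client as mqtt

    ORG, DEV_TYPE, DEV_ID, TOKEN = "myorg", "sensor", "dev001", "my-auth-token"

    client = mqtt.Client(client_id="d:{}:{}:{}".format(ORG, DEV_TYPE, DEV_ID))
    client.username_pw_set("use-token-auth", TOKEN)
    client.connect("{}.messaging.internetofthings.ibmcloud.com".format(ORG), 1883)

    # Device events are published to iot-2/evt/<event_id>/fmt/<format>.
    client.publish("iot-2/evt/status/fmt/json", json.dumps({"d": {"temperature": 22.5}}))
    client.disconnect()

An application subscribed to that event stream, on Bluemix or back on the z, can then analyze the readings and push insights back out.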

IoT should be nothing new to System z shops. DancingDinosaur discussed it this past summer here. Basically it’s the POS or ATM network on steroids, with orders of magnitude more complexity. IDC estimates that by 2020 there will be as many as 28 billion autonomous IoT devices installed; today it estimates there are nine billion.

Between the cloud, hybrid clouds, and IoT, z data centers will have a lot to keep them busy. But with IBM’s new initiatives in both areas you can get simple, highly secure and powerful application access to the cloud, IoT devices, and data. With the IoT Foundation you can rapidly compose applications, visualization dashboards and mobile apps that can generate valuable insights when linked with back office enterprise applications like those on the z.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing at Technologywriter.com and here.

IBM Mainframe Tweet-Up with Ross Mauri Generates Action

August 15, 2014

DancingDinosaur has participated in numerous Mainframe Tweet-Ups before, most recently Enterprise2013 and Edge2014.  The Tweet-Up last Tuesday (8/12) might have been the biggest yet, generating numerous questions and responses (over 120 in one hour by DancingDinosaur’s count) on a range of topics including Linux on the mainframe, mobile on the mainframe, and more.

A Tweet-Up is a Twitter event where a panel of experts responds to questions from an audience and interactive discussions revolve around the questions. Think of the Mainframe Tweet-Up as a very mini IBM Enterprise2014. But instead of one expert panel and 100+ participants, there will be over 600 expert sessions, an army of IBM experts to present and respond to questions, and over 50 case studies where you can talk directly to the user and get the real nitty-gritty.

The central attraction of Tuesday’s Mainframe Tweet-Up was Ross Mauri, General Manager of the IBM System z business.  Mauri is a veteran of enterprise servers and systems, having previously held a similar position with Power Systems. Of course he is a strong proponent of the mainframe, but he also is a big advocate for mobile on the System z.

In a recent post Mauri notes that enterprise mobility will be a $30 billion market next year with twice as many corporate employees using their own mobile devices as they are today.  According to Gartner, by 2017, 25% of all enterprises will have a mobile app store.  Check out Mauri’s post, Mobility made possible with the mainframe, here.

Mauri really sees the System z as an essential platform for mobile: “Given IBM System z’s unprecedented enterprise scale, availability, cloud, analytics, and mobile capabilities, we (the IBM mainframe team) are poised to deliver value to clients’ enterprise mobility needs. The marketplace demands mobile capabilities and has for years because their customers demand it of them.  Across industries, consumers mandate immediate, any time access to their accounts and information.  Consider what’s possible when IBM System z delivers enterprise mobility to these institutions,” he wrote.

Africa stands to gain the most from the mobile mainframe, especially when it comes to banking, Mauri continued. Nearly 80% of Africa’s population – 326 million people – is unbanked, denying them the ability to get education and business loans or support their families. First National Bank (FNB) and the mainframe are changing that. Using System z’s mobile bank-in-a-box solutions, FNB brings secure banking to the customer in ways they’re familiar with – to the tune of 234 million monthly mobile banking transactions. IBM’s System z bank-in-a-box solutions eliminate the need for FNB’s customers to rely on couriers. Families have their funds in seconds instead of days and save sizable courier fees. For the people who now use this solution, their lives have been changed forever.

DancingDinosaur has been on top of the mobile mainframe since IBM first began talking about it in the spring of 2010, and most recently here and here. The mainframe, especially with the new discounted z/OS pricing, makes an ideal cost-efficient platform for mobile computing. The z is a particularly good choice since much of the processing resulting from mobile activity will be handled right on the z, probably even the same z.

Mobile certainly was a top topic in the Mainframe Tweet-Up. One discussion addressed whether mobile would increase mainframe workloads or just shift it to coming from different devices. Instead of using an ATM to check your balance, for example, you would use the bank’s mobile app. The responses were varied: everyone agreed that mobile would increase transaction volume overall, but the transactions would follow a different cycle, a predominantly read cycle. If you have an opinion, feel welcome to weigh in with a comment.

Another discussion focused on mainframe simplification and looked at z/OSMF and CICS Explorer as two simplification/GUI tools, along with z/OS Health Checker, RTD, and PFA. A different discussion turned to APIs and the z, concluding that the z has the APIs to work effectively with SoftLayer and also connect with APIM. Another participant added that the z works with RESTful APIs. And not surprisingly there was an active discussion on Linux on z. The expert panelists and participants overall kept things very lively.
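
For anyone curious about the SoftLayer side of that exchange, the SoftLayer API is an ordinary REST interface callable from any platform with an HTTP client; here is a minimal sketch in Python, with the username and API key as placeholders:

    # Minimal SoftLayer REST API call; username and API key are placeholders.
    import requests

    SL_USER = "myuser"  # SoftLayer portal username (hypothetical)
    SL_KEY = "abc123"   # API key generated in the portal (hypothetical)

    # SoftLayer_Account returns basic details about the calling account.
    resp = requests.get(
        "https://api.softlayer.com/rest/v3/SoftLayer_Account.json",
        auth=(SL_USER, SL_KEY),
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())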

The Mainframe Tweet-Up was a small taste of what is coming at IBM Enterprise2014, Oct. 6-10 at the Venetian in Las Vegas. Register now; last year’s event sold out. IBM is expecting over 3,000 attendees. DancingDinosaur certainly will be there.

DancingDinosaur is Alan Radding. Follow DancingDinosaur on Twitter, @mainframeblog. You also can find him at Technologywriter.com.

