Storage Looms Large at IBM Edge2015

April 17, 2015

It's been a busy year in storage, with software defined storage (SDS), real-time compression, flash, storage virtualization, OpenStack, and more all gaining traction. Similarly, big data, analytics, cloud, and mobile are impacting storage. You can expect to find them and more at IBM Edge2015, coming May 10-15 in Las Vegas.

But storage continues to make news every week. Recently IBM scientists demonstrated an areal recording density triumph, hitting 123 billion bits of uncompressed data per square inch on low-cost particulate magnetic tape. That translates into the equivalent of a 220 terabyte tape cartridge that could fit in the palm of your hand, comparable to 1.37 trillion mobile text messages or the text of 220 million books, which would require a 2,200 km bookshelf spanning from Las Vegas to Houston, Texas (see graphic below).
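The 220 TB figure is easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below the tape length and width are assumptions (LTO-like dimensions), not numbers from IBM's announcement:

```python
# Rough sanity check of the 220 TB cartridge claim at 123 Gbit per
# square inch. Cartridge geometry is assumed (LTO-like), not from
# IBM's announcement.
tape_length_m = 1000            # roughly 1 km of tape per cartridge
tape_width_in = 0.5             # half-inch tape
areal_density = 123e9           # 123 billion bits per square inch

tape_length_in = tape_length_m * 39.37          # meters -> inches
tape_area_sq_in = tape_length_in * tape_width_in
raw_bytes = tape_area_sq_in * areal_density / 8
raw_tb = raw_bytes / 1e12

print(f"Raw capacity: {raw_tb:.0f} TB")  # ~300 TB raw
```

Format and servo overhead push usable capacity well below the raw number, which is consistent with the quoted 220 TB.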

Tape areal density breakthrough

Courtesy of IBM (click to enlarge)

Let’s take a look at some sessions delving into the current hot storage topics at Edge2015, starting with tape, since we’ve been talking about it.

(sSS1335) The Future of Tape; presenter Mark Lantz. He discusses current and future scaling trends of magnetic tape technology—see announcement above—from the perspective of IBM Research. He begins by comparing recent scaling trends of both tape and hard disk drive technology. He then looks at the future capacity scaling potential of tape and hard disks. In that context he offers an in-depth look at a new world record tape areal density demonstration of more than 100 Gb/in2, performed by IBM Research in collaboration with Fujifilm using low-cost particulate tape media. He also discusses the new hardware and tape media technologies developed for this demonstration as well as key challenges for the continued scaling of tape.

If you are thinking future, check out this session too. (sBA2523) Part III: A Peek into the Future; presenter Bruce Hillsberg. This session looks at novel and innovative technologies to address clients' most challenging technical and business problems across a wide range of technologies and disciplines. The presentation looks at everything from the most fundamental materials level all the way to working on the world's largest big data problems. Many of the technologies developed by the Storage Systems research team lead to new IBM products or become new features in existing products. Topics covered in this lecture include atomic scale storage, research into new materials, advances in current storage media, advanced object stores, cloud storage, and more.

Combine big data, flash, and the z13 all here. (sBA1952) How System z13 and IBM DS8870 Flash Technology Enables Your Hadoop Environments; presenter Renan Ugalde. Analyzing large amounts of data introduces challenges that can impact the goals of any organization. Companies require a reliable, high-performing infrastructure to extract value from their structured and unstructured data. The unique features offered by the integration of IBM System z13 and DS8870 Flash technology enable a platform to support real-time decisions such as fraud detection. This session explains how integration among System z13, DS8870, and Hadoop maximizes performance by enabling the infrastructure's unique big data capabilities.

Jon Toigo is an outstanding non-IBM presenter and somewhat of an iconoclast when it comes to storage. This year he is offering a 3-part session on Disaster Recovery Planning in an Era of Mobile Computing and Big Data:

  • (aBA2511) Part I: For all the hype around hypervisor-based computing and new software-defined infrastructure models, the ongoing need for disaster preparedness is often being buried in the discussion. High availability server clustering is increasingly believed to trump disaster recovery preparations, despite the fact that the transition to an agile data center is fraught with disaster potentials. In the first of three sessions, Toigo looks at the trends that are occurring in IT and the potential they present for disruption.
  • (sBA2512) Part II: builds on the previous session by examining the technologies available for data protection and the trend away from backups in favor of real-time mirroring and replication. He notes promising approaches, including storage virtualization and object storage, that can make a meaningful contribution.
  • (sBA2513) Part III: completes his disaster recovery planning series with the use of mobile computing technologies and public clouds as an adjunct to successful business recovery following an unplanned interruption event. Here he discusses techniques and technologies that either show promise as recovery expediters or may place businesses at risk of an epic fail.

Several SDS sessions follow: (sSS0884) Software Defined Storage — Why? What? How? Presenter: Tony Pearson. Here Pearson explains why companies are excited about SDS, what storage products and solutions IBM has to offer, and how they are deployed. This session provides an overview of the new IBM Spectrum Storage family of offerings.

A second session by Pearson, (sCV3179) IBM Spectrum Storage Integration in IBM Cloud Manager with OpenStack: IBM's Cloud Storage Options; presenter Tony Pearson. This session will look at the value of IBM storage products in the cloud with a focus on OpenStack. Specifically, it will look at how Spectrum Virtualize can be integrated and used in a complete 3-tier app with OpenStack.

Finally, (sSS2453) Myth Busting Software Defined Storage – Top 7 Misconceptions; presenter Jeffrey Barnett. This session looks at the top misconceptions to cut through the hype and understand the real value potential. DancingDinosaur could only come up with six misconceptions. Will have to check out this session for sure.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter, there also will be a weird but terrific group, 2Cellos. Stick with it to the end (about 3 min.) for the kicker.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM z Systems at Edge2015

April 9, 2015

There are so many interesting z Systems sessions at IBM Edge2015 that DancingDinosaur can’t come close to attending them all or even writing about them.  Edge2015 will be in Las Vegas, May 10-15, at the Venetian, a huge hotel that just happens to have a faux Venice canal running within it (and Vegas is in the desert, remember).

The following offers a brief summation of a few z Systems sessions that jumped out at me. In the coming weeks DancingDinosaur will look at sessions on storage, Power Systems, cross-platform topics, and middleware. IBM bills Edge2015 as the Infrastructure Innovation Conference, so this blog will try at least to touch on bits of all of it. Am including the session numbers and presenters, but please note that sessions and presenters may change.

Radcliffe: mobile as the next evolution. Courtesy of IBM (click to enlarge)

Session zBA1909; Mobile and Analytics Collide – A New Tipping Point; presenter Mark Simmonds

DancingDinosaur started following mobile on z in 2012 and was reporting IBM mobile successes as recently as last month, click here. In this session Simmonds observes organizations being driven to deliver more insight and smarter outcomes in pursuit of increasing revenue and profit while lowering business costs and risks. The ubiquity of mobile devices adds two important dimensions to business analytics, the time and location of customers. Now you have an opportunity to leverage both via the mobile channel but only if your analytics strategy can respond to the demands of the mobile moment. At this session you'll see how customers are using IBM solutions and the z to deliver business critical insight across the mobile community and hear how organizations are setting themselves apart by delivering near real-time analytics.

Session zBA1822; Hadoop and z Systems; presenter Alan Fellwock

DancingDinosaur looked at Hadoop on z as early as 2011. At that point it was mainly an evolving promise. By this past fall it had gotten real, click here. In this session, Fellwock notes that various use cases are emerging that require Hadoop processing in conjunction with z Systems. In one category, the data originates on the z Systems platform itself—this could be semi-structured or unstructured data held in DB2 z/OS, VSAM, or log files in z/OS. In another category, the data originates outside z Systems—this could be social media data, email, machine data, etc.—but needs to be integrated with core data on z Systems. Security and z Systems governance become critical for use cases where data originates on z Systems. There are several z Hadoop approaches available, ranging from Hadoop on Linux to an outboard Hadoop cluster under z governance to a cloud model that integrates with SoftLayer.
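Hadoop itself is a distributed system, but the MapReduce model it implements can be sketched in a few lines of plain Python (a toy illustration with made-up log records, not Hadoop code):

```python
from collections import defaultdict

def map_phase(records):
    """Map step: emit a (word, 1) pair for every word in every record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce step: sum the counts emitted for each distinct key."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

# Toy stand-in for semi-structured z/OS log data
log_records = ["login ok", "login failed", "login ok"]
counts = reduce_phase(map_phase(log_records))
print(counts)  # {'login': 3, 'ok': 2, 'failed': 1}
```

In a real cluster the map and reduce steps run in parallel across many nodes over terabytes of data; the structure of the computation, though, is exactly this.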

Session zAD1876; Bluemix to Mainframe – Making Development Accessible in the Cloud; presenter Rosalind Radcliffe

Cloud capability and technology is changing the way enterprises go to market. DancingDinosaur interviewed Radcliffe for a posting on DevOps for the mainframe in March. DevOps is about bringing the entire organization together, including development and operations, to more efficiently deliver business value be it on premise, off premise, or in a hybrid cloud environment. This session promises to explore how IBM DevOps solutions can transform the enterprise into a high quality application factory by leveraging technology across platforms and exploiting both systems of record and systems of engagement applications. It will show how to easily expose your important data and customer applications to drive innovation in a nimble, responsive way, maintaining the logic and integrity of your time-tested systems.

Session zAD1620; APIs to the Enterprise: Unlocking Mainframe Assets for Mobile and Cloud Applications; presenter Asit Dan

The emergence of APIs has changed how organizations build innovative mobile and web applications, enter new markets, and integrate with cloud and third party applications. DancingDinosaur generally refers to this as the API economy and it will become only more important going forward. IBM z Systems data centers have valuable assets that support core business functions. Now they can leverage these assets by exposing them as APIs for both internal and external consumption. With the help of IBM API Management, these organizations can govern the way APIs are consumed and get detailed analytics on the success of the APIs and applications that are consuming them. This session shows how companies can expose z Systems based functions as APIs creating new business opportunities.
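The pattern the session describes, wrapping an existing core function behind a governed JSON interface, can be sketched in miniature. Every name and request shape below is invented for illustration; this is not IBM API Management code:

```python
import json

# Hypothetical core business function, standing in for a z Systems asset.
def get_account_balance(account_id):
    balances = {"A-100": 2500.0, "A-200": 315.75}  # toy data
    return balances[account_id]

# Minimal API layer: route a JSON request to the core function and keep
# a usage count, the kind of analytics an API management layer reports.
usage_counts = {}

def handle_request(raw_request):
    request = json.loads(raw_request)
    op = request["operation"]
    usage_counts[op] = usage_counts.get(op, 0) + 1
    if op == "balance":
        balance = get_account_balance(request["account_id"])
        return json.dumps({"status": "ok", "balance": balance})
    return json.dumps({"status": "error", "reason": "unknown operation"})

response = handle_request('{"operation": "balance", "account_id": "A-100"}')
print(response)  # {"status": "ok", "balance": 2500.0}
```

A real API management layer adds authentication, rate limiting, and versioning on top, but the core idea is the same: the mainframe asset stays put while a thin, governed interface exposes it to mobile and cloud consumers.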

Session zAD1469; Java 8 on IBM z13 – An Unstoppable Force Meets an Immovable Object; presenter Elton De Souza

What happens when you combine the most powerful commercially available machine on the planet with the latest iteration of the most popular programming language on the planet? An up to 50% throughput improvement for your generic applications and up to 2x throughput improvement for your security-enabled applications – that's what! This session covers innovation and performance of Java 8 and IBM z13. With features such as SMT, SIMD and cryptographic extensions (CPACF) exploitation, IBM z Systems is once again pushing the envelope on Java performance. Java 8 is packed with features such as lambdas and streams, along with improved performance, RAS, and monitoring, continuing a long roadmap of innovation and integration with z Systems. Expect to hear a lot about z13 at Edge2015.

Of course, there is more at Edge2015 than just z Systems sessions. There also is free evening entertainment. This year the headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. Check her out here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM Edge 2015 as All Platforms Infrastructure Innovation Conference

April 2, 2015

Please join DancingDinosaur at IBM Edge2015 at the Venetian in Las Vegas, May 10-15. It will consist of an Executive Edge track and a Technical track. The program is crammed with hundreds of sessions.  You can find the Technical track session list here. Dare you to find even 10 sessions that don’t interest you.


Courtesy of IBM: Marie Wieck, General Manager, Middleware (click to enlarge)

This year Edge2015 merges last year's two events, IBM Edge and IBM Enterprise 2014, into what IBM calls the Infrastructure Innovation Conference. It is IBM's only US event covering all IBM platforms—System Storage, IBM z Systems, IBM Power Systems, and IBM Middleware—at a single venue. It includes three Technical Universities: System Storage, z Systems, and Power Systems, for those working toward certification.

Executive Edge, which this post will look at a little more closely below, offers an exclusive two-day summit for IT executives and leaders featuring, as IBM explains, the latest innovations and technology announcements, client success stories, and insightful presentations from IBM executives and industry thought leaders. Plus, IBM promises top-tier, one-on-one executive meetings and exclusive networking opportunities.

The IBM Systems and Technology Group (STG) top brass will be there. This IBM Systems lineup includes: Tom Rosamilia, Senior Vice President; Stephen Leonard, General Manager, Sales; Jamie M. Thomas, General Manager, IBM Storage & Software Defined Systems; Ross Mauri, General Manager, z Systems; Doug Balog, General Manager, Power Systems; and Marie Wieck, General Manager, Middleware.

And then there is the free entertainment IBM provides. The headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. If you skip the casinos you can catch both and avoid losing money in the process.

With the Executive track IBM promises to present its most innovative approaches to using IBM Systems and Middleware as a foundation for challenging new areas of information management including:

  • Cloud Infrastructure, especially hybrid clouds
  • Big Data and Analytics
  • Systems of Record
  • Systems of Engagement
  • Mobile and Security Services
  • Flash and Storage Virtualization
  • Software Defined Infrastructure

Cloud and big data/analytics have become accepted pillars of IT business value. Mobile, flash, and software-defined infrastructure are being widely embraced as the next wave of IT value delivery. And security must be a priority for everything. Also included will be dozens of client case studies.

Throughout both the Executive and Technology tracks there will be numerous sessions citing client cases and use cases. Although not the same, both show how to actually deploy technology for business value.

For example, the session (cCV0821) titled Be Hybrid or Die revolves around hybrid clouds. The session promises a clear understanding of the term hybrid and why hybrid has become the next step in IT value creation, extraction, and efficiency gains. Citing use cases, the session will show how to map your business needs to the functional building blocks of hybrid clouds and to the IBM products portfolio that addresses those needs. It concludes by examining where IBM is investing, its long term view, and how this will increase your IT flexibility.

Speaking of cases, a session (cIT0514) titled How to Create Rock Solid Business Cases to Get IT Projects Approved looks at the subject of case studies from a standpoint of the IT person charged with answering the unavoidable ROI question. BTW, DancingDinosaur develops business cases based on various looks at cost of ownership should you need help.  They are time consuming but necessary. Management requires an understanding of current IT costs and strengths and the expected payback to better assess new ideas and technologies. This session identifies the key elements of an effective IT cost analysis and explores how to build compelling business cases around those costs and, you hope, quantifiable benefits. Concepts discussed include complexity analysis, platform assessment, Fit for Purpose analysis, and financial case structures. Hmmm, definitely one DancingDinosaur will attend.
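The heart of any such business case is simple arithmetic. A minimal payback-period sketch follows; all of the figures are invented for illustration:

```python
# Toy payback-period calculation for an IT project business case.
# Every figure here is hypothetical.
upfront_cost = 500_000        # hardware, software, and migration
annual_savings = 200_000      # reduced operations and maintenance
annual_new_revenue = 50_000   # value from faster delivery

annual_benefit = annual_savings + annual_new_revenue
payback_years = upfront_cost / annual_benefit
print(f"Payback period: {payback_years:.1f} years")  # 2.0 years
```

A fuller analysis would discount future benefits (NPV) and weigh risk, but even this simple ratio forces the explicit cost and benefit estimates management asks for.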

Another session looks at the first customer experiences using SAP HANA on Power. SAP HANA is the company’s in-memory, column-oriented RDBMS that handles both high volume transactions and complex analytical query processing on the same platform, and does so very fast since all is in-memory. The session, (lBA0464) titled SAP HANA on Power Systems: First Experiences from Customers in the Test and Evaluation Program, reports on the first experiences gathered from the pilot clients. This presentation begins with a short overview of SAP HANA in general, and then covers specific aspects in the deployment of SAP HANA on IBM Power Systems and IBM storage. You’ll hear about the advantages of SAP HANA on Power Systems (vs. x86) and discover how fast and easy it is to implement in a private cloud with full use of PowerVM capabilities.

In about six weeks DancingDinosaur will be heading to IBM Edge2015. Please join me there. You can find me hanging out wherever people gather around available power outlets to recharge mobile devices. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

IBM DevOps for the Mainframe

March 27, 2015

DevOps is not just for distributed platforms. IBM has a DevOps strategy for large enterprises (usually mainframe shops) too. Nationwide, a longtime mainframe shop, is an early adopter of DevOps and already is reporting significant gains: reduction in critical software defects by 80% and a 20% efficiency gain in its maintenance and support operations in just 18 months.

DevOps, an agile methodology, establishes a continuous feedback loop between software development and deployment/operations that speeds development and deployment while ensuring quality. This is a far cry from the waterfall development methodologies of the mainframe past.

DevOps adoption model

Courtesy of IBM (click to enlarge)

The IBM DevOps initiative, announced last November (link above), taps into the collaborative capabilities of IBM’s Cloud portfolio to speed the delivery of software that drives new models of engagement and business. Software has become the rock star of IT with software-driven innovation becoming a primary strategy for creating and delivering new value to customers. A survey of 400 business and IT executives by the IBM Institute for Business Value showed businesses that are more effective at software delivery are also more profitable than their peers nearly 70% of the time. As such, IBM notes, DevOps provides a way for businesses to remain competitive, applying lean and agile principles to software development to speed the delivery of software that meets new market requirements.

Some mainframe shops, however, continue to operate from a software standpoint as if client/server computing and PCs were still the new game in town. Meanwhile the business units keep complaining about how long it takes to make software changes while long backlogs drag on the IT budget.

DevOps is about continuous software development and deployment. That means continuous business planning, continuous collaborative dev, continuous testing, continuous release and deployment, continuous monitoring, and continuous feedback and optimization in a never ending cycle. Basically, continuous everything.  And it really works, as Nationwide can attest.

But DevOps makes traditional mainframe shops nervous. Mainframe applications are rock solid, and crashes and failures are almost unheard of. How can they switch to DevOps without risking everything the mainframe stands for, zero failure?

The answer: mainframe DevOps that leads straight into continuous testing, not deployment. The testing can and should be as rigorous and extensive as necessary to ensure that everything works as it should and that anything that will fail has failed. Only then does it go into production.

It would be comforting to the data centers to say that DevOps only addresses systems of engagement; those pesky mobile, collaborative, and social systems that suddenly are making demands on the core mainframe production applications. But that is not correct. DevOps is about integrating systems of engagement with systems of record, the enterprise's mainframe crown jewels. The trick is to bring together the culture, processes, and tools across the entire software delivery lifecycle, as IBM says, to span it all—mobile to mainframe—slowing down only to conduct testing as exhaustive as the enterprise requires.

Mainframe tools from the era of waterfall methodologies won’t cut it. Rational offers a set of tools starting with Blue Agility. IBM also offers an expanded set of tools acquired through acquisitions such as UrbanCode (release automation) and GreenHat (software quality and testing solutions for the cloud and more) that offer an integrated developer experience on open cloud platforms such as Bluemix to expedite DevOps collaboration, according to IBM.

Expect push back from any attempt to introduce DevOps into a traditional mainframe development culture. Some shops have been developing systems the same way for 30 years or more. Resistance to change is normal. Plan to start gradually, implementing DevOps incrementally.

Some shops, however, may surprise you. Here the mainframe team senses they are falling behind. IBM, of course, has tools to help (see above). Some experts recommend focusing on automated testing early on; when testing is automated DevOps adoption gets easier, they say, and old school developers feel more reassured.
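In practice, automated testing can start as small as executable assertions that run on every change. A minimal sketch using Python's built-in unittest module follows; the discount function is an invented stand-in for real business logic:

```python
import unittest

# Invented stand-in for a small piece of business logic under test.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # In a DevOps pipeline this runs automatically on every commit;
    # exit=False lets the script continue after the test run.
    unittest.main(argv=["discount_tests"], exit=False)
```

Once a suite like this runs on every commit, a failing change is caught in minutes rather than in production, which is exactly the reassurance old school mainframe developers are looking for.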

At IBM Edge2015 there are at least two sessions on DevOps: Light Up Performance of Your LAMP Apps and DevOps with a Power Optimized Stack; and CICS Cloud DevOps = Agility2. BTW, it is a good time to register for IBM Edge2015 right away, while you can still get a discount. IBM Edge2015, being billed as the Infrastructure Innovation Conference, takes place May 11-15 at The Venetian in Las Vegas. DancingDinosaur will be there. Have just started poring over the list of sessions on hundreds of topics for every IBM platform and infrastructure subject. IBM Edge2015 combines what previously had been multiple conferences into one.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

Vodafone Spain Picks IBM zSystem for Smarter Cities Initiative

March 19, 2015

The Vodafone initiative, as reported here previously, leverages the most advanced mobile communications technology including citywide sensors and a Global M2M Platform that will enable the connection of thousands of sensors to the intelligent Vodafone Connected City system. The new cloud-based system will run on IBM Linux z Systems. The Linux z Systems were selected for their high security, which protects cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale.  To do something at scale you really do want the z System.


Courtesy of IBM: zSystem and Linux

For Vodafone this represents the beginning of what it refers to as a Smarter Cities services initiative. The effort targets local governments and city councils with populations ranging between 20,000 and 200,000 citizens. The services provided will address customers' needs in the following key areas: urban vitality, public lighting, energy efficiency, waste management, and citizen communications.

In effect, Vodafone is becoming a SaaS provider by leveraging their new zSystem. Vodafone’s customers for this are the government groups that opt to participate. The company announced the effort at the World Mobile Congress in Barcelona at the beginning of the month.

One of the initial participants will be Seville, the capital of the province of Andalucía, where a control and development center will be established by Vodafone. The telco will invest more than 243 million euros over two years on telecommunications infrastructure, encouraging the development of the technology sector and developing projects to create strategic growth in the region.

Initially, the center will focus on creating smart city solutions that can easily and efficiently be used by cities ranging from 20,000 to 150,000 residents; cities that otherwise may not have the funds to invest in smart city infrastructure projects on their own. This center is also expected to help make the Andalucía territory of Spain a leader in the development of Big Data and smart solutions.

IBM is delivering the full stack to Vodafone: a set of cloud services that include an enterprise zSystem Linux server (IBM zBC12), v7000 storage, IBM Intelligent Operations, an information services solution, and more. Vodafone opted for the z and Linux to enable cost-efficient, highly secure cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale. IBM Intelligent Operations software will provide monitoring and management of city services. IBM's MobileFirst platform will be used to create citizen-facing mobile applications, while IBM Information Server and Maximo asset management software will round out the IBM stack.

Overall, IBM, the zSystem, and Linux brought a number of benefits to this initiative. Specifically, the zSystem proved the least expensive when running more than seven vertical services as Vodafone is planning. An example of such a vertical service is the public lighting of a city. This also is where scalability brings a big advantage. Here again, the zSystem running Linux delivers scalability along with greater security and regulatory compliance. Finally, another critical capability for Vodafone was the zSystem’s ability to isolate workloads.

In short, the zSystem's security and regulatory compliance; reliability, resilience, and robustness; strong encryption and workload isolation; workload management and ability to meet SLAs; scalability; and high efficiency clinched the Vodafone deal.

This could prove a big win for IBM and the zSystem. Vodafone has mobile operations in 26 countries, partners with mobile networks in 54 more, and runs fixed broadband operations in 17 markets. As of the end of 2014, Vodafone had 444 million mobile customers and 11.8 million fixed broadband customers. Vodafone Spain's 14,811,000 mobile customers and 2,776,000 broadband ones will certainly take maximum advantage of the zSystem's scalability and reliability.

As a follow-up to last week's report on recent successes coming from the OpenPOWER Foundation, the string continued this week at the OpenPOWER Inaugural Summit, with the OpenPOWER Foundation announcing more than ten hardware solutions spanning systems, boards, cards, and a new microprocessor customized for the Chinese market. Built collaboratively by OpenPOWER members, the new solutions exploit the POWER architecture to provide more choice, customization, and performance to customers, including hyperscale data centers.

Among the products and prototypes OpenPOWER members revealed are:

  • Firestone, a prototype of a new high-performance server targeting exascale computing and projected to be 5-10x faster than today's supercomputers. It incorporates technology from NVIDIA and Mellanox.
  • The first GPU-accelerated OpenPOWER developer platform, the Cirrascale RM4950, resulting from collaboration between NVIDIA, Tyan, and Cirrascale.
  • An open server specification and motherboard mock-up combining OpenPOWER, Open Compute and OpenStack by Rackspace and designed to run OpenStack services.

Other member-developed new products leverage the Coherent Accelerator Processor Interface (CAPI), a hallmark feature built into the POWER architecture. DancingDinosaur initially covered CAPI here.

Reminder: it is time to register for IBM Edge2015 in Las Vegas May 10-15. Edge2015 combines all of IBM’s infrastructure products with both a technical track and an executive track.  You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

OpenPOWER Starts Delivering the Goods

March 13, 2015

Are you leery of multi-vendor consortiums? DancingDinosaur as a rule is skeptical of the grand promises they make until they actually start delivering results. That was the case with OpenPOWER last spring, when you read here that the OpenPOWER Foundation was introduced and almost immediately forgotten.


IBM POWER8 processor, courtesy of IBM (click to enlarge)

But then last fall DancingDinosaur reported on NVIDIA and its new GPU accelerator integrated directly into the server here. This too was an OpenPOWER Foundation-based initiative. Suddenly, DancingDinosaur is thinking the OpenPOWER Foundation might actually produce results.

For example, IBM introduced a new range of systems capable of handling massive amounts of computational data faster, at nearly 20 percent better price/performance than comparable Intel Xeon v3 processor-based systems. The result: a superior alternative to closed, commodity-based data center servers. Better performance at a lower price. What's not to like?

The first place you probably want to apply this improved price/performance is big data, which generates 2.5 quintillion bytes of data across the planet every day. Even the minuscule portion of this amount that you actually generate will very quickly challenge your organization to build a sufficiently powerful technology infrastructure to gain actionable insights from this data fast enough and at a price you can afford.

The commodity x86 servers used today by most organizations are built on proprietary Intel processor technology and are increasingly stretched to their limits by workloads related to big data, cloud and mobile. By contrast, IBM is designing a new data centric approach to systems that leverages the building blocks of the OpenPOWER Foundation.

This is plausible given the success of NVIDIA with its GPU accelerator. And just this past week Altera demonstrated its OpenPOWER-based FPGA, now being used by several other Foundation members who are collaborating to develop high-performance compute solutions that integrate IBM POWER chips with Altera’s FPGA-based acceleration technologies.

Formed in late 2013, the OpenPOWER Foundation has grown quickly from 5 founders to over 100 today. All are collaborating in various ways to leverage the IBM POWER processor’s open architecture for broad industry innovation.

IBM is looking to offer the POWER8 core and other future cores under the OpenPOWER initiative but they are also making previous designs available for licensing. Partners are required to contribute intellectual property to the OpenPOWER Foundation to be able to gain high level status. The earliest successes have been around accelerators and such, some based on POWER8's CAPI (Coherent Accelerator Processor Interface) expansion bus, built specifically to integrate easily with external coprocessors like GPUs, ASICs, and FPGAs. DancingDinosaur will know the OpenPOWER Foundation is truly on the path to acceptance when a member introduces a non-IBM POWER8 server. Have been told that may happen in 2015.

In the meantime, IBM itself is capitalizing on the OpenPOWER Foundation. Its new IBM Power S824L servers are built on IBM’s POWER8 processor and tightly integrate other OpenPOWER technologies, including NVIDIA’s GPU accelerator. Built on the OpenPOWER stack, the Power S824L provides organizations the ability to run data-intensive tasks on the POWER8 processor while offloading other compute-intensive workloads to GPU accelerators, which are capable of running millions of data computations in parallel and are designed to significantly speed up compute-intensive applications.

Further leveraging the OpenPOWER Foundation, at the start of March IBM announced that SoftLayer will offer OpenPOWER servers as part of its portfolio of cloud services. Organizations will then be able to select OpenPOWER bare metal servers when configuring their cloud-based IT infrastructure from SoftLayer, an IBM company. The servers were developed to help organizations better manage data-intensive workloads on public and private clouds, effectively extending their existing infrastructure inexpensively and quickly. This is possible because OpenPOWER servers leverage IBM’s licensable POWER processor technology and feature innovations resulting from open collaboration among OpenPOWER Foundation members.

Due in the second quarter, the SoftLayer bare metal servers run Linux applications and are based on the IBM POWER8 architecture. The offering, according to IBM, also will leverage the rapidly expanding community of developers contributing to the POWER ecosystem as well as independent software vendors that support Linux on Power and are migrating applications from x86 to the POWER architecture. Built on open technology standards that begin at the chip level, the new bare metal servers are built to assist a wide range of businesses interested in building custom hybrid, private, and public cloud solutions based on open technology.

BTW, it is time to register for IBM Edge2015 in Las Vegas May 10-15. Edge2015 combines all of IBM’s infrastructure products with both a technical track and an executive track. You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer.

Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Recent IBM Successes Target Mobile

March 6, 2015

IBM introduced its latest set of MobileFirst for iOS apps at Mobile World Congress, held this week in Barcelona. Not coincidentally, Vodafone Spain, along with IBM, announced its Connected City Initiative to help Spanish cities drive new efficiency. BTW, IBM launched its first set of iOS business apps this past December.

Courtesy of IBM: retailers gain real-time perspective and data-driven recommendations (click to enlarge)

The Vodafone initiative, according to the company, leverages the most advanced mobile communications technology including citywide sensors and a Global M2M Platform that will enable the connection of thousands of sensors with the intelligent Vodafone Connected City system. The new cloud-based system will run on IBM Linux z Systems. The Linux z Systems were selected for their high security, which enables cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale.  To do something at scale you want the z System.

But more interesting to DancingDinosaur readers may be the latest set of MobileFirst for iOS apps that target banking and financial services, airlines, and retail. These are strong areas of interest at IBM z Systems shops. At the announcement, more than 50 foundational clients including Air Canada, American Eagle Outfitters, Banorte, Boots UK, Citi, and Sprint have signed on for apps that deliver the complete expertise and capabilities of those companies to their employees wherever they interact with clients and do it faster, more easily, and more securely than ever before via the Apple iPhone and iPad.

These apps promise to transform the job roles and effectiveness of hundreds of millions of professionals globally. For example, Boots UK staff will gain access to real-time data and insight from across the company through their iOS device, allowing them to offer shoppers even greater levels of service, such as real-time stock availability and easy in store ordering. The result: new levels of convenience and accessibility for Boots customers.

Specifically, the latest IBM MobileFirst for iOS applications address:

  • Passenger Care (Travel)—empowers customer service agents to address traveler needs from anywhere, enabling a smoother, more personalized experience while speeding check-in and easing airport congestion.
  • Dynamic Buy (Retail)—gives retailers a real-time perspective on how products are performing, plus data-driven recommendations that help them realize a better return on investment.
  • Advisor Alerts (Banking and Financial Services)—uses analytics to help financial professionals prioritize client-related tasks on the go, backed by customized analytics that tell the advisor what’s most important through a personalized dashboard displaying recommended next steps.

Courtesy of IBM: analytics for financial services staff (click to enlarge)

The new apps are designed to fundamentally redefine the ways enterprises empower their professionals to interact, learn, connect, and perform. Built exclusively for iPhone and iPad, IBM MobileFirst for iOS apps are delivered in a secure environment, embedded with analytics, and linked to core enterprise processes. The apps can be customized for any organization and easily deployed, managed and upgraded via cloud services from IBM specifically for iOS devices.

Making this happen behind the scenes at many organizations will be IBM z System-based data and logic, including not only CICS but also Hadoop and other analytics running on the z, increasingly in real time. Of course, it all revolves around IBM’s MobileFirst, which allows enterprises to streamline and accelerate mobile adoption. In addition to the IBM-Apple iOS applications organizations can develop their own mobile apps using streamlined tool sets like IBM Bluemix.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

BTW, it is time to register for IBM Edge2015 in Las Vegas May 10-15. Edge2015 combines all of IBM’s infrastructure products with both a technical track and an executive track.  You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

IBM Redefines Software Defined Storage

February 25, 2015

On Feb. 17 IBM unveiled IBM Spectrum Storage, a new storage software portfolio designed to address data storage inefficiencies by changing the economics of storage with a layer of intelligent software; in short, a software defined storage (SDS) initiative.  IBM’s new software creates an efficient data footprint that dynamically stores every bit of data at the optimal cost, helping maximize performance and ensuring security, according to the IBM announcement here.

Jared Lazarus/Feature Photo Service for IBM

Courtesy of IBM: IBM Storage GM demonstrates new Spectrum storage management dashboard

To accelerate the development of next-generation storage software, IBM included plans to invest more than $1 billion in its storage software portfolio over the next five years. The objective: extend its storage technology leadership, having recently been ranked #1 in SDS platforms for the first three quarters of 2014 by leading industry analyst firm IDC. The investment will focus on R&D of new cloud storage software, object storage, and open standard technologies including OpenStack.

“Traditional storage is inefficient in today’s world where the value of each piece of data is changing all the time,” according to Tom Rosamilia, Senior Vice President, IBM Systems, in the announcement. He went on: “IBM is revolutionizing storage with our Spectrum Storage software that helps clients to more efficiently leverage their hardware investments to extract the full business value of data.”

Two days later IBM announced another storage initiative, flash products aimed directly at EMC. The announcement focused on two new all-flash enterprise storage solutions, FlashSystem V9000 and FlashSystem 900. Each promises industry-leading performance and efficiency, along with outstanding reliability to help lower costs and accelerate data-intensive applications. The new solutions can provide real-time analytical insights with up to 50x better performance than traditional enterprise storage, and up to 4x better capacity in less rack space than EMC XtremIO flash technology.

Driving interest in IBM Spectrum Storage is research suggesting that less than 50% of storage is effectively utilized. Storage silos continue to be rampant throughout the enterprise as companies recreate islands of Hadoop-based data along with more islands of storage to support ad hoc cloud usage. Developers create yet more data silos for dev, testing, and deployment.

IBM Spectrum Storage addresses these issues and more through an SDS approach that separates storage capabilities and intelligence from the physical devices. The resulting storage is self-tuning and leverages analytics for efficiency, automation, and optimization. By capitalizing on its automatic data placement capabilities, IBM reports it can meet service levels while reducing storage costs by as much as 90%.
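
The automatic data placement idea reduces to a simple policy decision. Here is a minimal sketch, assuming a made-up three-tier cost/latency table; the figures and the policy are purely illustrative, not Spectrum’s actual algorithm: each dataset lands on the cheapest tier that still meets its latency requirement.

```python
# Hypothetical sketch of policy-driven data placement -- not IBM Spectrum's
# actual algorithm. Tier latencies and costs below are illustrative only.

TIERS = [  # (name, latency_ms, cost_per_gb_month)
    ("flash",  0.1, 0.50),
    ("disk",   5.0, 0.10),
    ("tape", 30000, 0.01),
]

def place(datasets):
    """Map each dataset to the cheapest tier meeting its latency requirement."""
    placement = {}
    for name, required_latency_ms in datasets:
        # Tiers fast enough for this dataset's service level...
        eligible = [t for t in TIERS if t[1] <= required_latency_ms]
        # ...of which we pick the cheapest per GB-month.
        placement[name] = min(eligible, key=lambda t: t[2])[0]
    return placement

demand = [("trading-db", 1.0), ("home-dirs", 50.0), ("archive", 86_400_000)]
print(place(demand))
# {'trading-db': 'flash', 'home-dirs': 'disk', 'archive': 'tape'}
```

The real product folds in access frequency, migration costs, and capacity limits, but the economics are the same: data that tolerates latency should never occupy flash.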

Specifically, IBM Spectrum consists of six storage software elements:

  1. IBM Spectrum Control—analytics-driven data management to reduce costs by up to 50%
  2. IBM Spectrum Protect—optimize data protection to reduce backup costs by up to 38%
  3. IBM Spectrum Archive—fast data retention that reduces TCO for archive data by up to 90%
  4. IBM Spectrum Virtualize—virtualization of mixed environment to store up to 5x more data
  5. IBM Spectrum Accelerate—enterprise storage for cloud, which can be deployed in minutes instead of months
  6. IBM Spectrum Scale—high-performance, highly scalable storage for unstructured data

Each of these elements can be mapped back to existing IBM storage solutions.  Spectrum Accelerate, for example, uses IBM’s XIV capabilities. Spectrum Virtualize is based on IBM’s SAN Volume Controller (SVC) technology. Spectrum Scale is based on GPFS, now called Elastic Storage, to handle file and object storage at massive scale yet within a single global name space.  Spectrum Archive, based on IBM’s LTFS, allows an organization to treat tape as a low cost, fully active tier.  In effect, with IBM Spectrum, an organization can go from flash cache to tape, all synced worldwide within a single name space.

A big part of what IBM is doing amounts to repackaging the capabilities it has built into its storage systems and proven in various products like XIV or GPFS or SVC as software components to be used as part of an SDS deployment. This raises some interesting possibilities. For instance, is it cheaper to use Spectrum Accelerate with a commodity storage array or buy the conventional XIV storage product?  The same probably could be asked of Spectrum Virtualize with SVC or Spectrum Archive with LTFS.

DancingDinosaur asked the Spectrum marketing team exactly that question.  Their response: With Accelerate you have the flexibility to size the server to the performance needs of the solution; the software cost remains the same regardless of the server you select, while the cost of the server will vary depending on what the client needs. We will make available a sizing guide soon so each client’s situation can be modeled based on the solution requirements. In all cases it really depends on the hardware chosen vs. the (IBM) appliance. If the hardware closely matches the hardware of the appliance then cost differences will be minimal. It all depends on the price the client gets, so yes, in theory, a white box may be lower cost.

With Spectrum Accelerate (XIV), IBM continues, the client can also deploy the software on a cluster of just 3 servers (minimum) and leverage existing Ethernet networking.  This minimum configuration will be much lower cost than the minimum XIV system configuration cost. Spectrum Accelerate can also be licensed on a monthly basis, so clients with variable needs, or those deploying to the cloud, can deploy and pay for only what they need when they need it.

It is a little different for the other Spectrum offerings. DancingDinosaur will continue chasing down those details. Stay tuned. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. Follow more of his IT writing on Technologywriter.com and here.

BMC and Compuware to Drive Down Mainframe Costs

February 20, 2015

This year jumped off to an active start for the mainframe community. The introduction of the z13 in January got things going. Now Compuware and BMC are partnering to integrate offerings of some of their mainframe tools to deliver cost-aware workload and performance management. The combined tools promise to reduce mainframe OPEX even as z Systems shops try to leverage their high-value mainframe applications, data, and processing capacity to meet rapidly evolving business challenges.

Not that things had been quiet before, especially if you consider IBM scrambling to reverse successive quarters of poor financial performance with a slew of initiatives. During that time Compuware went private last fall; about a year earlier BMC went private. Now you have two companies collaborating to deliver tools that will help mainframe shops reduce their software costs. DancingDinosaur has covered previous cost-saving and efficiency initiatives from each of these companies here and here.

Driving this collaboration is the incessant growth of new mainframe workloads, which will likely accelerate with the new z13. Such workload growth is continually driving up the Monthly License Charge (MLC) for IBM mainframe software, which for sub-capacity environments is generally driven by the highest rolling four-hour average (R4HA) of mainframe utilization across all applications on each LPAR, as measured in MSUs. IBM is helping with discounts for mobile workloads and its new ICAP and country multiplex pricing, which DancingDinosaur covered here, but more is needed.

The trick requires continually managing those workloads. In effect, IT can most effectively reduce its sizable IBM z Systems software costs by both 1) tuning each application to minimize its individual consumption of mainframe resources and 2) orchestrating application workloads to minimize the LPAR utilization peaks they generate collectively at any given time. A good idea, but not easy to implement in practice; you need automated tools.
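
The R4HA arithmetic can be made concrete with a small sketch. The following is a hypothetical illustration, not IBM’s actual sub-capacity billing logic: given hourly MSU samples for an LPAR, the billing-relevant figure is the peak of the rolling four-hour average, which is why flattening peaks, rather than reducing totals, is what cuts the MLC.

```python
# Hypothetical sketch of sub-capacity R4HA math -- not IBM's actual billing
# code. Sub-capacity MLC is generally driven by the peak rolling four-hour
# average (R4HA) of MSU utilization, not by total consumption.

def rolling_four_hour_averages(msu_samples):
    """Rolling 4-hour average over hourly MSU samples (one value per window)."""
    window = 4
    return [
        sum(msu_samples[i:i + window]) / window
        for i in range(len(msu_samples) - window + 1)
    ]

def peak_r4ha(msu_samples):
    """The billing-relevant figure: the highest rolling 4-hour average."""
    return max(rolling_four_hour_averages(msu_samples))

# Two workloads with identical total MSU consumption (800 MSU-hours each):
flat  = [100, 100, 100, 100, 100, 100, 100, 100]
spiky = [40, 40, 40, 280, 280, 40, 40, 40]

print(peak_r4ha(flat))   # 100.0
print(peak_r4ha(spiky))  # 160.0 -- the spiky LPAR pays for its peaks
```

This is exactly the leverage the BMC/Compuware tooling aims at: shifting or tuning work so the collective peak drops, even when total work stays the same.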

According to Frank DeSalvo, former research director at Gartner: “The partnership between BMC and Compuware launches an integrated opportunity for mainframe customers to manage workload inefficiencies in a manner that has not been achievable to date.” This partnership, however, “helps organizations leverage their IT budgets by enabling them to continuously optimize their mainframe workloads, resulting in cost effective decisions for both current and future spending,” as DeSalvo was quoted in the initial announcement.

Specifically, the Compuware-BMC collaboration brings together three products: BMC Cost Analyzer, BMC MainView, and Compuware Strobe.

  • BMC Cost Analyzer for zEnterprise brings a financially intelligent workload management tool that enables z data centers to identify MLC cost drivers and take appropriate measures to reduce those costs.
  • BMC MainView provides real-time identification of application performance issues, enabling customers to quickly eliminate wasteful MSU consumption.
  • Compuware Strobe delivers deep, granular and highly actionable insight into the behavior of application code in the z systems environment.

The partners integrated the products so they actually work together. One integration, for instance, allows BMC Cost Analyzer to call Compuware Strobe for a detailed analysis of the specific application component driving peak MLC periods, enabling customers to proactively tune applications that have the greatest impact on their monthly software licensing costs. A second integration with BMC MainView allows customers to either automatically or manually invoke Strobe performance analysis—empowering mainframe staffs to perform cost-saving tuning tasks more quickly, efficiently, and consistently.

Courtesy of Compuware (click to enlarge)

BTW, at the same time Compuware introduced the latest version of Strobe, v 5.2. It promises deep insight into how application code—including DB2, COBOL 5.1, IMS, and MQ processes—consumes resources in z environments. By providing these insights while making it easy for multi-discipline mainframe ops teams to collaborate around them, Strobe 5.2 enables IT to further drive down mainframe costs. At the same time it improves application responsiveness.

Besides the software licensing savings that can result, the organization also benefits from performance gains for these applications. These too can be valuable since they positively impact end-user productivity and, more importantly, customer experience.

DancingDinosaur feels that any technology you can use to automate and streamline your systems operations will benefit you because people are always more expensive and less efficient than technology.

Alan Radding is DancingDinosaur. Follow this blog on Twitter, @mainframeblog. View my other IT writing at Technologywriter.com and here.

IBM z Systems as a Cloud Platform

February 13, 2015

DancingDinosaur wrote a long paper for an audience of x86 users. The premise of the paper: the z Systems in many cases could be a better and even lower cost alternative to x86 for a private or hybrid cloud. The following is an excerpt from that paper.

BTW, IBM earlier this month announced it signed a 10-year, large-scale services agreement with Shop Direct to move the multi-brand digital retailer to a hybrid cloud model to increase flexibility and quickly respond to changes in demand as it grows, one of many such IBM wins recently. The announcement never mentioned Shop Direct’s previous platform. But it or any company in a similar position could have opted to build its own hybrid (private/public) cloud platform.

A hybrid cloud a company builds today probably runs on the x86 platform and the Windows OS. Other x86-based clouds run Linux. As demand for the organization’s hybrid cloud grows and new capabilities are added, traffic increases.  The conventional response is to scale out or scale up, adding more or faster x86 processors to handle more workloads for more users.

So, why not opt for a hybrid cloud running on the z? As a platform, x86 is far from perfect; too unstable and insecure for starters. By adopting a zEC12 or a z13 to host your hybrid cloud, you get one of the fastest general commercial processors in the market and the highest security rating for commercial servers (EAL 5+). But most x86-oriented data centers would balk. Way too expensive would be their initial reaction. Even if they took a moment to look at the numbers, their IT staff would be in open revolt and give you every reason it couldn’t work.

The x86 platform, however, is not nearly as inexpensive as is often believed, and there are many ways to make the z cost competitive. Due to the eccentricities of Oracle licensing on z Systems, for instance, organizations often can justify the entire cost of the mainframe just from the annual Oracle software license savings. This can amount to hundreds of thousands of dollars or more each year. And the entry-level mainframe has a list price of $75,000, not much more than an x86 system of comparable MIPS. And that’s before you start calculating the cost of x86 redundancy, failover, and zero downtime that comes built into the mainframe, or consider security. Plus, with the z Systems Solution Edition program, IBM is almost giving the mainframe away for free.
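
The Oracle licensing argument is back-of-envelope arithmetic. The sketch below uses hypothetical figures; the core factor, list price, support rate, and consolidation ratio all vary by contract and workload, so treat it as a shape of the calculation rather than a quote. Because Oracle charges per licensable core, consolidating a large x86 estate onto a handful of mainframe cores can shrink the annual bill dramatically.

```python
# Back-of-envelope sketch of the Oracle-on-z licensing argument.
# All figures are hypothetical; actual core factors, list prices, support
# rates, and consolidation ratios vary by contract and workload.

def annual_oracle_support(cores, core_factor, license_list_price,
                          support_rate=0.22):
    """Annual support cost: licensable cores x list price x support rate."""
    licensable_cores = cores * core_factor
    return licensable_cores * license_list_price * support_rate

# Say 96 x86 cores consolidate onto 8 mainframe cores (illustrative ratio):
x86_cost = annual_oracle_support(cores=96, core_factor=0.5,
                                 license_list_price=47_500)
z_cost = annual_oracle_support(cores=8, core_factor=1.0,
                               license_list_price=47_500)

savings = x86_cost - z_cost
print(f"x86: ${x86_cost:,.0f}  z: ${z_cost:,.0f}  savings: ${savings:,.0f}/yr")
```

With these made-up inputs the annual savings land in the hundreds of thousands of dollars, which is the order of magnitude the argument above depends on; plug in your own contract numbers before drawing conclusions.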

Some x86 shops could think of the mainframe as a potent Linux machine that can handle thousands of Linux instances without breaking a sweat. The staff wouldn’t even have to touch z/OS. It also runs Java and Hadoop. And it delivers an astonishingly fast and efficient Linux environment that provides a level of performance a much greater number of x86 cores would be required to match. And if you want to host an on-premises or hybrid cloud at enterprise scale, it takes a lot of cores. The cost of acquiring all those x86 cores, deploying them, and managing them will break almost any budget.

Just ask Jim Tussing, Chief Technology Officer for infrastructure and operations at Nationwide Insurance (DancingDinosaur has covered Tussing before): “We had literally 3000 x86 servers deployed that were underutilized,” which is common in the x86 environment even with VMware or Hyper-V virtualization. At a time when Nationwide was seeking to increase the pace of innovation across its products and channels, rolling out new environments was taking weeks or months to provision and deploy, again not unheard of in the x86 world. The x86 environment was choking the company.

So, Nationwide consolidated and virtualized as many x86 servers on a mainframe as possible, creating what amounted to an on-premises and hybrid cloud. The payoff: Nationwide reduced power, cooling, and floor space requirements by 80 percent. And it finally reversed the spiraling expenditure on its distributed server landscape, saving an estimated $15 million over the first three years, money it could redirect into innovation and new products. It also could provision new virtual server instances fast and tap the hybrid cloud for new capabilities.

None of this should be news to readers of DancingDinosaur. However, some mainframe shops still face organizational resistance to mainframe computing. Hope this might help reinforce the z case.

DancingDinosaur is Alan Radding, a long-time IT analyst/writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of my IT writing at Technologywriter.com and here.

