Archive for the ‘Uncategorized’ Category

IBM LinuxONE and Open Mainframe Project Expand the z System

August 20, 2015

Meet the new IBM z System: LinuxONE Emperor (named after the emperor penguin). It is a z13 running only Linux. Check out the full announcement here.


Courtesy of IBM, LinuxONE Emperor, the newest z System

DancingDinosaur is excited by several aspects of this announcement:  IBM is establishing, in conjunction with the Linux Foundation, an Open Mainframe Project; the company is breaking with its traditional mainframe pricing model; it also is putting KVM and Ubuntu on the machine; and it is offering a smorgasbord of app-dev options, including some of the sexiest in the industry today. DancingDinosaur never believed it would refer to a mainframe as sexy (must be time to retire).

Along with LinuxONE Emperor, IBM announced an entry-level dedicated Linux machine, the LinuxONE Rockhopper. (BTW, notice the new playfulness in IBM’s product naming.) Rockhopper appears to be very similar to what IBM used to call a Business Class z, although IBM has stepped away from that designation; the closest you may get to a z13 business class machine may be LinuxONE Rockhopper. Rockhopper, according to IBM, is designed for clients and emerging markets seeking the speed, security, and availability of the mainframe but in a smaller package.

The biggest long-term potential impact from the announcement may come out of the Open Mainframe Project. Like many of IBM’s community project initiatives, IBM is starting by seeding the open community with z code, in effect creating the beginning of an open z System machine. IBM describes this as the largest single contribution of mainframe code from IBM to the open source community. A key part of the mainframe code contributions will be the z’s IT predictive analytics that constantly monitor for unusual system behavior and help prevent issues from turning into failures. In effect, IBM is handing over zAware to the open source community. It had already announced intentions to port zAware to Linux on z earlier this year, so it might as well make it fully open. The code, notes IBM, can be used by developers to build similar sense-and-respond resiliency capabilities for other systems.

The Open Mainframe Project, being formed with the Linux Foundation, will involve a collaboration of nearly a dozen organizations across academia, government, and corporate sectors to advance development and adoption of Linux on the mainframe. It appears that most of the big mainframe ISVs have already signed on. DancingDinosaur, however, expressed concern that this approach brings the possibility of branching the underlying functionality between z and Linux versions. IBM insists that won’t happen since the innovations would be implemented at the software level, safely insulated from the hardware. Furthermore, should an innovation emerge that makes sense for the z System, perhaps something around the zAware capabilities, the company is prepared to bring it back to the core z.

The newly announced pricing should also present an interesting opportunity for shops running Linux on z. As IBM notes: new financing models for the LinuxONE portfolio provide flexibility in pricing and resources that allow enterprises to pay for what they use and scale up quickly when their business grows. Specifically, for IBM hardware and software, the company is offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores; in that case you can order what you need and decrease licenses or cancel on 30 days’ notice. Or you can rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (the hardware can be returned after one year) you choose to return, buy, or replace. Having spent hours attending mainframe pricing sessions at numerous IBM conferences, this seems refreshingly straightforward. IBM has not yet provided any prices to analysts, so whether this actually is a bargain remains to be seen. But at least you have pricing option flexibility you never had before.
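For a sense of how the two options differ, here is a minimal sketch with entirely made-up rates (remember, IBM had not yet published prices): pay-per-use flexes with the month's usage, while per-core licensing stays flat regardless of how busy the machine was.

```python
# Hypothetical sketch of the two LinuxONE pricing options described above.
# All rates are illustrative placeholders, not IBM list prices.

def pay_per_use_cost(base_fee, rate_per_unit, units_used):
    """Fixed monthly payment plus a usage component that scales up or down."""
    return base_fee + rate_per_unit * units_used

def per_core_cost(cores_licensed, price_per_core):
    """Per-core pricing: licenses tied to designated cores, cancelable on 30 days' notice."""
    return cores_licensed * price_per_core

# A quiet month vs. a busy month under pay-per-use:
quiet = pay_per_use_cost(base_fee=10_000, rate_per_unit=2.0, units_used=5_000)
busy  = pay_per_use_cost(base_fee=10_000, rate_per_unit=2.0, units_used=25_000)

# The same hypothetical shop under per-core pricing pays the same either month:
flat = per_core_cost(cores_licensed=8, price_per_core=6_000)

print(quiet, busy, flat)  # 20000.0 60000.0 48000
```

Which option wins depends entirely on how spiky your workload is, which is presumably why IBM is offering both.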

The introduction of support for both KVM and Ubuntu on the z platform opens intriguing possibilities. Full disclosure: DancingDinosaur was an early Fedora adopter because he could get it to run on a memory-challenged antiquated laptop. With the LinuxONE announcement Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make either LinuxONE machine very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs. Depending on how IBM ultimately prices things, this could turn into an unexpected bargain for Linux on z data centers that want to save money by consolidating x86 Linux servers, thereby reducing the data center footprint and cutting energy costs. LinuxONE Emperor can handle up to 8,000 virtual servers in a single system, or tens of thousands of containers.

Finally, LinuxONE can run the sexiest app-dev tools using any of the hottest open technologies, specifically:

  • Distributions: Red Hat, SUSE, and Ubuntu
  • Hypervisors: PR/SM, z/VM, and KVM
  • Languages: Python, Perl, Ruby, Rails, Erlang, Java, Node.js
  • Management: WAVE, IBM Cloud Manager, UrbanCode, OpenStack, Docker, Chef, Puppet, VMware vRealize Automation
  • Database: Oracle, DB2 LUW, MariaDB, MongoDB, PostgreSQL
  • Analytics: Hadoop, BigInsights, DB2 BLU, and Spark

And run the results however you want: single platform, multi-platform, on-prem and off-prem, or multiple mixed cloud environments with a common toolset. Could a combination of LinuxONE alongside a conventional z13 be the mainframe data center you really want going forward?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Ranked #1 in Midrange Servers and Enterprise Network Storage

August 13, 2015

Although the financial markets may be beating up IBM, the technology world continues to acclaim IBM technology and products. Most recently, IBM ranked on top in the CRN Annual Report Card (ARC) Survey recognizing the best-in-class vendors in the categories of partnership, support, and product innovation. But the accolades don’t stop there.

Mobile Security Infographic

Courtesy of IBM (click to enlarge)

IBM was named a leader in four key cloud services categories—hosting, overall cloud professional services, cloud consulting services, and systems integration—by the independent technology market research firm Technology Business Research, Inc. (TBR). This summer Gartner named IBM a leader in Security Information and Event Management (SIEM) in its latest Magic Quadrant for SIEM, for the seventh consecutive year. Gartner also named IBM a Leader in the 2015 Magic Quadrant for Mobile Application Development Platforms, specifically calling out the IBM MobileFirst Platform.

The CRN award addresses the technology channel. According to IBM, the company and its business partners are engaging with clients in new ways to work, building the infrastructure, and deploying innovative solutions for the digital era. This should come as no surprise to anyone reading this blog; the z13 was designed expressly to be a digital platform for the cloud, mobile, and big data era. IBM’s z and Power Systems servers and storage solutions specifically were designed to address the challenges these areas present.

Along the same lines, IBM’s commitment to open alliances has continued this year unabated, starting with its focus on innovation platforms designed for big data and superior cloud economics, which continue to be the cornerstone of IBM Power Systems. The company also plays a leading role in the Open Power Foundation and the Linux Foundation, as well as ramping up communities around the Internet of Things (developerWorks Recipes) and the open cloud (developerWorks Open). The last two were topics DancingDinosaur tackled recently, here and here.

The TBR report, entitled Hosted Private & Professional Services Cloud Benchmark, provides a market synopsis and growth estimates for 29 cloud providers in the first quarter of 2015. In that report, TBR cited IBM as:

  • The undisputed growth leader in overall professional cloud services
  • The leader in hosted private cloud and managed cloud services
  • A leader in OpenStack vendor acquisitions and OpenStack cloud initiatives
  • A growth leader in cloud consulting services, bridging the gap between technology and strategy consulting
  • A growth leader in cloud systems integration services

According to the report: IBM’s leading position across all categories remains unchallenged as the company’s established SoftLayer and Bluemix portfolios, coupled with in-house cloud and solutions integration expertise, provide enterprises with end-to-end solutions.

Wall Street analysts and pundits clearly look at IBM differently than IT analysts.  The folks who look at IBM’s technology, strategy, and services, like those at Gartner, TBR, and the CRN report card, tell a different story. Who do you think has it right?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Simplifies Internet of Things with developerWorks Recipes

August 6, 2015

IBM has a penchant for working through communities going back as far as Eclipse and probably before. Last week DancingDinosaur looked at the developerWorks Open community. Now let’s look at the IBM’s developerWorks Recipes community intended to address the Internet of Things (IoT).


TI SensorTag

The Recipes community will try to help developers, from novice to experienced, quickly and easily learn how to connect IoT devices to the cloud and how to use data coming from those connected devices. For example, one recipe walks you through connecting the TI SimpleLink SensorTag (pictured above) to the IBM IoT Foundation service in a few simple steps. By following these steps a developer, according to IBM, should be able to connect the SensorTag to the IBM quickstart cloud service in less than 3 minutes. Think of recipes as simplified development patterns—so simple that almost anyone can follow them. (Wanted to try it myself but didn’t have a tag. Still, it looked straightforward enough.)
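For the curious, what such a recipe boils down to is publishing a small JSON event over MQTT to IBM's quickstart service. The sketch below only constructs the client ID, topic, and payload; the host name, topic string, and ID format follow the quickstart conventions as best I recall them, so treat them as assumptions, and an MQTT client library such as paho-mqtt would handle the actual network publish.

```python
import json

# Assumed quickstart endpoint -- verify against the recipe before relying on it.
QUICKSTART_HOST = "quickstart.messaging.internetofthings.ibmcloud.com"

def quickstart_message(device_type, device_id, readings):
    """Build the MQTT pieces a device would publish to the quickstart service."""
    client_id = f"d:quickstart:{device_type}:{device_id}"  # device-style client ID (assumed format)
    topic = "iot-2/evt/status/fmt/json"                    # 'status' event, JSON format (assumed)
    payload = json.dumps({"d": readings})                  # readings wrapped in a "d" envelope
    return client_id, topic, payload

cid, topic, payload = quickstart_message(
    "sensortag", "aa:bb:cc:dd:ee:ff", {"temp": 22.5, "humidity": 41})
print(cid, topic, payload)
```

With those three strings in hand, the rest of the recipe is a plain MQTT connect-and-publish, which is why IBM can plausibly claim the three-minute figure.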

IoT is growing fast. Gartner forecasts 4.9 billion connected things in use in 2015, up 30% from 2014, reaching 25 billion by 2020. In terms of revenue, this is huge: IDC predicts the worldwide IoT market will grow from $655.8 billion in 2014 to $1.7 trillion in 2020, a compound annual growth rate (CAGR) of 16.9%. For IT people who figure out how to do this, the opportunity will be boundless. Every organization will want to connect its devices to other devices via IoT. The developerWorks Recipes community seems like a perfect way to get started.
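IDC's growth rate is easy to sanity-check; six years of compounding from $655.8 billion to $1.7 trillion works out to roughly 17%, consistent with the quoted 16.9% once you allow for rounding of the endpoints:

```python
# Sanity-checking IDC's arithmetic: CAGR from $655.8B (2014) to $1.7T (2020).
start, end, years = 655.8e9, 1.7e12, 6
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 17%
```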

IoT isn’t exactly new. Manufacturers have cobbled together machine-to-machine (M2M) networks; banks and retailers have assembled networks of ATMs and POS terminals. DancingDinosaur has been writing about IoT for mainframe shops for several years. Now developerWorks Recipes promises a way for just about anyone to set up their own IoT easily and quickly while leveraging the cloud in the process. There is only a handful of recipes now, but the community provides a mechanism to add more, so expect the catalog to steadily increase. And developers are certain to take existing recipes and improvise on them.

IBM has been trying to simplify development for cloud, mobile, and IoT, starting with the launch of Bluemix last year. By connecting their IoT devices to IBM Bluemix, which today boasts more than 100 open-source tools and services, users can run advanced analytics, utilize machine learning, and tap into additional Bluemix services to accelerate the adoption of IoT and more.

As easy as IBM makes IoT development sound, this is a nascent effort industry-wide. There is a crying need for standards at every level to facilitate interoperability and data exchange among the many disparate devices, networks, and applications that will make up IoT. Multiple organizations have initiated standards efforts, but it will take some time to sort it all out.

And then there is the question of security. In a widely reported experiment by Wired Magazine, hackers were able to gain control of a popular smart vehicle. Given that cars are expected to be a major medium for IoT and every manufacturer is rushing to jam as much smart componentry into its vehicles as possible, you can only hope every automaker is scrambling for security solutions.

Home appliances represent another fat, lucrative market target for manufacturers that want to embed intelligent devices and IoT into all their products. What if hackers access your automatic garage door opener? Or worse yet, what if they turn off your coffee maker and water heater? Could you start the day without a hot shower and a cup of freshly brewed coffee and still function?

Running IoT through secure clouds like the IBM Cloud is part of the solution. And industry-specific clouds intended for IoT already are being announced, much like the Internet exchanges of a decade or two ago. Still, more work needs to be done on security and interoperability standards if IoT is to work seamlessly and broadly to achieve the trillions of dollars of economic value projected for it.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


developerWorks Open Reinforces IBM’s Commitment to Open Source Cloud

July 30, 2015

Maybe IBM’s decades-long legacy of proprietary platforms and systems makes people a little hesitant to fully embrace its open source initiatives. Still, IBM has been supporting Linux on the z System for over a decade, the Eclipse initiative for as long or longer, and gives no sign of getting ready to pull the plug on any of its open source initiatives.

Rise of Open Source: Linux and OpenStack. Courtesy of IBM (click to enlarge)

Or take Bluemix, an implementation of IBM’s Open Cloud Architecture based on Cloud Foundry, an open source Platform as a Service (PaaS) initiative. And the company only gets more open source by the day. Just last week IBM continued to pour more open source components into Bluemix. It announced developerWorks Open, a cloud-based environment for developers not only to download open-sourced IBM code but also to access blogs, videos, tools, and techniques that accelerate their own efforts on behalf of clients.

The current model of development in the open source community, according to IBM, lacks a strategic focus on business requirements. To address this IBM is launching a set of projects in industries like healthcare, mobile, retail, insurance, and banking that ensure a strategic business focus and address real-world business challenges.

The creation of developerWorks Open, notes IBM, comes at an important time for cloud developers. Organizations are wrestling with getting the most out of their multiple cloud environments. For instance, a developer building a cloud application on Bluemix for an insurance accident claim system will likely need to store videos and repair photos. The developer may have difficulty making a storage choice and then implementing it. But developerWorks Open helps simplify that choice and provides the reassurance that the choice has industry support.

To that end IBM is offering a broad range of technologies for developerWorks Open that aim to bridge this perceived development gap and remove the obstacles that inhibit developers from turning open source code into sustainable, enterprise-caliber applications that solve real business issues. IBM will also offer these services on Bluemix, its programming platform for cloud software development.

For that reason the company will open source a number of apps from its MobileFirst portfolio (MobileFirst was recently cited by Gartner as a leader in its Magic Quadrant mobile application development segment) that will assist developers in the following markets:

  • IBM Ready App for Healthcare tracks patient progress for at-home physical therapy programs via mobile device.
  • IBM Ready App for Retail personalizes and reshapes the specialty retail store shopping experience through a direct line of communication.
  • IBM Ready App for Insurance improves the relationship between homeowners and insurers and uses Internet of Things sensors to sync the home with utilities.
  • IBM Ready App for Banking helps financial institutions address the mobile needs of business owners and attract prospects.

Additionally, IBM is open sourcing several analytics technologies including:

  • Activity Streams provides developers with a standard model and encoding format for describing how users engage with the application and with one another.
  • Agentless System Crawler offers a unified cloud monitoring and analytics framework that enables visibility into all types of cloud platforms and runtimes.
  • IBM Analytics for Apache Spark puts the full analytics power and capabilities of Spark at the developer’s fingertips. (A beta is now available on Bluemix.)

IBM will also continue to open source cloud data services, including IBM Object Storage on Bluemix Service Broker, which can be used to integrate OpenStack Swift with Cloud Foundry to enable fast access to cloud data without needing to know where the data is stored.

The introduction of developerWorks Open comes at a time when organizations are starting to realize that their software and apps increasingly are their products, especially cloud, mobile, and collaboration apps, and they need a fast and efficient way to build and update them. In other cases, IBM notes, organizations are wrestling with getting the most out of their multiple cloud environments.

IBM is committed to open source; there is no going back. Company executives see it as the foundation of innovative application development in the cloud.  “With developerWorks Open we are open sourcing additional IBM innovations that we feel have the potential to grow the community and ecosystem and eventually become established technologies,” declared IBM Vice President of Cloud Architecture and Technology Dr. Angel Diaz recently.

Currently, IBM participates in and contributes to more than 150 open source projects. These include Spark, OpenStack, Cloud Foundry, the Open Container Project, Node.js, CouchDB, Linux, and Eclipse, plus a long-established relationship with Apache. As IBM notes, open source projects increase the skills and knowledge base around the company’s software product set. developerWorks Open is the next step in IBM’s strategy to help businesses create, use, and innovate around cloud computing systems. Coming right behind is a similar community initiative for IoT development. Stay tuned.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

z Systems and Cloud Lead IBM 2Q Results

July 24, 2015

DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.


Strong IBM cloud performance, Q2 2015 (click to enlarge)

As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency).  Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent.  Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).

It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the Open Power Foundation. A good move in theory, which DancingDinosaur applauds. Still, much depends on the Foundation gaining increased momentum and individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising but how fast products will arrive is unclear. There also is potential for the commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue also is not clear.

Cloud revenue was up more than 70 percent, adjusting for currency and divested businesses; up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM this is an encouraging development. The company’s cloud strategy is starting to bear fruit.

The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud in terms of cost and pricing for IT-using organizations is what amounts to a race to the bottom. With pricing immediately apparent and lower pricing just a click away, it has become a feast for the bottom grazers to whom the lowest price is all that matters. For companies like IBM and Oracle, which also has declared cloud a strategic initiative, and other large legacy enterprise platform providers, the challenge is to be competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings in Watson, Bluemix, and others, but can they deliver enough revenue fast enough to offset the reduction in legacy platform revenue? Remember, x86 is off IBM’s menu.

Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin, he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive (which they can be) and will do a better job for many of those workloads (correct again).

Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which seems to be doing a slow-motion platform migration off the mainframe that seemingly may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”

As noted above, how much revenue Power can generate for IBM depends on how fast the Open Power Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint.  There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take and whether the Rackspace initiative will catch on elsewhere.

In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur is expecting a business class z13 in the coming months and more.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Extends Moore’s Law with First 7nm Test Chip

July 17, 2015

In an announcement last week, IBM effectively extended Moore’s Law for at least another generation of chips, maybe two. This contradicts what leading vendors, including IBM, have been saying for years about the imminent diminishing returns of Moore’s Law, which postulated that chips would double in capacity every 18 to 24 months. Moore’s Law drove the price/performance curve the industry has been experiencing for the past several decades.
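To make that cadence concrete, a quick back-of-the-envelope calculation shows what 18-to-24-month doublings imply over a single decade:

```python
# Capacity growth over 10 years under Moore's Law doubling cadences.
growth_24 = 2 ** (10 * 12 / 24)  # doubling every 24 months: 5 doublings
growth_18 = 2 ** (10 * 12 / 18)  # doubling every 18 months: ~6.7 doublings
print(growth_24, round(growth_18))  # 32x at the slow end, ~100x at the fast end
```

That 32x-to-100x-per-decade compounding is the price/performance curve the rest of the post is mourning.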


Click to enlarge, courtesy of IBM

The announcement, ironically, coincides with IBM’s completion of the sale of its semiconductor fabrication business to GLOBALFOUNDRIES, which IBM paid to take the costly facilities off its hands. To pull off the 7nm achievement IBM partnered with a handful of players, including a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany.

To achieve the higher performance, lower power, and scaling benefits promised by 7nm technology, the IBM researchers turned to two main innovations: the use of silicon germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels, in effect bypassing conventional semiconductor manufacturing approaches.

Don’t expect to see new systems featuring these 7nm chips very soon. The announcement made no mention of any timetable for producing commercial products based on this technology. As Timothy Prickett Morgan, who writes extensively on IBM POWER Systems technology, observed: the use of silicon germanium for portions of the transistors cuts back on power consumption for the very fast switching necessary for improving circuit performance, and the circuits are etched using extreme ultraviolet (EUV) lithography. These technologies may be difficult and expensive to put into production.

In the meantime, IBM notes that microprocessors utilizing 22nm and 14nm technology run today’s servers, cloud data centers, and mobile devices, and 10nm technology already is well on the way to becoming mature. The 7nm chips promise even more: at least a 50% power/performance improvement for the next generation of mainframe and POWER systems that will fuel the big data, cloud, and mobile era, and soon the Internet of Things too.

The z13 delivers unbeatable performance today. With the zEC12 IBM boasted of the fastest commercial chip in the industry: 5.5 GHz on 32nm technology. It did not make that boast with the z13. Instead, the z13 runs 22nm cores at 5 GHz but still delivers a 40% total capacity improvement over the zEC12.

It does this by optimizing the stack top to bottom with 600 processors and 320 separate channels dedicated just to drive I/O throughput. The reason for not cranking up the clock speed on the z13, according to IBM, was the plateauing of Moore’s Law. The company couldn’t get enough boost for the tradeoffs it would have had to make. Nobody seems to be complaining about giving up that one-half GHz. Today the machine can process 2.5 billion transactions a day.
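That 2.5 billion transactions a day is easier to appreciate on a per-second basis; a one-line check:

```python
# The z13's quoted daily transaction volume, expressed per second.
per_day = 2_500_000_000
per_second = per_day / (24 * 60 * 60)
print(round(per_second))  # 28935, i.e. roughly 29,000 transactions every second
```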

The ride up the Moore’s Law curve has been very enjoyable for all. Companies took the additional processing power to build onto the chip more capabilities that otherwise would have required additional processors.  The result: more performance and more capabilities at lower cost. But all good things come to an end.

This 7nm breakthrough doesn’t necessarily restore Moore’s Law. At this point, the best we can guess is that it temporarily moves the price/performance curve to a new plane. Until we know the economics of mass fabrication in the 7nm silicon germanium world, we can’t tell whether we’ll see a doubling as before, or maybe just a half or a quarter, or maybe a tripling. We just don’t know.

For the past decade, Morgan reports, depending on the architecture, the thermal limits of systems imposed a clock speed limit on processors, and aside from some nominal instruction per clock (IPC) improvements with each recent microarchitecture change, clock speeds and performance for a processor stayed more or less flat. This is why vendors went parallel with their CPU architectures, in effect adding cores to expand throughput rather than increasing clock speed to boost performance on a lower number of cores. Some, like IBM, also learned to optimize at every level of the stack. As the z13 demonstrates, lots of little improvements do add up.
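The cores-versus-clock trade-off Morgan describes can be sketched with a toy throughput model; all numbers below are illustrative, not measurements of any real machine:

```python
# Toy model: aggregate throughput as cores x clock x instructions-per-clock.
def throughput(cores, ghz, ipc):
    return cores * ghz * ipc

base         = throughput(cores=8,  ghz=5.0, ipc=2.0)  # baseline
more_cores   = throughput(cores=16, ghz=5.0, ipc=2.0)  # doubling cores doubles throughput
faster_clock = throughput(cores=8,  ghz=5.5, ipc=2.0)  # +10% clock gains only 10%,
                                                       # and runs into thermal limits
print(base, more_cores, faster_clock)
```

With the clock pinned by thermals, adding cores (and squeezing the rest of the stack) is the only lever that still scales, which is exactly the path the z13 took.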

Things won’t stop here. As Morgan observes, IBM Research and the Microelectronics Division were working with GLOBALFOUNDRIES and Samsung and chip-making equipment suppliers who collaborate through the SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering in nearby Albany to get a path to 10 nm and then 7 nm processes even as the sale of GLOBALFOUNDRIES was being finalized.

The next step, he suggests, could possibly be at 4 nm but no one is sure if this can be done in a way that is economically feasible. If it can’t, IBM already has previewed the possibility of other materials that show promise.

Moore’s Law has been a wonderful ride for the entire industry. Let’s wish them the best as they aim for ever more powerful processors.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware and BMC Aim to Lower Mainframe Cost

July 9, 2015

Last February, Compuware and BMC announced a joint initiative to coordinate the use of their respective tools, BMC Cost Analyzer, BMC MainView, and Compuware Strobe, to reduce mainframe software licensing cost. DancingDinosaur wrote about it here. The announcement made this week confirms that the necessary integration between the various tools has been completed and is working as promised. Compuware this month also introduced the latest rev of its Topaz graphical management toolset for the z.


Compuware Topaz visual management screen (click to enlarge)

The tight integration between the BMC and Compuware tools enables mainframe staff to quickly and easily identify tuning opportunities that will have the greatest impact on their monthly software licensing costs. They do this by applying BMC Cost Analyzer’s visual mapping to the detailed batch and transaction information provided by Compuware Strobe. The cost savings result from moving workloads to non-peak periods, running IBM subsystems on fewer LPARs, and capping LPAR utilization.

Driving the need for this is the continuing growth of mainframe workloads, which are being pushed today by mobile workloads and, down the road, by new IoT workloads. These workloads already are driving up the Monthly License Charge (MLC) for IBM mainframe software, which led IBM to initiate z/OS discounts for mobile workloads on the z as well as introduce its new ICAP and country multiplex pricing. Compuware estimates that the integrated tools alone can reduce an organization’s mainframe software licensing costs by 10% or more.
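A simplified sketch of why shifting work off-peak cuts the MLC bill: the charge tracks the peak rolling four-hour average (R4HA) of MSU consumption, so the same batch workload billed against a quieter part of the day produces a lower peak. The hourly numbers below are invented purely for illustration:

```python
# Peak rolling four-hour average (R4HA) over a day of hourly MSU readings.
def peak_r4ha(hourly_msu):
    return max(sum(hourly_msu[i:i + 4]) / 4 for i in range(len(hourly_msu) - 3))

online = [200] * 8 + [400] * 8 + [200] * 8  # daytime online peak (hours 8-15)

# Same 300-MSU batch job, run on top of the online peak vs. in the dead of night:
batch_at_peak  = [o + (300 if 8 <= h < 12 else 0) for h, o in enumerate(online)]
batch_off_peak = [o + (300 if 0 <= h < 4 else 0) for h, o in enumerate(online)]

print(peak_r4ha(batch_at_peak), peak_r4ha(batch_off_peak))  # 700.0 500.0
```

Same total work, nearly 30% lower billable peak in this made-up example; that is the kind of tuning opportunity the integrated BMC/Compuware tooling is meant to surface.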

“Mainframe cost conservation is an imperative for public- and private-sector IT organizations attempting to fulfill customers’ escalating digital expectations within ever-limited budget constraints,” said Karen Robinson, a former CIO and now a consultant. “The work BMC and Compuware are doing together is a perfect fit for this universal imperative.”

Besides discounting mobile workloads, IBM also has been actively working to help organizations drive down mainframe costs in other ways. Moving workloads to z assist processors is a top choice. Another is to take advantage of a variety of IBM discounts; DancingDinosaur’s favorite is the Solution Edition program, which offers the deepest discounts if you can live with the constraints it imposes. And, of course, using tools like those from Compuware and BMC can save money every month. At a time when mainframe workloads are growing—Did someone say the mainframe was going away?—this is a sure path to relief.

Compuware made a second announcement impacting the mainframe this month. This involved adding capabilities to Topaz, its set of graphical z management tools aimed at millennial workers. The new capabilities address Java on the z, which is a key way to cash in on the savings available by shifting workloads to z assist processors, specifically the zIIP. The announcement is here. Again, DancingDinosaur initially wrote about Topaz earlier this year here.

As Compuware describes it, Topaz for Java Performance delivers comprehensive visibility into the performance and behavior of Java batch programs and WebSphere transactions running on the mainframe—including peak CPU utilization of specific Java methods and classes; garbage collection issues such as memory leaks and excessively long collection intervals; and threads that are blocked or not actually doing useful work.

According to Tonya Robison, VP Legacy Integrations, Conversion, De-commission at Alfa Insurance:  “Topaz is enabling us for a multi-channel, multi-platform future. Its functionality will allow us to work with mainframe and non-mainframe data in a common, visual, and intuitive manner, providing our next-gen developers the same agility and speed as our seasoned IT pros.” Robison is convinced that “products like Topaz will protect our current investments and enable the next generation of application developers.” These are the millennials everyone is trying to attract to the mainframe.

The new release of Topaz, the third this year alone, delivers two capabilities that enhance customers’ ability to maximize value from their mainframe:

  1. Topaz for Program Analysis gives developers intuitive, accurate visibility into the flow of data within their COBOL or PL/I applications—including how data gets into a field; how that field is used to set other fields; and how that field is used in comparisons. This promises to be especially useful for Millennials who may not have a wealth of experience with IBM z Systems.
  2. Topaz for Enterprise Data brings high-speed, compression-enabled host-to-host data copying that exploits IBM z Systems zIIP processors, thus reducing the burden on its general processors. This fast, efficient copying enables developers to complete their work more quickly and at less cost.

Efforts of Compuware, BMC, and others are rapidly changing the mainframe into an agile platform that can rival distributed platforms. Let’s hope the Millennials notice.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

IBM POWER8 Tops STAC-A2 Benchmark in Win for OpenPOWER

June 25, 2015

In mid-March the Security Technology Analysis Center (STAC) released the first audited STAC-A2 Benchmark results for a server using the IBM POWER8 architecture. STAC provides technology research and testing tools based on community-sourced standards. The March benchmark results showed that an IBM POWER8-based server can deliver more than twice the performance of the best x86 server when running standard financial industry workloads.


IBM Power System S824

This is not IBM just blowing its own horn. The STAC Benchmark Council consists of over 200 major financial firms and other algorithmic-driven enterprises as well as more than 50 leading technology vendors. Its mission is to explore technical challenges and solutions in financial services and develop technology benchmark standards useful to financial organizations.

The POWER8 system not only delivered more than twice the performance of the nearest x86 system but also set four new performance records for financial workloads, two of which apparently were new public records. This marked the first time the IBM POWER8 architecture has gone through STAC-A2 testing.

The community-developed STAC-A2 benchmark set represents a class of financial risk analytics workloads characterized by Monte Carlo simulation and Greeks computations. Greeks computations cover theta, rho, delta, gamma, cross-gamma, model vega, and correlation vega. Together they are referred to as the Greeks. Quality is assessed for single assets by comparing the Greeks obtained from the Monte Carlo with Greeks obtained from a Heston closed-form formula for vanilla puts and calls. Suffice it to say, this is an extremely CPU-intensive set of computations. For more detail, click here.
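To give a flavor of this class of workload—not the benchmark’s actual implementation, which prices under the Heston model—the sketch below Monte Carlo prices a vanilla European call under simple geometric Brownian motion and estimates its delta by bump-and-reprice with common random numbers. All parameters are illustrative:

```python
import math
import random

def mc_call_price(s0, k, r, sigma, t, paths, seed=42):
    """Monte Carlo price of a vanilla European call under geometric
    Brownian motion (STAC-A2 itself uses the richer Heston model)."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(paths):
        z = rng.gauss(0.0, 1.0)  # one terminal draw per simulated path
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        payoff_sum += max(st - k, 0.0)
    return math.exp(-r * t) * payoff_sum / paths

def mc_delta(s0, k, r, sigma, t, paths, bump=0.01):
    """Delta via central finite difference, reusing the same seed so the
    random draws cancel (the common-random-numbers trick)."""
    up = mc_call_price(s0 * (1 + bump), k, r, sigma, t, paths)
    down = mc_call_price(s0 * (1 - bump), k, r, sigma, t, paths)
    return (up - down) / (2 * s0 * bump)

# Illustrative at-the-money option: spot 100, strike 100, 2% rate,
# 20% volatility, one year to expiry, 50,000 simulated paths.
price = mc_call_price(100.0, 100.0, 0.02, 0.2, 1.0, 50_000)
delta = mc_delta(100.0, 100.0, 0.02, 0.2, 1.0, 50_000)
print(price, delta)  # roughly 8.9 and 0.58 (near the closed-form values)
```

Each Greek multiplies the path count again, which is why these workloads reward the many hardware threads and memory bandwidth discussed below.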

In this case, results were compared to other publicly released results of warm runs on the Greeks benchmark (STAC-A2.β2.GREEKS.TIME.WARM). The two-socket POWER8 server, outfitted with two 12-core 3.52 GHz POWER8 processor cards, achieved:

  • 2.3x the performance of the comparable x86 setup, an Intel white box with two Xeon E5-2699 v3 (Haswell EP) @ 2.30GHz.
  • 1.7x the performance of the best-performing x86 solution, an Intel white box with two Intel Xeon E5-2699 v3 processors (Haswell EP) @ 2.30GHz and one Intel Xeon Phi 7120A coprocessor.
  • Only 10% less performance than the best-performing solution, a Supermicro server with two 10-core Intel Xeon E5-2690 v2 @ 3.0GHz (Ivy Bridge) and one NVIDIA K80 GPU accelerator.

The POWER8 server also set new records for path scaling (STAC-A2.β2.GREEKS.MAX_PATHS) and asset capacity (STAC-A2.β2.GREEKS.MAX_ASSETS). Compared to the best four-socket x86-based solution — a server comprising four Xeon E7-4890 v2 (Ivy Bridge EX) parts running at 2.80 GHz — the POWER8 server delivered:

  • Double the throughput.
  • A 16 percent increase in asset capacity.

The STAC test system consisted of an IBM Power System S824 server with two 12-core 3.52 GHz POWER8 processor cards, equipped with 1TB of DRAM and running Red Hat Enterprise Linux version 7. The solution stack included the IBM-authored STAC-A2 Pack for Linux on Power Systems (Rev A), which used IBM XL, a suite for C/C++ developers that includes the C++ Compiler and the Mathematical Acceleration Subsystem libraries (MASS), and the Engineering and Scientific Subroutine Library (ESSL).

POWER8 processors are based on high-performance, multi-threaded cores, with each core of the Power System S824 server running up to eight simultaneous threads at 3.5 GHz. With POWER8, IBM also is able to tap the innovations of the OpenPOWER Foundation, including CAPI and a variety of accelerators that have started to ship.

The S824 also brings a very high bandwidth memory interface that runs at 192 GB/s per socket, almost three times the speed of a typical x86 processor. These factors, along with a balanced system structure including a large 8MB per core L3 cache, are the primary reasons why financial computing workloads run significantly faster on POWER8-based systems than on alternatives, according to IBM.

Sumit Gupta, vice president of HPC and OpenPOWER operations at IBM, reports that STAC-A2 gives a much more accurate view of expected performance than micro benchmarks or simple code loops do. This is especially important when the challenge is big data.

In his blog on the topic, Gupta elaborated on the big data challenge in the financial industry and the POWER8 advantages. STAC-A2 is a set of standard benchmarks that help estimate the relative performance of full systems running complete financial applications. This enables clients in the financial industry to evaluate how systems will perform on real applications. “Those are the kind of results that matter—real results for real client challenges,” Gupta wrote.

Gupta went on to note that the S824 also has a very high bandwidth memory interface. Combined with the large L3 cache noted above it can run financial applications noticeably faster than alternatives.  Combine the STAC results with data recently published by Cabot Partners and you have convincing proof that IBM POWER8-based systems have taken the performance lead in the financial services space (and elsewhere). The Cabot Partners report evaluates functionality, performance, and price/performance across several industries, including life sciences, financial services, oil and gas, and analytics while referencing standard benchmarks as well as application-oriented benchmark data.

Having sat through numerous briefings on POWER8 performance, DancingDinosaur felt reassured, but he doesn’t have to actually run these workloads. It is encouraging, however, to see proof in the form of third-party benchmarks like STAC and reports from Cabot Partners. Check out Cabot’s OpenPOWER report here.


IBM Continues Open Source Commitment with Apache Spark

June 18, 2015

If anyone believes IBM’s commitment to open source is a passing fad, forget it. IBM has invested billions in Linux, in OpenPOWER through the OpenPOWER Foundation, and more. Its latest move is the announcement of a major commitment to Apache Spark, a fast, general open source cluster computing system for big data.


Courtesy of IBM: developers work with Spark at Galvanize Hackathon

As IBM sees it, Spark brings essential advances to large-scale data processing. Specifically, it dramatically improves the performance of data dependent-apps and is expected to play a big role in the Internet of Things (IoT). In addition, it radically simplifies the process of developing intelligent apps, which are fueled by data. It does so by providing high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
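For a feel of that high-level API, here is a word count in the chained style Spark’s Python API encourages. In real PySpark this would be sc.textFile(...).flatMap(...).map(...).reduceByKey(...); plain Python stands in below so the sketch runs without a cluster:

```python
from collections import Counter

# Two stand-in input "lines"; in Spark these would come from
# sc.textFile() and be partitioned across the cluster.
lines = [
    "spark brings essential advances to large-scale data processing",
    "spark radically simplifies developing intelligent apps",
]

# flatMap: split every line into a stream of words.
words = (word for line in lines for word in line.lower().split())

# map + reduceByKey: tally occurrences of each word.
counts = Counter(words)

print(counts["spark"])  # 2
```

The point is the shape of the code: a few declarative transformations replace the boilerplate of hand-written distributed processing.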

IBM is contributing its breakthrough SystemML machine learning technology to the Spark open source ecosystem. Beyond the performance gains and simpler app development already noted, maybe Spark’s biggest advantage is that it can handle data coming from multiple, disparate sources.

What IBM likes in Spark is that it’s agile, fast, and easy to use. It also likes that Spark is open source, which ensures it is improved continuously by a worldwide community. Those are also among the main reasons mainframe and Power Systems data centers should pay attention to Spark. Spark will make it easier to connect applications to data residing in your data center. If you haven’t yet noticed an uptick in mobile transactions coming into your data center, they will be coming, and these benefit from Spark. And if you look out just a year or two, expect to see IoT applications adding to and needing to combine all sorts of data, much of it ending up on the mainframe or Power Systems in one form or another. So make sure Spark is on your radar screen.

Over the course of the next few months, IBM scientists and engineers will work with the Apache Spark open community to accelerate access to advanced machine learning capabilities and help drive speed-to-innovation in the development of smart business apps. By contributing SystemML, IBM hopes to help data scientists iterate faster to address the changing needs of business and to enable a growing ecosystem of app developers who will apply deep intelligence to everything.

To ensure that happens, IBM will commit more than 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide, and open a Spark Technology Center in San Francisco for the Data Science and Developer community to foster design-led innovation in intelligent applications. IBM also aims to educate more than 1 million data scientists and data engineers on Spark through extensive partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and Big Data University MOOC (Massive Open Online Course).

Of course, Spark isn’t going to be the last of the tools to expedite the latest app dev. With IoT just beginning to gain widespread interest, expect a flood of tools to expedite developing IoT data-intensive applications and more tools to facilitate connecting all these coming connected devices, estimated to number in the tens of billions within a few years.

DancingDinosaur applauds IBM’s decade-plus commitment to open source and its willingness to put real money and real code behind it. That means the IBM z System mainframe, the POWER platform, Linux, and the rest will be around for some time. That’s good; DancingDinosaur is not quite ready to retire.


API Economy Comes to the IBM z System

June 11, 2015

What comes to mind when you hear (or read) about a RESTful IBM z System? Hint: it is not a mainframe that is loafing. To the contrary, a RESTful mainframe probably is busier than it has ever been, now running a slew of new apps, most likely mobile or social apps with REST APIs connecting to z/OS-based web services plus its usual workloads. Remember web services when SOA first came to the mainframe? They continue today behind the new mobile, cloud, social, and analytical workloads that are putting the spotlight on the mainframe.


Courtesy of IBM: travel fuels mobile activity (click to enlarge)

A variety of Edge2015 sessions, given by Asit Dan, chief architect for z Service API Management, and Glenn Anderson of IBM Lab Services and Training, put what the industry refers to as the emerging API economy in perspective. The z, it should come as no surprise, lies at the heart of this burgeoning API economy, not only handling transactions but also providing governance and management to the API phenomenon that is exploding. Check out IBM’s APIs for Dummies.

The difference between first generation SOA and today’s API economy lies in the new workloads—especially mobile and cloud—fueling the surging interest. The mobile device certainly is the fastest growing platform and will likely become the largest platform soon if it is not already, surpassing desktop and laptop systems.

SOA efforts initially focused on the capabilities of the providers of services, noted Dan, particularly the development, run-time invocation, and management of services. The API economy, on the other hand, focuses on the consumption of these services. It really aims to facilitate the efforts of application developers (internal developers and external business partners) who must code their apps for access to existing and new API-enabled services.

One goal of an enterprise API effort is to access already deployed services, such as z-based CICS services or those of a partner. Maybe a more important goal, especially where the z is involved, is to drive use of mainframe software assets by customers, particularly mobile customers. The API effort not only improves customer service and satisfaction but could also drive added revenue. (Have you ever fantasized about the z as a direct revenue generator?)

This calls, however, for a new set of interfaces. As Dan notes in a recent piece, APIs for accessing these assets, defined using well-known standards such as web services and Representational State Transfer (REST) with JSON (JavaScript Object Notation), and published via an easily accessible catalog, make it efficient to subscribe to APIs for obtaining permissions and building new applications. Access to the APIs can now be controlled and tracked during run-time invocations (and even metered where revenue generation is the goal).
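In practice that subscription model boils down to an API key and JSON over HTTP. The sketch below shows the consumer’s side of such a call; the host, path, key, and payload are all hypothetical placeholders, and a canned reply stands in for the z/OS service so the example runs without network access:

```python
import json
import urllib.request

# Build the request a mobile app would send to an API gateway fronting
# a z/OS web service. The endpoint and key are invented for illustration;
# a real key would be issued when the developer subscribes via the catalog.
request = urllib.request.Request(
    "https://api.example.com/v1/accounts/12345/balance",
    headers={
        "Accept": "application/json",
        "X-API-Key": "demo-key-for-illustration",  # tracked, possibly metered
    },
)

# A canned JSON reply standing in for the CICS-backed service's response.
raw_reply = '{"account": "12345", "balance": 1044.17, "currency": "USD"}'
reply = json.loads(raw_reply)

print(reply["balance"])  # 1044.17
```

The mainframe service itself is untouched; the API layer wraps it, authenticates the key, and tracks each invocation.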

Now the API economy can morph into a commercial exchange of business functions, capabilities, and competencies as services using web APIs, noted Glenn Anderson at Edge2015. In-house business functions running on the z can evolve into an API as-a-service delivery vehicle, which amounts to another revenue stream for the mainframe data center.

The API economy often is associated with the concept of containers. Container technology provides a simplified way to make applications more mobile in a hybrid cloud, Anderson explained, and brings some distinct advantages. Specifically, containers are much smaller in size than virtual machines and provide more freedom in the placement of workloads in a cloud (private, public, hybrid) environment. Container technology is being integrated into OpenStack, which is supported on the z through IBM Cloud Manager. Docker is the best known container technology and it works with Linux on z.

With the combination of SOA, web services, REST, JSON, OpenStack, and Docker all z capable, a mainframe data center can fully participate in the mobile, apps, cloud API economy. BTW, POWER servers can play the API, OpenStack, Docker game too. Even Watson can participate in the API economy through IBM’s early March acquisition of AlchemyAPI, a provider of scalable cognitive computing API services. The acquisition will drive the API economy into cognitive computing too. Welcome to the mainframe API economy.


