Posts Tagged ‘CICS’

IBM POWER8 Tops STAC-A2 Benchmark in Win for OpenPOWER

June 25, 2015

In mid-March the Securities Technology Analysis Center (STAC) released the first audited STAC-A2 benchmark results for a server based on the IBM POWER8 architecture. STAC provides technology research and testing tools based on community-sourced standards. The March benchmark results showed that an IBM POWER8-based server can deliver more than twice the performance of a comparable x86 server when running standard financial industry workloads.

IBM Power System S824

This is not just IBM blowing its own horn. The STAC Benchmark Council consists of over 200 major financial firms and other algorithm-driven enterprises as well as more than 50 leading technology vendors. Their mission is to explore technical challenges and solutions in financial services and develop technology benchmark standards that are useful to financial organizations.

The POWER8 system not only delivered more than twice the performance of the nearest x86 system but also set four new performance records for financial workloads, two of which apparently were new public records. This marked the first time the IBM POWER8 architecture had gone through STAC-A2 testing.

The community-developed STAC-A2 benchmark suite represents a class of financial risk analytics workloads characterized by Monte Carlo simulation and Greeks computations. The Greeks computations cover theta, rho, delta, gamma, cross-gamma, model vega, and correlation vega; together they are referred to as the Greeks. Quality is assessed for single assets by comparing the Greeks obtained from the Monte Carlo simulation with Greeks obtained from a Heston closed-form formula for vanilla puts and calls. Suffice it to say, this is an extremely CPU-intensive set of computations. For more detail, click here.
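To give a rough sense of what such a workload involves, below is a minimal Python sketch of a single Greek estimated by bump-and-revalue Monte Carlo. It is only a hint of why the benchmark is so CPU-hungry: it uses a simple geometric Brownian motion model and one vanilla call rather than the Heston model, multi-asset portfolios, and full Greek set that STAC-A2 specifies, and every parameter in it is an illustrative assumption.

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths, seed):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

def mc_delta(s0, k, r, sigma, t, n_paths, bump=0.01, seed=42):
    """Estimate delta by bump-and-revalue, reusing the same seed (common random numbers)."""
    up = mc_call_price(s0 * (1 + bump), k, r, sigma, t, n_paths, seed)
    down = mc_call_price(s0 * (1 - bump), k, r, sigma, t, n_paths, seed)
    return (up - down) / (2 * s0 * bump)

if __name__ == "__main__":
    # Illustrative parameters only; STAC-A2 specifies the Heston model, many assets,
    # the full set of Greeks, and far larger path counts.
    print("estimated delta:", mc_delta(s0=100.0, k=105.0, r=0.02, sigma=0.3, t=1.0, n_paths=1_000_000))
```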

In this case, results were compared to other publicly released results of warm runs on the Greeks benchmark (STAC-A2.β2.GREEKS.TIME.WARM). The two-socket POWER8 server, outfitted with two 12-core 3.52 GHz POWER8 processor cards, achieved:

  • 2.3x performance over the comparable x86 setup, an Intel white box with two Xeon E5-2699 v3 (Haswell EP) @ 2.30GHz.
  • 1.7x the performance of the best-performing x86 solution, an Intel white box with two Intel Xeon E5-2699 v3 processors (Haswell EP) @ 2.30GHz and one Intel Xeon Phi 7120A coprocessor.
  • Only 10% less performance than the best-performing solution, a Supermicro server with two 10-core Intel Xeon E5-2690 v2 @ 3.0GHz (Ivy Bridge) and one NVIDIA K80 GPU accelerator.

The POWER8 server also set new records for path scaling (STAC-A2.β2.GREEKS.MAX_PATHS) and asset capacity (STAC-A2.β2.GREEKS.MAX_ASSETS). Compared to the best four-socket x86-based solution — a server comprising four Xeon E7-4890 v2 (Ivy Bridge EX) processors running at 2.80 GHz — the POWER8 server delivered:

  • Double the throughput.
  • A 16 percent increase in asset capacity.

The STAC test system consisted of an IBM Power System S824 server with two 12-core 3.52 GHz POWER8 processor cards, equipped with 1TB of DRAM and running Red Hat Enterprise Linux version 7. The solution stack included the IBM-authored STAC-A2 Pack for Linux on Power Systems (Rev A), which used the IBM XL suite for C/C++ developers, including the C++ compiler, the Mathematical Acceleration Subsystem (MASS) libraries, and the Engineering and Scientific Subroutine Library (ESSL).

POWER8 processors are based on high-performance, multi-threaded cores, with each core of the Power System S824 server running up to eight simultaneous threads at 3.5 GHz. With POWER8, IBM also is able to tap the innovations of the OpenPOWER Foundation, including CAPI and a variety of accelerators that have started to ship.

The S824 also brings a very high bandwidth memory interface that runs at 192 GB/s per socket, almost three times the bandwidth of a typical x86 processor. These factors, along with a balanced system structure that includes a large 8MB-per-core L3 cache, are the primary reasons why financial computing workloads run significantly faster on POWER8-based systems than on alternatives, according to IBM.

Sumit Gupta, vice president of HPC and OpenPOWER operations at IBM, reports that STAC-A2 gives a much more accurate view of expected performance than micro-benchmarks or simple code loops do. This is especially important when the challenge is big data.

In his blog on the topic, Gupta elaborated on the big data challenge in the financial industry and the POWER8 advantages. STAC-A2 is a set of standard benchmarks that help estimate the relative performance of full systems running complete financial applications. This enables clients in the financial industry to evaluate how systems will perform on real applications. “Those are the kind of results that matter—real results for real client challenges,” Gupta wrote.

Gupta went on to note that the S824 also has a very high bandwidth memory interface. Combined with the large L3 cache noted above, it can run financial applications noticeably faster than alternatives. Combine the STAC results with data recently published by Cabot Partners and you have convincing proof that IBM POWER8-based systems have taken the performance lead in the financial services space (and elsewhere). The Cabot Partners report evaluates functionality, performance, and price/performance across several industries, including life sciences, financial services, oil and gas, and analytics, while referencing standard benchmarks as well as application-oriented benchmark data.

Having sat through numerous briefings on POWER8 performance, DancingDinosaur felt reassured, but he doesn’t have to actually run these workloads. It is encouraging, however, to see proof in the form of third-party benchmarks like STAC and reports from Cabot Partners. Check out Cabot’s OpenPOWER report here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

IBM Continues Open Source Commitment with Apache Spark

June 18, 2015

If anyone believes IBM’s commitment to open source is a passing fad, forget it. IBM has invested billions in Linux, in OpenPOWER through the OpenPOWER Foundation, and more. Its latest move is the announcement of a major commitment to Apache Spark, a fast, general-purpose open source cluster computing system for big data.

Courtesy of IBM: developers work with Spark at Galvanize Hackathon

As IBM sees it, Spark brings essential advances to large-scale data processing. Specifically, it dramatically improves the performance of data-dependent apps and is expected to play a big role in the Internet of Things (IoT). In addition, it radically simplifies the process of developing intelligent apps, which are fueled by data. It does so by providing high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools, including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
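As a rough illustration of those high-level APIs, here is a minimal PySpark sketch, assuming a current PySpark installation (the data and column names are made up), that touches both the DataFrame API and Spark SQL mentioned above.

```python
from pyspark.sql import SparkSession

# Assumes pyspark is installed locally; data and column names are illustrative only.
spark = SparkSession.builder.appName("spark-sketch").getOrCreate()

rows = [("mobile", 120.0), ("web", 75.5), ("mobile", 42.0), ("branch", 300.0)]
df = spark.createDataFrame(rows, ["channel", "amount"])

# DataFrame API: total transaction amount by channel.
df.groupBy("channel").sum("amount").show()

# Spark SQL over the same data.
df.createOrReplaceTempView("transactions")
spark.sql("SELECT channel, COUNT(*) AS n FROM transactions GROUP BY channel").show()

spark.stop()
```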

IBM is contributing its breakthrough IBM SystemML machine learning technology to the Spark open source ecosystem. Beyond the performance and development advantages just noted, maybe Spark’s biggest advantage is that it can handle data coming from multiple, disparate sources.

What IBM likes in Spark is that it’s agile, fast, and easy to use. It also likes that Spark is open source, which ensures it is improved continuously by a worldwide community. Those are also some of the main reasons mainframe and Power Systems data centers should pay attention to Spark. Spark will make it easier to connect applications to data residing in your data center. If you haven’t yet noticed an uptick in mobile transactions coming into your data center, they will be coming, and these transactions benefit from Spark. And if you look out just a year or two, expect to see IoT applications adding to and needing to combine all sorts of data, much of it ending up on the mainframe or Power Systems in one form or another. So make sure Spark is on your radar screen.

Over the course of the next few months, IBM scientists and engineers will work with the Apache Spark open community to accelerate access to advanced machine learning capabilities and help drive speed-to-innovation in the development of smart business apps. By contributing SystemML, IBM hopes data scientists will iterate faster to address the changing needs of business and to enable a growing ecosystem of app developers who will apply deep intelligence to everything.

To ensure that happens, IBM will commit more than 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide, and open a Spark Technology Center in San Francisco for the Data Science and Developer community to foster design-led innovation in intelligent applications. IBM also aims to educate more than 1 million data scientists and data engineers on Spark through extensive partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and Big Data University MOOC (Massive Open Online Course).

Of course, Spark isn’t going to be the last of the tools to expedite the latest app dev. With IoT just beginning to gain widespread interest, expect a flood of tools to expedite the development of data-intensive IoT applications, plus more tools to facilitate connecting all these coming devices, estimated to number in the tens of billions within a few years.

DancingDinosaur applauds IBM’s decade-plus commitment to open source and its willingness to put real money and real code behind it. That means the IBM z System mainframe, the POWER platform, Linux, and the rest will be around for some time. That’s good; DancingDinosaur is not quite ready to retire.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

API Economy Comes to the IBM z System

June 11, 2015

What comes to mind when you hear (or read) about a RESTful IBM z System? Hint: it is not a mainframe that is loafing. To the contrary, a RESTful mainframe probably is busier than it has ever been, now running a slew of new apps, most likely mobile or social apps with REST APIs connecting to z/OS-based web services plus its usual workloads. Remember web services when SOA first came to the mainframe? They continue today behind the new mobile, cloud, social, and analytical workloads that are putting the spotlight on the mainframe.

Courtesy of IBM: travel fuels mobile activity (click to enlarge)

A variety of Edge2015 sessions, given by Asit Dan, chief architect, z Service API Management, and Glenn Anderson, IBM Lab Services and Training, put what the industry refers to as the emerging API economy in perspective. The z, it should come as no surprise, lies at the heart of this burgeoning API economy, not only handling transactions but also providing governance and management for the exploding API phenomenon. Check out IBM’s APIs for Dummies.

The difference between first generation SOA and today’s API economy lies in the new workloads—especially mobile and cloud—fueling the surging interest. The mobile device certainly is the fastest growing platform and will likely become the largest platform soon if it is not already, surpassing desktop and laptop systems.

SOA efforts initially focused on the capabilities of the providers of services, noted Dan, particularly the development, run-time invocation, and management of services. The API economy, on the other hand, focuses on the consumption of these services. It really aims to facilitate the efforts of application developers (internal developers and external business partners) who must code their apps for access to existing and new API-enabled services.

One goal of an enterprise API effort is to access already deployed services, such as z-based CICS services or those of a partner. Maybe a more important goal, especially where the z is involved, is to drive use of mainframe software assets by customers, particularly mobile customers. The API effort not only improves customer service and satisfaction but could also drive added revenue. (Have you ever fantasized about the z as a direct revenue generator?)

This calls, however, for a new set of interfaces. As Dan notes in a recent piece, APIs for accessing these assets, defined using well-known standards such as web services and Representational State Transfer (REST) with JSON (JavaScript Object Notation), and published via an easily accessible catalog, make it efficient to subscribe to APIs for obtaining permissions and building new applications. Access to the APIs can now be controlled and tracked during run-time invocations (and even metered where revenue generation is the goal).
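To make the consumption side concrete, here is a minimal Python sketch of what invoking such a catalog-published API might look like from the application developer’s side. The endpoint, header, and JSON fields are hypothetical placeholders, not a real IBM or z/OS interface; an actual service would publish its own URL, credentials scheme, and schema.

```python
import requests

# Hypothetical endpoint and credentials for illustration only.
BASE_URL = "https://api.example.com/banking/v1"
API_KEY = "your-subscription-key"

def get_account_balance(account_id: str) -> dict:
    """Call a REST API fronting a back-end service and return its JSON payload."""
    resp = requests.get(
        f"{BASE_URL}/accounts/{account_id}/balance",
        headers={"X-API-Key": API_KEY, "Accept": "application/json"},
        timeout=10,
    )
    # Run-time access control would surface here as 401/403; metering happens server-side.
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_account_balance("12345"))
```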

Now the API economy can morph into a commercial exchange of business functions, capabilities, and competencies as services using web APIs, noted Glenn Anderson at Edge2015. In-house business functions running on the z can evolve into an API as-a-service delivery vehicle, which amounts to another revenue stream for the mainframe data center.

The API economy often is associated with the concept of containers. Container technology provides a simplified way to make applications more mobile in a hybrid cloud, Anderson explained, and brings some distinct advantages. Specifically, containers are much smaller than virtual machines and provide more freedom in the placement of workloads in a cloud (private, public, hybrid) environment. Container technology is being integrated into OpenStack, which is supported on the z through IBM Cloud Manager. Docker is the best-known container technology, and it works with Linux on z.

With the combination of SOA, web services, REST, JSON, OpenStack, and Docker all z-capable, a mainframe data center can fully participate in the mobile, apps, and cloud API economy. BTW, POWER servers can play the API, OpenStack, and Docker game too. Even Watson can participate in the API economy through IBM’s early March acquisition of AlchemyAPI, a provider of scalable cognitive computing API services. The acquisition will drive the API economy into cognitive computing too. Welcome to the mainframe API economy.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Real Time Analytics on the IBM z13

June 4, 2015

For years organizations have been putting their analytics on distributed platforms, thinking that was the only way to get fast, real-time, and predictive analytics. Maybe that was once true, but not anymore. Turns out the IBM z System, especially the z13, is not only well suited for real-time, predictive analytics but preferable.

IBM today is so bullish on analytics, especially predictive analytics, that last month it introduced 20 pre-built industry-specific predictive analytics solutions. To build these solutions IBM tapped its own experience from 50,000 engagements as well as an array of outside organizations with success in predictive analytics, including Urban Outfitters, National Grid, Deloitte, Bolsa de Santiago, Interactive Data Managed Solutions, and Bendigo and Adelaide Bank, among others.

Courtesy of IBM (click to enlarge)

The truth of the matter is that without efficient real-time, predictive analytics, managers get it wrong most of the time when it comes to making operational decisions, said Paul DiMarzio, IBM z Systems Big Data and Analytics Worldwide Portfolio Marketing Manager. He spoke at IBM Edge2015 in a session titled When Milliseconds Matter: Architecting Real-Time Analytics into Operational Systems. His key point: you can do this completely within the IBM z System.

The old notion of sending data someplace else, to distributed systems, for analytics now appears ridiculous, especially with the introduction of systems like the z13 that can handle operations and perform real-time analytics concurrently. The z13 performs analytics fast enough that you can make decisions while the action is still going on. Now the only question is whether we have the right business rules and scoring models. The data already are there, and the tools are ready and waiting on the z13.

You start with the IBM SPSS Modeler with Scoring Adapter for zEnterprise. Its real-time predictive analytics capability delivers better, more profitable decisions at the point of customer impact. For business rules, just turn to the IBM Operational Decision Manager for z/OS, which codifies business policies, practices, and regulations.

IBM SPSS improves accuracy by scoring directly within the transactional application against the latest committed data. As such, it delivers the performance needed to meet operational SLAs and avoids data governance and security issues, while saving network bandwidth, data-copying latency, and disk storage.

In addition to SPSS and the Operational Decision Manager, the z13 brings many capabilities, some of them new with the z13. For starters, the z13 excels as a custodian of the data model, providing an accurate, secure, single copy of information that, according to IBM, ensures the veracity of the data necessary for reliable analytics and provides centralized control over decision information.

Specifically, the machine brings SIMD (single instruction, multiple data) instructions plus the MASS (Mathematical Acceleration Subsystem) and ATLAS (Automatically Tuned Linear Algebra Software) libraries for z/OS and Linux on z. SIMD enables the same operation to be performed on several data elements at the same time rather than sequentially. MASS and ATLAS help programmers create better and more complex analytic models.
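SIMD itself is a hardware feature, so it cannot be demonstrated directly from a scripting language, but the data-parallel idea behind it is easy to see. The NumPy sketch below, with an illustrative array size, applies one operation across a whole array instead of looping element by element; libraries such as MASS and ATLAS, and compilers targeting the POWER8 SIMD unit, make the analogous transformation at the machine-instruction level.

```python
import time
import numpy as np

x = np.random.rand(5_000_000)  # illustrative size

# Scalar style: one element at a time, the pattern SIMD is designed to replace.
t0 = time.perf_counter()
out_loop = np.empty_like(x)
for i in range(x.size):
    out_loop[i] = x[i] * 2.0 + 1.0
t_loop = time.perf_counter() - t0

# Data-parallel style: one operation applied to all elements at once.
t0 = time.perf_counter()
out_vec = x * 2.0 + 1.0
t_vec = time.perf_counter() - t0

assert np.allclose(out_loop, out_vec)
print(f"loop: {t_loop:.2f}s   array-at-once: {t_vec:.3f}s")
```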

In addition, increases in memory to as much as 10 TB, faster I/O, and simultaneous multi-threading (SMT) generally boost the overall throughput of the z13, which will surely benefit any analytics being run on the machine, especially real-time, predictive analytics. Analytics on the z13 also gains from deep integration with core systems, the integrated architecture, and its single-pane management view.

The latest IBM Redbook on z13 analytics sums it up this way: z Systems analytics enables organizations to improve performance and lower cost by bringing the analytic processing to where the data resides. Organizations can therefore maximize their current IT investments while adding functionality and improved price and performance with the z13. And with the new z13 features, applications can gain increased throughput for operational business intelligence (operational BI) and DB2 query workloads, which saves money (hardware, software, labor).

The Redbook offers the following example: a user with a mobile application signs on and initiates a transaction flow through an IBM MobileFirst Platform Server running on Linux on z. The event goes to an LDAP server on z/OS to validate the user’s sign-on credentials. After successful validation, the transaction proceeds through the z/OS transaction environment, where all of the data resides in DB2 z/OS. IBM CICS transactions also are processed in the same z environment, and all of the analysis is performed without moving any data, resulting in extremely fast performance. Sweet.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

IBM Edge Rocks 6000 Strong for Digital Transformation

May 15, 2015

Unless you’ve been doing the Rip Van Winkle thing, you have to have noticed that a profound digital transformation is underway, fueled, in this case, from the bottom. “This is being driven by people embracing technology,” noted Tom Rosamilia, Senior Vice President, IBM Systems. And it will only get greater with quantum computing, a peek at which was provided at Edge2015 by Arvind Krishna, senior vice president and director, IBM Research.

(Quantum computing, courtesy of IBM, click to enlarge)

Need proof? Just look around. New cars are now hot spots, and it’s not just luxury cars. Retailers are adding GPS inside their stores and are using it to follow and understand the movement of shoppers in real time. Eighty-two percent of millennials do their banking from their mobile phones. As Rosamilia noted, it amounts to “an unprecedented digital disruption” in the way people go about their lives. Dealing with this digital transformation and the challenges and opportunities it presents was what IBM Edge2015 was about. With luck you can check out much of Edge2015 at the media center here.

The first day began with a flurry of product announcements, starting with a combined package of new servers plus storage software and solutions aimed at accelerating the development of hybrid cloud computing. Hybrid cloud computing was big at Edge2015. To further stimulate hybrid computing, IBM introduced new flexible software licensing of its middleware to help companies speed their adoption of hybrid cloud environments.

Joining in the announcement was Rocket Software, which sponsored the entertainment, including the outstanding Grace Potter concert. As for Rocket’s actual business, the company announced Rocket Data Access Service on Bluemix for z Systems, intended to provide companies a simplified connection to data on the IBM z Systems mainframe for development of mobile applications through Bluemix. Starting in June, companies can access a free trial of the service, which works with a range of database and storage systems, including VSAM, Adabas, IMS, CICS, and DB2, and enables access through common mobile application interfaces, including MongoDB, JDBC, and the REST protocol. Now z shops have no excuse not to connect their systems with mobile and social business.

Servers and storage also grabbed the spotlight. IBM introduced new systems, including the IBM Power System E850, a four-socket server with flexible capacity and up to 70% guaranteed utilization. The E850 targets cloud service providers and medium or large enterprises looking to securely and efficiently deploy multi-tenancy workloads while speeding access to data through larger in-memory databases with up to 4TB of installed memory.

The IBM Power System E880, designed to scale to 192 cores, is suitable for IBM DB2 with BLU Acceleration, enhancing the efficiency of cloud deployments. Also announced was the PurePOWER System, a converged infrastructure for cloud, intended to help deliver insights via the cloud and managed with OpenStack.

The company also will be shipping IBM Spectrum Control Storage Insights, a new software-defined storage offering that provides data management as a hybrid cloud service to optimize on-premises storage infrastructures. Storage Insights is designed to simplify storage management by improving storage visibility while applying analytics to ease capacity planning, enhance performance monitoring, and improve storage utilization by reclaiming under-utilized storage. Thank you, analytics.

Finally, for storage, the company announced IBM XIV Gen3, designed for cloud with real-time compression that enables scaling as demand for data storage capacity expands. You can get more details on all the announcements at Edge2015 here.

Already announced is IBM Edge 2016, again at the Venetian in Las Vegas in October 2016. That gives IBM 18 months to pack it with even more advances. Doubt there will be a new z by then; a new business class version of the z13 is more likely.

DancingDinosaur will take up specific topics from Edge2015 in the coming week. These will include social business on z, real-time analytics on z, and Jon Toigo sorting through the hype on SDS.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Variety of System Vendors at IBM Edge2015

May 7, 2015

An interesting set of vendor sponsors and exhibitors is lined up for IBM Edge2015 in Las Vegas next week. For the past few weeks DancingDinosaur has focused on a small selection of program sessions. Now let’s take a look at some of the vendors that will be there.

DancingDinosaur loves the vendors because they’re usually the ones underwriting the free entertainment, food, and drinks as well as giving out the nifty stuff. (My daughters used to love going off to school with what they considered cool multi-colored pens, Day-Glo bouncing balls, folding Frisbees, and more, which I picked up free at different vendors’ booths.)

IBM enterprise cloud platform (click to enlarge)

Let’s start with Rocket Software. DancingDinosaur thinks of them mainly as a mainframe software provider with products for data management, performance optimization, catalog and system management, disaster recovery, storage management, and security. They also offer a bunch of interesting free utilities. At the end of April Rocket announced Rocket Discover, a self-service, intuitive data preparation and discovery solution that lets business managers and executives easily access, manipulate, prepare, and visualize data.

Both Brocade and Cisco will be there. In April, for instance, Brocade announced innovations for its campus LAN switch family. The switch is intended to help organizations easily scale to meet increasing campus bandwidth demands. For instance, it will deliver the industry’s highest 10 Gigabit Ethernet (GbE) port density for any switch in its class to accommodate what the company refers to as the onslaught of user video and wireless traffic that is taxing campus networks.

In early May Cisco announced that Eletrobras, a Brazilian electric utility, would use Cisco’s technology for a smart metering initiative. The project is expected to improve operational efficiency by improving service quality and the control of non-technical losses, which, according to the company, reach 22% of required energy in the North and 10% in the Northeast of Brazil.

Of course Red Hat and SUSE, currently the leading Linux providers for the mainframe, will be there. DancingDinosaur has gotten some of his favorite baseball hats from each of these companies at previous IBM Edge conferences.

Red Hat introduced a new business resource planner as part of the latest releases of Red Hat JBoss BPM Suite and Red Hat JBoss BRMS. The planner, based on the open source OptaPlanner JBoss community project, is designed to help enterprises address complex scheduling and resource planning challenges. It also promises to increase operational adaptability in the face of rapidly changing and unpredictable business environments.

In late April SUSE announced the upcoming availability of SUSE Linux Enterprise Server for SAP Applications based on SUSE Linux Enterprise 12. New features, such as full operating system rollback, live kernel patching, and installation automation, should help simplify deployment and can increase uptime of mission-critical SAP solution-based workloads on Linux. SUSE customers should save time and resources as they experience improved performance and reliability.

Since the topic is Linux, let’s not forget that Canonical’s Ubuntu, usually regarded as a desktop Linux distribution, is moving onto server platforms. At present Ubuntu is supported on POWER8 but not z. Ubuntu is included in numerous program sessions at Edge2015, for example, Ubuntu on Power – Using PowerKVM, presented by James Nash. The session covers various aspects to consider when moving to Ubuntu on the Power platform running in a PowerKVM environment.

In the exhibition area, where most people congregate for free food and drink after the program sessions, there are over 30 exhibitors, including a handful of IBM units. For example, H&W Computer Systems provides mainframe tools that enable you to run batch jobs during the business day without impacting CICS, automatically convert JES2 output to PDF or other formats, or use ISPF-like features to manage mainframe datasets. This is hardcore mainframe stuff.

An interesting exhibitor is ownCloud, an enterprise file sync and share system that is hosted in your data center, on your servers, using your storage. ownCloud provides Universal File Access through a single front end to all of your disparate systems. Users can access company files on any device, anytime, from anywhere, while IT can manage, control, and audit file sharing activity to ensure security and compliance measures are met. (DancingDinosaur could actually use something like this—make a note to check out this exhibitor.)

Recommend you spend a couple of late afternoons grazing through the exhibitor space, enjoying the food and drink, catching some demos, and collecting a new wardrobe of t-shirts and baseball caps.  And don’t forget to pick up some of the other funky stuff for your kids.

Of course, plan to save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter (check her out here), there also will be a weird but terrific group, 2Cellos.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. If you are attending IBM Edge2015—now sold out—please look for me hanging out wherever people gather around available power outlets to recharge mobile devices.

POWER Systems for Cloud & Linux at IBM Edge2015

April 23, 2015

In October, IBM introduced a new range of POWER systems capable of handling massive amounts of computational data faster at nearly 20 percent better price/performance than comparable Intel Xeon v3 processor-based systems, delivering to clients a superior alternative to closed, commodity-based data center servers. DancingDinosaur covered it last October here. Expect this theme to play out big at IBM Edge2015 in Las Vegas, May 10-15.

Just a sampling of the many POWER sessions makes that clear:

Courtesy of Studio Stence, Power S824L (click to enlarge)

(lCV1655) Linux on Power and Linux on Intel: Side By Side, IT Economics Positioning; presenter Susan Proietti Conti

Based on real cases studied by the IBM Eagle team for many customers in different industries and geographies, this session explains where and when Linux on Power provides a competitive alternative to Linux on Intel. The session also highlights the IT economic value of architecture choices provided by the Linux/KVM/Power stack, based on open technologies brought by POWER8 and managed through OpenStack. DancingDinosaur periodically covers studies like these here and here.

(lCV1653) Power IT Economics Advantages for Cloud Service Providers and Private Cloud Deployment; presenter Susan Proietti Conti

Since the announcement of POWER8 and building momentum of the OpenPOWER consortium, there are new reasons for cloud service providers to look at Power technology to support their offerings. As an alternative open-based technology to traditional proprietary technologies, Power offers many competitive advantages that can be leveraged for cloud service providers to deliver IaaS services and other types of service delivery. This session illustrates what Power offers by highlighting client examples and the results of IT economics studies performed for different cloud service providers.

(lSY2653) Why POWER8 Is the Platform of Choice for Linux; presenter Gary Andrews

Linux is the platform of choice for running next generation workloads. With POWER8, IBM is investing heavily in Linux and is adding major enhancements to the Power platform to make it the server of choice for running Linux workloads. This session discusses the new features and how they can help run business faster and at lower cost on the Power platform. Andrews also points out many advanced features of Linux on Power that you can’t get with Linux on x86. He shows how competitive comparisons and performance tests demonstrate that POWER8 increases its lead over the latest x86 processor family. In short, attend this session to understand the competitive advantages that POWER8 on Linux can deliver compared to Linux on x86.

(pBA1244) POWER8: Built for Big Data; presenter William Starke

Starke explains how IBM technologies from semiconductors through micro-architecture, system design, system software, and database and analytic software culminate in the POWER8 family of products optimized around big data analytics workloads. He shows how the optimization across these technologies delivers order-of-magnitude improvements via several example scenarios.

 (pPE1350) Best Practices Guide to Get Maximum Performance from IBM POWER8; presenter Archana Ravindar

This session presents a set of best practices that have been tried and tested in various application domains to get the maximum performance of an application on a POWER8 processor. Performance improvement can be gained at various levels: the system level, where system parameters can be tuned; the application level, where some parameters can be tuned as there is no one-size-fits-all scenario; and the compiler level, where options for every kind of application have shown to improve performance. Some options are unique to IBM and give an edge over competition in gaming applications. In cases where applications are still under development, Ravindar presents guidelines to ensure the code runs fastest on Power.

DancingDinosaur supports strategies that enable data centers to reuse existing resources, like this one: (pCV2276) Developing a POWERful Cloud Strategy; presenter Susan Schreitmueller

Here you get to examine decision points for how and when to use an existing Power infrastructure in a cloud environment. This session covers on-premises and off-premises, single vs. multi-tenant hosting, and security concerns. You also review IaaS, PaaS, and hybrid cloud solutions incorporating existing assets into a cloud infrastructure. Discover provisioning techniques to go from months to days and then to hours for new instances.

One session DancingDinosaur hasn’t found yet is one on whether it is less costly for an enterprise to virtualize a couple of thousand Linux virtual machines on one of the new IBM Power servers pictured above or on the z13 as an Enterprise Linux Server purchased under the System z Solution Edition Program. Hmm, will have to ask around about that. But either way you’d end up with very low-cost VMs compared to x86.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter (check her out here), there will be a weird but terrific group, 2Cellos, as well.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. Please join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM z Systems at Edge2015

April 9, 2015

There are so many interesting z Systems sessions at IBM Edge2015 that DancingDinosaur can’t come close to attending them all or even writing about them.  Edge2015 will be in Las Vegas, May 10-15, at the Venetian, a huge hotel that just happens to have a faux Venice canal running within it (and Vegas is in the desert, remember).

The following offers a brief summation of a few z Systems sessions that jumped out at me. In the coming weeks DancingDinosaur will look at sessions on storage, Power Systems, cross-platform topics, and middleware. IBM bills Edge2015 as the Infrastructure Innovation Conference, so this blog will try at least to touch on bits of all of it. Am including the session numbers and presenters, but please note that sessions and presenters may change.

Mobile as the next evolution, courtesy of IBM (click to enlarge)

Session zBA1909; Mobile and Analytics Collide – A New Tipping Point; presenter Mark Simmonds

DancingDinosaur started following mobile on z in 2012 and was reporting IBM mobile successes as recently as last month, click here. In this session Simmonds observes organizations being driven to deliver more insight and smarter outcomes in pursuit of increasing revenue and profit while lowering business costs and risks. The ubiquity of mobile devices adds two important dimensions to business analytics: the time and location of customers. Now you have an opportunity to leverage both via the mobile channel, but only if your analytics strategy can respond to the demands of the mobile moment. At this session you’ll see how customers are using IBM solutions and the z to deliver business-critical insight across the mobile community and hear how organizations are setting themselves apart by delivering near real-time analytics.

Session zBA1822; Hadoop and z Systems; presenter Alan Fellwock

DancingDinosaur looked at Hadoop on z as early as 2011. At that point it was mainly an evolving promise. By this past fall it had gotten real, click here. In this session, Fellwock notes that various use cases are emerging that require Hadoop processing in conjunction with z Systems. In one category, the data originates on the z Systems platform itself—this could be semi-structured or unstructured data held in DB2 z/OS, VSAM, or log files in z/OS. In another category, the data originates outside z Systems—this could be social media data, email, machine data, etc.—but needs to be integrated with core data on z Systems. Security and z Systems governance become critical for use cases where data originates on z Systems. There are several z Hadoop approaches available, ranging from Hadoop on Linux to an outboard Hadoop cluster under z governance to a cloud model that integrates with SoftLayer.

Session zAD1876; Bluemix to Mainframe – Making Development Accessible in the Cloud; presenter Rosalind Radcliffe

Cloud capability and technology is changing the way enterprises go to market. DancingDinosaur interviewed Radcliffe for a posting on DevOps for the mainframe in March. DevOps is about bringing the entire organization together, including development and operations, to more efficiently deliver business value, be it on premises, off premises, or in a hybrid cloud environment. This session promises to explore how IBM DevOps solutions can transform the enterprise into a high-quality application factory by leveraging technology across platforms and exploiting both systems of record and systems of engagement applications. It will show how to easily expose your important data and customer applications to drive innovation in a nimble, responsive way, maintaining the logic and integrity of your time-tested systems.

Session zAD1620; APIs to the Enterprise: Unlocking Mainframe Assets for Mobile and Cloud Applications; presenter Asit Dan

The emergence of APIs has changed how organizations build innovative mobile and web applications, enter new markets, and integrate with cloud and third party applications. DancingDinosaur generally refers to this as the API economy and it will become only more important going forward. IBM z Systems data centers have valuable assets that support core business functions. Now they can leverage these assets by exposing them as APIs for both internal and external consumption. With the help of IBM API Management, these organizations can govern the way APIs are consumed and get detailed analytics on the success of the APIs and applications that are consuming them. This session shows how companies can expose z Systems based functions as APIs creating new business opportunities.

Session zAD1469; Java 8 on IBM z13 – An Unstoppable Force Meets an Immovable Object; presenter Elton De Souza

What happens when you combine the most powerful commercially available machine on the planet with the latest iteration of the most popular programming language on the planet? An up to 50% throughput improvement for your generic applications and up to 2x throughput improvement for your security-enabled applications – that’s what! This session covers innovation and performance of Java 8 and IBM z13. With features such as SMT, SIMD and cryptographic extensions (CPACF) exploitation, IBM z Systems is once again pushing the envelope on Java performance. Java 8 is packed with features such as lambdas and streams along with improved performance, RAS and monitoring that continues a long roadmap of innovation and integration with z Systems. Expect to hear a lot about z13 at Edge2015.

Of course, there is more at Edge2015 than just z Systems sessions. There also is free evening entertainment. This year the headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. Check her out here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM Edge 2015 as All Platforms Infrastructure Innovation Conference

April 2, 2015

Please join DancingDinosaur at IBM Edge2015 at the Venetian in Las Vegas, May 10-15. It will consist of an Executive Edge track and a Technical track. The program is crammed with hundreds of sessions.  You can find the Technical track session list here. Dare you to find even 10 sessions that don’t interest you.

Courtesy of IBM: Marie Wieck, General Manager, Middleware (click to enlarge)

This year Edge2015 merges last year’s two events, IBM Edge and Enterprise 2014, into what IBM calls the Infrastructure Innovation Conference. It is IBM’s only US event covering all IBM platforms (System Storage, IBM z Systems, IBM Power Systems, and IBM Middleware) at a single venue. It includes three Technical Universities, System Storage, z Systems, and Power Systems, for those working toward certification.

Executive Edge, which this post will look at a little more closely below, offers an exclusive two-day summit for IT executives and leaders, featuring, as IBM explains, the latest innovations and technology announcements, client success stories, and insightful presentations from IBM executives and industry thought leaders. Plus, IBM promises top-tier, one-on-one executive meetings and exclusive networking opportunities.

The IBM Systems and Technology Group (STG) top brass will be there. The IBM Systems lineup includes: Tom Rosamilia, Senior Vice President; Stephen Leonard, General Manager, Sales; Jamie M. Thomas, General Manager, IBM Storage & Software Defined Systems; Ross Mauri, General Manager, z Systems; Doug Balog, General Manager, Power Systems; and Marie Wieck, General Manager, Middleware.

And then there is the free entertainment IBM provides. The headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. If you skip the casinos you can catch both and avoid losing money in the process.

With the Executive track IBM promises to present its most innovative approaches to using IBM Systems and Middleware as a foundation for challenging new areas of information management including:

  • Cloud Infrastructure, especially hybrid clouds
  • Big Data and Analytics
  • Systems of Record
  • Systems of Engagement
  • Mobile and Security Services
  • Flash and Storage Virtualization
  • Software Defined Infrastructure

Cloud and big data/analytics have become accepted pillars of IT business value. Mobile, flash, and software-defined infrastructure are being widely embraced as the next wave of IT value delivery. And security must be a priority for everything. Also included will be dozens of client case studies.

Throughout both the Executive and Technology tracks there will be numerous sessions citing client cases and use cases. Although not the same, both show how to actually deploy technology for business value.

For example, the session (cCV0821) titled Be Hybrid or Die revolves around hybrid clouds. The session promises a clear understanding of the term hybrid and why hybrid has become the next step in IT value creation, extraction, and efficiency gains. Citing use cases, the session will show how to map your business needs to the functional building blocks of hybrid clouds and to the IBM product portfolio that addresses those needs. It concludes by examining where IBM is investing, its long-term view, and how this will increase your IT flexibility.

Speaking of cases, a session (cIT0514) titled How to Create Rock Solid Business Cases to Get IT Projects Approved looks at the subject from the standpoint of the IT person charged with answering the unavoidable ROI question. BTW, DancingDinosaur develops business cases based on various looks at cost of ownership, should you need help. They are time-consuming but necessary. Management requires an understanding of current IT costs and strengths and the expected payback to better assess new ideas and technologies. This session identifies the key elements of an effective IT cost analysis and explores how to build compelling business cases around those costs and, you hope, quantifiable benefits. Concepts discussed include complexity analysis, platform assessment, Fit for Purpose analysis, and financial case structures. Hmm, definitely one DancingDinosaur will attend.

Another session looks at the first customer experiences using SAP HANA on Power. SAP HANA is SAP’s in-memory, column-oriented RDBMS that handles both high-volume transactions and complex analytical query processing on the same platform, and does so very fast since everything is in memory. The session (lBA0464), titled SAP HANA on Power Systems: First Experiences from Customers in the Test and Evaluation Program, reports on the first experiences gathered from the pilot clients. This presentation begins with a short overview of SAP HANA in general, and then covers specific aspects of the deployment of SAP HANA on IBM Power Systems and IBM storage. You’ll hear about the advantages of SAP HANA on Power Systems (vs. x86) and discover how fast and easy it is to implement in a private cloud with full use of PowerVM capabilities.

In about six weeks DancingDinosaur will be heading to IBM Edge2015. Please join me there. You can find me hanging out wherever people gather around available power outlets to recharge mobile devices. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

IBM DevOps for the Mainframe

March 27, 2015

DevOps is not just for distributed platforms. IBM has a DevOps strategy for large enterprises (usually mainframe shops) too. Nationwide, a longtime mainframe shop, is an early adopter of DevOps and already is reporting significant gains: an 80% reduction in critical software defects and a 20% efficiency gain in its maintenance and support operations in just 18 months.

DevOps, an agile methodology, establishes a continuous feedback loop between software development and deployment/operations that speeds development and deployment while ensuring quality. This is a far cry from the waterfall development methodologies of the mainframe past.

DevOps adoption model, courtesy of IBM (click to enlarge)

The IBM DevOps initiative, announced last November (link above), taps into the collaborative capabilities of IBM’s Cloud portfolio to speed the delivery of software that drives new models of engagement and business. Software has become the rock star of IT with software-driven innovation becoming a primary strategy for creating and delivering new value to customers. A survey of 400 business and IT executives by the IBM Institute for Business Value showed businesses that are more effective at software delivery are also more profitable than their peers nearly 70% of the time. As such, IBM notes, DevOps provides a way for businesses to remain competitive, applying lean and agile principles to software development to speed the delivery of software that meets new market requirements.

Some mainframe shops, however, continue to operate from a software standpoint as if client/server computing and PCs were still the new game in town. Meanwhile the business units keep complaining about how long it takes to make software changes while long backlogs drag on the IT budget.

DevOps is about continuous software development and deployment. That means continuous business planning, continuous collaborative dev, continuous testing, continuous release and deployment, continuous monitoring, and continuous feedback and optimization in a never ending cycle. Basically, continuous everything.  And it really works, as Nationwide can attest.

But DevOps makes traditional mainframe shops nervous. Mainframe applications are rock solid, and crashes and failures are almost unheard of. How can these shops switch to DevOps without risking everything the mainframe stands for: zero failure?

The answer: mainframe DevOps that leads straight into continuous testing, not straight into deployment. The testing can and should be as rigorous and extensive as necessary to confirm that everything works as it should and that anything that could fail has failed. Only then does a change go into production.

It would be comforting to the data centers to say that DevOps only addresses systems of engagement, those pesky mobile, collaborative, and social systems that suddenly are making demands on the core mainframe production applications. But that is not correct. DevOps is about integrating systems of engagement with systems of record, the enterprise’s mainframe crown jewels. The trick is to bring together the culture, processes, and tools across the entire software delivery lifecycle, as IBM says, to span it all, mobile to mainframe, slowing down only to conduct testing as exhaustive as the enterprise requires.

Mainframe tools from the era of waterfall methodologies won’t cut it. Rational offers a set of tools, starting with Blue Agility. IBM also offers an expanded set of tools gained through acquisitions, such as UrbanCode (release automation) and GreenHat (software quality and testing solutions for the cloud and more), that provide an integrated developer experience on open cloud platforms such as Bluemix to expedite DevOps collaboration, according to IBM.

Expect push back from any attempt to introduce DevOps into a traditional mainframe development culture. Some shops have been developing systems the same way for 30 years or more. Resistance to change is normal. Plan to start gradually, implementing DevOps incrementally.

Some shops, however, may surprise you. Here the mainframe team senses they are falling behind. IBM, of course, has tools to help (see above). Some experts recommend focusing on automated testing early on; when testing is automated DevOps adoption gets easier, they say, and old school developers feel more reassured.

At IBM Edge2015, there are at least two sessions on DevOps: Light Up Performance of Your LAMP Apps and DevOps with a Power Optimized Stack; and CICS Cloud DevOps = Agility2. BTW, now is a good time to register for IBM Edge2015, while you can still get a discount. IBM Edge2015, being billed as the Infrastructure Innovation Conference, takes place May 11-15 at The Venetian in Las Vegas. DancingDinosaur will be there. Have just started poring over the list of sessions on hundreds of topics covering every IBM platform and infrastructure subject. IBM Edge2015 combines what previously had been multiple conferences into one.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

