Posts Tagged ‘Java’

Compuware Continues Mainframe GUI Tool Enhancements

July 1, 2016

Early in 2015 Compuware announced the first in what it promised would be a continuing stream of new mainframe tools and tool enhancements. Did anyone really believe them? Mainframe ISVs are not widely regarded for their fast release cycles. DancingDinosaur reported on it then here and has continued to follow its progress through a handful of new releases. This past week DancingDinosaur received yet another set of new Compuware mainframe tool announcements; for a mainframe ISV this pace is almost unheard of. IBM sometimes releases new mainframe products in intense spurts but then quickly resumes its typical languid release pace.


Screen from Compuware’s ISPW for Continuous Delivery to the Mainframe

Let’s take a look at each of these new releases. First, ISPW Deploy, an advanced mainframe release automation solution that enables large enterprises to bring continuous delivery best practices to their IBM z/OS environments. ISPW Deploy, built on the ISPW technology Compuware acquired in January 2016, facilitates faster and more reliable mainframe software deployment. Specifically, according to Compuware, it helps in three ways, through:

  1. Automation that rapidly moves code through the deployment process, including test staging and approvals, while also providing greatly simplified full or partial rollbacks.
  2. Visualization that enables DevOps managers to quickly pinpoint deployment issues in order to both solve immediate rollout problems and address persistent bottlenecks in code promotion.
  3. Integrations with both third-party solutions and Compuware’s own industry-leading mainframe toolkit that allow IT to build complete SCM-to-production DevOps pipelines and to quickly launch associated remediation support tools if and when deployment issues occur.

Compuware is further empowering enterprises to achieve mainframe agility through integrations with other vendors’ tools. For instance, its ISPW and XebiaLabs’ cross-platform continuous delivery solutions enable IT organizations to orchestrate and visualize their mainframe DevOps processes in a common manner with their broader cross-platform DevOps automation.

The second announcement focused on XebiaLabs, as noted above. The idea here is to deliver cross-platform continuous releases for the mainframe. As Compuware explained, enterprises using XebiaLabs’ solution suite and Compuware ISPW can now automate and monitor all phases of mainframe DevOps within the same continuous delivery management environment they use for their distributed, web, and cloud platforms. This automation and monitoring includes test/QA, pre-copy staging, and code promotion. The goal, as with all DevOps, is to speed digital agility for mainframe or distributed systems or both.

The third announcement concerned a partnership between Compuware and ConicIT that aims to help a new generation of IT ops staff proactively resolve emerging mainframe issues before they impact application service levels. It does so by integrating ConicIT’s predictive mainframe analytics with Compuware’s Strobe, which provides visually intuitive troubleshooting intelligence. Together, the two companies promise to enable even IT staff with relatively little hands-on mainframe experience to quickly identify and resolve a wide range of application performance problems.

The key to doing this is the adoption of intuitive GUIs. Compuware started down this path with its Topaz tools and has been continuing along it for two years. Compuware’s CEO, Chris O’Malley, has been harping on these themes almost since he first arrived there.

Compuware customers apparently have gotten the message. As reported: “Market pressures are making it essential for us to deliver quality products and services to our clients more frequently, and the mainframe plays a critical role in that delivery,” according to Craig Danielson, Assistant Vice President for Commerce Bank. “We leverage ISPW to help in this capacity and its new capabilities will provide us the automation and visibility of our software deployment process to help us continuously improve our internal operations and services.” (note: DancingDinosaur did not validate this customer statement.)

Companies will need all the help modern mainframe tools can deliver. Mainframe data centers are facing unprecedented challenges that require unusual speed and agility. In short, they need DevOps fast. And they will have to respond with an increasingly aging core of experienced mainframe staff supplemented by millennials who have to be coaxed and cajoled onto the mainframe with easy graphical tools. If mainframe data centers can’t respond to these challenges—not just cloud, mobile, Linux, and analytics, but also IoT, blockchain, cognitive computing, and whatever else comes along next—how are they going to cope? Already their users, the line of business managers, are turning to shadow IT out of frustration with the slow response from mainframe data centers. And you know what comes next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do—real-time analytics, IoT, blockchain, and more. This is part of the digital transformation that is blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.


EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformations by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it will enable the understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of changes overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Managers at z data centers often shy away from modifying aging business-critical applications for fear of breaking something; if it ain’t broke, don’t fix it is often the mantra. They also ration the use of their few remaining experienced z veterans, the people with the domain expertise and deep knowledge of what often turns out to be quite esoteric code. Poorly documented mainframe code aggravates the problem further. The way to mitigate this risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying the mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices. This may require rewriting parts of them in platform-agnostic languages such as Java to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.
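
For illustration only, here is a minimal sketch of what such an API wrapper might look like in Java using JAX-RS. The resource path, the ACCTINQ program name, and the CicsGateway helper that stands in for the actual backend call (for example, through CICS Transaction Gateway or z/OS Connect) are all hypothetical and not part of the EZSource or IBM announcements.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Stub standing in for the real backend call; in a real system this would invoke
// the CICS program (e.g., via CICS Transaction Gateway) and map its output to JSON.
class CicsGateway {
    String invoke(String program, String key) {
        return "{\"program\":\"" + program + "\",\"key\":\"" + key + "\"}";
    }
}

// Hypothetical JAX-RS resource exposing an existing CICS inquiry routine as a REST API.
@Path("/accounts")
public class AccountResource {

    private final CicsGateway gateway = new CicsGateway();

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getAccount(@PathParam("id") String accountId) {
        // Delegate to the legacy CICS routine and return its result as JSON.
        return gateway.invoke("ACCTINQ", accountId);
    }
}
```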

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC MainView for Java Manages Critical Digital Business

May 16, 2016

A large European financial services firm increasingly handles critical transaction functions with Java running through CICS and WebSphere. As the firm looks forward, its managers see Java playing a bigger and more critical role in its core business as it shifts more of its business functionality to agile programming in Java. The firm is not thinking about ever abandoning its workhorse COBOL code, but all new work is being directed to Java.


With that in mind, BMC last week announced MainView for Java Environments, part of BMC’s MainView integrated systems management suite of tools that provides insight into how Java is consuming resources and affecting application performance on the z System.  It is no surprise, therefore, that the firm became an early beta user for MainView for Java Environments.

According to a recent BMC survey, 93% of mainframe organizations said Java usage is growing or steady, and Java is the language of choice for writing new or rewriting existing mainframe applications. BMC MainView for Java Environments provides insight into Java resource usage and how it impacts other workloads and applications. For example, it automatically discovers all the Java Virtual Machines (JVMs) across z/OS. That alone can help in finding and fixing performance problems fast.

Java is the key to both performance and cost savings by running on zIIP assist processors. Java workloads, however, can affect performance and availability on the mainframe, as they consume system resources without regard for the needs of other applications or services, which is another reason why the zIIP is essential. Also, an integrated management approach gives IT operations a holistic view of the environment to quickly and easily discover Java Virtual Machines (JVMs) and to manage the effect of their resource consumption on application performance.
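
As a rough illustration of the kind of JVM resource metrics management tools collect, the standard java.lang.management API exposes heap and garbage collection figures from inside any JVM. The sketch below simply prints them for the local JVM; it is not how MainView itself is implemented.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal sketch: report heap usage and GC activity for the current JVM using the
// standard management beans. Monitoring products gather similar metrics across
// every JVM on the system, not just the local one.
public class JvmSnapshot {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB of %d MB committed%n",
                heap.getUsed() / (1024 * 1024), heap.getCommitted() / (1024 * 1024));

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("GC %s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```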

Java was the first object oriented programming language DancingDinosaur tried. Never got good enough to try it on real production work, but here’s what made it appealing: it is fully object oriented, produces truly portable write-once, run-anywhere code (mainly because it compiles to Java virtual machine bytecode), and has automatic garbage collection. For a run-of-the-mill programmer, Java was a joy compared to C or, ugh, COBOL. Some of the new languages becoming popular today, the ones driving mobile, cloud, and analytics apps, look even easier, but DancingDinosaur would feel too embarrassed to sit in a programming class with twenty-somethings the age of his daughters.

Java usage today, according to the BMC survey, is growing or steady, while Java has become the language of choice for writing new or rewriting existing mainframe applications. The only drawback may be that Java workloads can affect performance and resource availability on the mainframe as JVMs consume system resources oblivious to the needs of other applications or services and to the cost of uncontrolled resource consumption, which is what unrestrained Java produces. An integrated management approach that allows a holistic view of the environment can quickly and easily discover JVMs, constrain the effect of their resource consumption on application performance, and offset that drawback.

Explained Tim Grieser, program vice president at IDC’s Enterprise System Management Software: “Since Java manages its own resources it can consume excessive amounts of processor time and memory resources leading to performance or availability problems if not proactively managed.” The key being proactively managed. BMC’s MainView for Java Environments promises exactly that kind of proactive management by monitoring z/OS Java runtime environments and providing a consolidated view of all resources being consumed. This will enable system admins and operators to identify and manage performance issues before they impact end users.

“Java on the mainframe is being used to develop and deploy new applications faster and more economically to meet dynamically changing digital business needs and to take advantage of widely available programming skills,” IDC’s Grieser continued. Something like BMC’s MainView for Java Environments can be used to constrain Java. IBM’s Omegamon can fulfill a similar function.

According to the financial firm’s beta test manager, BMC’s MainView for Java Environments can effectively unlock Java’s potential on the mainframe, which is vital in a changing application and systems environment, as part of an integrated performance management solution that discovers and monitors JVMs. As such, it provides a single graphical console that enables you to quickly understand a Java application’s impact on resources and its effect on the performance of other applications and transactions. The solution promises to improve application performance and ensure availability while reducing Mean Time to Repair (MTTR) and lowering Monthly License Charges (MLC) by monitoring zIIP offloading, which is the key to both performance and cost management.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and occasional wanna-be programmer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

State of z System CICS in the Modern Enterprise

March 25, 2016

You should be very familiar with the figures describing the continued strength of mainframe computing in the enterprise today. Seventy percent of enterprise data resides on a mainframe, 71 percent of all Fortune 500 companies run their core businesses on the mainframe, and 92 of the top 100 banks rely on the mainframe to provide at-your-fingertips banking services to their customers (many via mobile). CICS, according to IBM, handles 1.1 million transactions every second, every day. By comparison, Google handles a mere 59,421 searches every second.


CICS at IBM Interconnect 2015

H&W, a top mainframe ISV, recently released its State of CICS in the Modern Enterprise study. Find a copy of the study here. For starters, it found that nearly two-thirds of respondents run 51-100% of their business-critical applications online through CICS. Within government, 32% of respondents reported running 75-100% of business-critical applications through CICS.

A different study suggests that CICS applications handle more than 30 billion transactions per day and process more than $1 trillion worth of business each week. Mainframe data also still drives information systems worldwide. Approximately 60 percent of organizations responding to a 2013 Arcati survey said they manage 40 to 100 percent of their enterprise data on the mainframe.

Integrating legacy systems is a strategy mainframe sites continue to adopt. In fact, 74 percent of respondents in that survey said specifically they are web-enabling CICS subsystems. However, as organizations pursue this strategy, challenges can include unlocking the data, keeping the applications and data available to users, and maintaining data integrity in an efficient and cost-effective manner. Nothing new for data center managers about this.
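
As a hedged sketch of the client side of web-enabling, once a CICS program has been exposed as a JSON web service (for example, through CICS web support or z/OS Connect), a distributed or mobile backend can reach it with nothing more exotic than standard Java HTTP classes. The host name and URI below are invented for illustration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Call a hypothetical JSON service fronting a CICS transaction and print the reply.
public class CicsWebClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://mainframe.example.com:9443/inquiry/balance/12345"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            System.out.println("CICS replied: " + body);
        }
    }
}
```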

According to the H&W study, online CICS usage has gone up in the last three years: the share of respondents running more than half of their business-critical applications through CICS rose from 54% to 62% in 2015. Hope people will finally stop talking about the mainframe heading toward extinction.

CICS also has carved out a place on the web and with mobile. Sixty-five percent of respondents say at least some of their business-critical applications are available via PC, phone, tablet, and web-based interfaces, while 11% more reported plans to mobile- and web-enable their mainframe apps in the future. Thirteen percent reported no plans to do so. Government sector respondents reported that they were significantly more likely not to make their applications available for online access; so much for open government and transparency.

CICS availability raised no real concern, although a few respondents were concerned with performance. Based on the 2012 study results, some predicted that companies would be moving away from CICS by now. Those predictions, apparently, have not come to pass, at least not yet.

In fact, as far as the future of CICS goes, the technology seems to be facing a remarkably stable outlook for the next 3-5 years. The largest number of respondents, 37%, expected the number of CICS applications to remain the same in that period, while 34% said it would be decreasing. More encouragingly, 27% of respondents planned to increase the number of CICS applications accessible online. In the financial services segment, 38% planned to increase the number of online CICS applications while only 10% expected to decrease the number of online applications. Given the demands by banking customers for mobile apps, the increase in the number of CICS applications makes perfect sense.

The researchers concluded that CICS continues to play an important role for the majority of mainframe shops surveyed and an increasingly important role for a significant chunk of them.  The respondents also reported that, in general, they were satisfied with CICS performance even in the face of increasingly complex online workloads.

Mainframe CICS may see even more action going forward depending on what companies do with Internet of Things. As with mobile traffic, companies may turn to CICS to handle critical aspects of backend IoT activity, which has the potential to become quite large.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Exploiting the IBM z13 for Maximum Price/Performance Advantage

February 4, 2016

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have a significant impact not only on the price/performance you experience but also on your success at achieving the business outcomes you are expecting.


IBM System z13

This really becomes a software configuration challenge. With approximately 600 internal processors, IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISV.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report a 15% reduction of CPU time on average, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers, with applications optimized for them, is the best way to ensure your workloads are achieving top performance in the most cost-effective way.

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on the z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on the zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will provide mobile applications easy access to data and the processing they need from business critical production applications written in COBOL.
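
To make that concrete, here is a minimal sketch of a mobile or web tier consuming the kind of JSON a COBOL V5.2 service might return. The field names, sample payload, and the use of the Jackson library are assumptions for illustration only, not part of IBM's announcement.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Parse a JSON payload such as a COBOL backend service might produce.
// The Balance class and its fields are hypothetical.
public class BalanceClient {

    public static class Balance {
        public String account;
        public double amount;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"account\":\"12345\",\"amount\":250.75}"; // sample payload
        Balance b = new ObjectMapper().readValue(json, Balance.class);
        System.out.println("Account " + b.account + " balance: " + b.amount);
    }
}
```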

The z13 and its z sister, the latest LinuxONE dedicated Linux models, were designed and optimized from the start for cloud, mobile, and analytics. They were intended to run alongside traditional mainframe workloads with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP.  ISVs like Compuware should be able to help with this.  In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity.  Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Making the IBM Mainframe Agile and Swift

December 7, 2015

Do you remember what the mainframe was like when you started on the mainframe career path? Today IBM blurs distinctions between the mainframe and distributed environments through Linux and Java as well as cloud and mobile delivery models. Heck, you can run Windows natively on x86 cards in a zBX cabinet managed from a console on the z itself. Maybe it’s not the most efficient way to do it, and expect better ways to come from IBM, but it is doable now.

DevOps in the SDLC, Courtesy Seasoft

More than just interact, the z and distributed environment must productively and seamlessly integrate and interoperate to produce a streamlined development, test, and deployment process. Compounding the challenge: they must do it fast. Organizations can no longer wait for six-month or nine-month release cycles to introduce new capabilities. If capabilities cannot be introduced in just a few weeks max, opportunities and revenue can be lost.  Agile and batch teams have no choice; they must work together.

This calls for data center adoption of DevOps, a combination of development, testing, and operations. Already IBM has instituted DevOps on the z System. The idea of bringing agile and batch together—it almost sounds like an oxymoron or the punchline from a bad computer systems joke—no longer is farfetched. Welcome to the world of hybrid computing where what was once considered disparate and incompatible systems are being brought together, often on the same platform.

The latest generations of the mainframes have been fully hybrid-capable platforms, starting with the z10. They are capable of running mixed workloads concurrently, some of which previously belonged in the distributed platform world only. Today, a mainframe shop with the latest z13 can run traditional z/OS COBOL workloads right alongside Java and Linux workloads. Those with a zBX extension cabinet can even run Windows workloads too under the same unified mainframe management console.

If that sounds a little too kludgy for you, just jump into the cloud. From Bluemix in the cloud you can get to DevOps and find just about everything you need already there, including IBM’s StrongLoop acquisition for API management and microservices.

So now the idea of bringing batch and agile computing together on the mainframe platform doesn’t seem so farfetched.  And it won’t stop there. IBM has been doing its enterprise thing with Apple for about a year. Expect more coming.

That said, an agile mainframe/distributed DevOps environment will become increasingly necessary. How often do you release software? Previously, if an IT organization released new software every year or even every 18 months, customers were satisfied. Not anymore. Today you can’t wait six months before the organization risks falling behind. LOB managers and customers won’t wait. There are too many competitors waiting for any chance to seize an advantage. Slow system refreshes and software updates just play into these competitors’ hands.

DevOps also is essential to the organization’s mobile strategy. Companies in every industry segment are deploying new mobile apps as fast as they can and then almost immediately updating them. For many of these mobile apps the mainframe is the back end, if not the middleware too. Each mobile request for information or to make a purchase or to schedule something triggers numerous back end processes that quickly make their way to the mainframe. It had gotten to the point where IBM had to discount mobile processing on the z or it would hinder mobile growth. DancingDinosaur covered it here.

Helping to drive mobile on the z, of course, is IBM’s relationship with Apple. Over the past year the two companies have been bringing out combined enterprise-mobile applications. Now Apple just announced that it is making its popular programming language, Swift, open source. It shouldn’t take much to get it onto Bluemix. Back in Sept. 2014  IBM announced it already had a preliminary version working through Bluemix.

Although Swift is known mainly for mobile client development, today it is described as combining the performance and efficiency of compiled languages with the simplicity and interactivity of popular scripting languages. Apple’s Swift strategy seems to come right out of IBM’s recent playbook of embracing open source communities. You can get started at the Swift website, here.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM LinuxONE Can Uberize x86-Based IT

November 13, 2015

Uberization—industry disruption caused by an unlikely competitor—emerged as a dominant concern of C-suite executives in a recently announced IBM Institute for Business Value study. According to the study, the percentage of C-suite leaders who expect to contend with competition from outside their industry increased from 43% in 2013 to 54% today.

IBM C-suite Study: competition data

These competitors, future Ubers, aren’t just resulting from new permutations of old industries; they also are coming from digital invaders with totally different business models. Consider IBM LinuxONE, a powerful open source Linux z13 mainframe supported by two open communities, the Open Mainframe Project and the Linux Foundation. For the typical mass market Linux shop, usually an x86-based data center, LinuxONE can deliver a standard Linux distribution with both KVM and Ubuntu as part of a new pricing model that offers a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores.

Talk about disruptive; plus it brings the scalability, reliability, high performance, and rock-solid security of the latest mainframe. LinuxONE can handle 8,000 virtual servers in a single system and tens of thousands of containers. Try doing that with an x86 machine, or even a dozen of them.

Customers of traditional taxi companies or guests at conventional hotels have had to rethink their transportation or accommodation options in the face of Uberization and the arrival of other disruptive alternatives like Airbnb. So too, x86 platform shops will have to rethink their technology platform options. On either a per-workload basis or a total cost of ownership (TCO) basis, the mainframe has been cost competitive for years. Now with the Uberization of the Linux platform by LinuxONE and IBM’s latest pricing options for it, the time to rethink an x86 platform strategy clearly has arrived. Many long-held misconceptions about the mainframe will have to be dropped or, at least, updated.

The biggest risk to businesses used to come from a new rival with a better or cheaper offering, making it relatively simple to alter strategies. Today, entrenched players are being threatened by new entrants with completely different business models, as well as smaller, more agile players unencumbered by legacy infrastructure. Except for the part of being smaller, IBM’s LinuxONE definitely meets the criteria as a threatening disruptive entrant in the Linux platform space.

IBM even is bringing new business models to the effort, including hybrid cloud and a services-driven approach as well as its new pricing. How about renting a LinuxONE mainframe short term? You can with one of IBM’s new pricing options: just rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace. Try that with enterprise-class x86 machines.

The introduction of support for both KVM and Ubuntu on the z platform opens even more possibilities. With the LinuxONE announcement Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise to the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make LinuxONE very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs and containers. And don’t forget a broader range of tools: an expanded set of open source and industry software including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef, and Docker.

Deon Newman, VP of Marketing for IBM z Systems, can recite the LinuxONE scalability stats off the top of his head: the entry-level, single-frame LinuxONE server, named Rockhopper, starts at 80 virtual Linux machines and hundreds and hundreds of containers, while the high-end double-frame server, Emperor, features six IFLs that support up to 350 virtual machines and can scale all the way to 8,000 virtual machines. On the Emperor server you can literally have hundreds of thousands of containers on a single platform. Newman deliberately emphasizes that LinuxONE machines are servers. x86 server users take note: LinuxONE definitely is not your father’s mainframe.

In the latest C-suite study all C-suite executives—regardless of role—identified for the first time technology as the most important external force impacting their enterprise. These executives believe cloud computing, mobile solutions, the Internet of Things, and cognitive computing are the technologies most likely to revolutionize or Uberize their business.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Continues to Bolster Bluemix PaaS

September 10, 2015

In the last 10 years the industry, led by IBM, has gotten remarkably better at enabling nearly coding-free development. This is important given how critical app development has become. Today it is impossible to launch any product without sufficient app dev support. At a minimum you need a mobile app and maybe a few micro-services. To that end, IBM has spent the summer, starting in May, introducing a series of Bluemix enhancements. Find them here and here and here and here. DancingDinosaur, at best a mediocre programmer, hasn’t written any code for decades, but in this new coding environment he has started to get the urge to participate in a hack-a-thon. Doesn’t that (below) look like fun?


IBM’s Bluemix Garage in Toronto (click to enlarge)

The essential role of software today cannot be overstated. Even companies introducing non-technical products have to support them with apps and digital services that must be continually refreshed. When IoT really starts to ramp up, bits and pieces of code will be needed everywhere to handle the disparate pieces, get everything to interoperate, collect the data, and then use or analyze it and initiate the next action.

Bluemix, a cloud-based PaaS product, comes as close to an all-in-one Swiss army knife development and deployment platform for today’s kind of applications as you will find. Having only played around with a demo, DancingDinosaur can say it appears about as intuitive as an enterprise-class product can get.

The most recent of IBM’s summer Bluemix announcements promises more flexibility to integrate Java-based resources into Bluemix. It offers a set of services to more seamlessly integrate Java-based resources into cloud-based applications. For instance, according to IBM, it is now possible to test and run applications in Bluemix with Java 8. Additionally, among other improvements, the jsp-2.3, el-3.0, and jdbc-4.1 Liberty features, previously in beta, are now available as production-ready. Plus, Eclipse Tools for Bluemix now includes JavaScript Debug, support for Node.js applications, Java 8 Liberty for Java integration, support for the latest Eclipse Mars release, and an improved capability to trust self-signed certificates. Incremental publish support for JEE applications also has been expanded to handle web fragment projects.
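
For readers who have not yet touched Java 8, the practical payoff of being able to run it on Bluemix is language features like lambdas, streams, and method references. The trivial, self-contained snippet below shows the style; it is not tied to Bluemix or Liberty in any way.

```java
import java.util.Arrays;
import java.util.List;

// Java 8 streams and lambdas: the kind of code that can now be tested and run on Bluemix.
public class Java8Taste {
    public static void main(String[] args) {
        List<String> platforms = Arrays.asList("z/OS", "Linux", "Bluemix", "CICS");

        long count = platforms.stream()
                .filter(p -> p.length() <= 5)   // lambda predicate
                .count();

        System.out.println(count + " short platform names");
        platforms.forEach(System.out::println); // method reference
    }
}
```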

In mid-August IBM announced the use of streaming analytics and data warehouse services on Bluemix. This should enable developers to expand the capabilities of their applications to give users a more robust cloud experience by facilitating the integration of data analytics and visualization seamlessly in their apps. Specifically, according to IBM, a new streaming analytics capability was put into open beta; the service provides the capability to instantaneously analyze data while scaling to thousands of sources on the cloud. IBM also added MPP (massively parallel processing) capabilities to enable faster query processing and overall scalability. The announcement also introduces built-in Netezza analytics libraries integrated with Watson Analytics, and more.

Earlier in August, IBM announced the Bluemix Garage opening in Toronto (pictured above). Toronto is just the latest in a series of coding workspaces IBM intends to open worldwide. Next up appear to be Nice, France, and Melbourne, Australia, later this year. According to IBM, Bluemix Garages create a bridge between the scale of enterprises and the culture of startups by establishing physical collaboration spaces housed in the heart of thriving entrepreneurial communities around the world. Toronto marks the third Bluemix Garage. The Toronto Bluemix Garage is located at the DMZ at Ryerson University, described as the top-ranked university-based incubator in Canada. Experts there will mentor the rising numbers of developers and startups in the region to create the next generation of cloud apps and services using IBM’s Bluemix.

Members of the Toronto Bluemix Garage include Tangerine, a bank based in Canada that is using Bluemix to implement its mobile strategy. Through the IBM Mobile Quality Assurance for Bluemix service, Tangerine gathers customer feedback and actionable insight on its mobile banking app, effectively streamlining its implementation and development processes.

Finally, back in May IBM introduced new Bluemix Services to help developers create analytics-driven cloud applications. Bluemix, according to IBM, is now the largest Cloud Foundry deployment in the world. And the services the company announced promise to make it easier for developers to create cloud applications for mobile, IoT, supply chain analytics, and intelligent infrastructure solutions. The new capabilities will be added to over 100 services already available in the Bluemix catalog.

At the May announcement, IBM reported bringing more of its own technology into Bluemix, including:

  • Bluemix API Management, which allows developers to rapidly create, deploy, and share large-scale APIs and provides a simple and consumable way of controlling critical APIs not possible with simpler connector services
  • New mobile capabilities available on Bluemix for the IBM MobileFirst Platform, which provide the ability to develop location-based mobile apps that connect insights from digital engagement and physical presence

It also announced a handful of ecosystem and third-party services being added into Bluemix, including several that will facilitate working with .NET capabilities. In short, it will enable Bluemix developers to take advantage of Microsoft development approaches, which should make it easier to integrate multiple mixed-platform cloud workloads.

Finally, as a surprise note at the end of the May announcement IBM added that the company’s total cloud revenue—covering public, private and hybrid engagements—was $7.7 billion over the previous 12 months as of the end of March 2015, growing more than 60% in first quarter 2015.  Hope you’ve noticed that IBM is serious about putting its efforts into the cloud and openness. And it’s starting to pay off.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM LinuxONE and Open Mainframe Project Expand the z System

August 20, 2015

Meet the new IBM z System, called LinuxONE Emperor (named after the emperor penguin). It is a z13 running only Linux. Check out the full announcement here.


Courtesy of IBM, LinuxONE Emperor, the newest z System

DancingDinosaur is excited by several aspects of this announcement:  IBM is establishing, in conjunction with the Linux Foundation, an Open Mainframe Project; the company is breaking with its traditional mainframe pricing model; it also is putting KVM and Ubuntu on the machine; and it is offering a smorgasbord of app-dev options, including some of the sexiest in the industry today. DancingDinosaur never believed it would refer to a mainframe as sexy (must be time to retire).

Along with LinuxONE Emperor IBM announced an entry-level dedicated Linux machine, the LinuxONE Rockhopper. (BTW, notice the new playfulness in IBM’s product naming.) Rockhopper appears to be very similar to what IBM used to call a Business Class z, although IBM has stepped away from that designation. The closest you may get to a z13 business class machine may be the LinuxONE Rockhopper. Rockhopper, according to IBM, is designed for clients and emerging markets seeking the speed, security, and availability of the mainframe but in a smaller package.

The biggest long term potential impact from the announcement may come out of the Open Mainframe Project. Like many of IBM’s community project initiatives, IBM is starting by seeding the open community with z code, in effect creating the beginning of an open z System machine.  IBM describes this as the largest single contribution of mainframe code from IBM to the open source community. A key part of the mainframe code contributions will be the z’s IT predictive analytics that constantly monitor for unusual system behavior and help prevent issues from turning into failures. In effect, IBM is handing over zAware to the open source community. It had already announced intentions to port zAware to Linux on z early this year so it might as well make it fully open. The code, notes IBM, can be used by developers to build similar sense-and-respond resiliency capabilities for other systems.

The Open Mainframe Project, being formed with the Linux Foundation, will involve a collaboration of nearly a dozen organizations across academia, government, and corporate sectors to advance development and adoption of Linux on the mainframe. It appears that most of the big mainframe ISVs have already signed on. DancingDinosaur, however, expressed concern that this approach brings the possibility of branching the underlying functionality between z and Linux versions. IBM insists that won’t happen since the innovations would be implemented at the software level, safely insulated from the hardware. And furthermore, should there emerge an innovation that makes sense for the z System, maybe some innovation around the zAware capabilities, the company is prepared to bring it back to the core z.

The newly announced pricing should also present an interesting opportunity for shops running Linux on z. As IBM notes, new financing models for the LinuxONE portfolio provide flexibility in pricing and resources that allow enterprises to pay for what they use and scale up quickly when their business grows. Specifically, for IBM hardware and software, the company is offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. In that case you can order what you need and decrease licenses or cancel on 30 days’ notice. Or, you can rent a LinuxONE machine monthly with no upfront payment. At the end of the 36-month rental (you can return the hardware after one year) you choose to return, buy, or replace. Having spent hours attending mainframe pricing sessions at numerous IBM conferences, DancingDinosaur finds this refreshingly straightforward. IBM has not yet provided any prices to analysts, so whether this actually is a bargain remains to be seen. But at least you have pricing option flexibility you never had before.

The introduction of support for both KVM and Ubuntu on the z platform opens intriguing possibilities.  Full disclosure: DancingDinosaur was an early Fedora adopter because he could get it to run on a memory-challenged antiquated laptop. With the LinuxONE announcement Ubuntu has been elevated to a fully z-supported Linux distribution. Together IBM and Canonical are bringing a distribution of Linux incorporating Ubuntu’s scale-out and cloud expertise on the IBM z Systems platform, further expanding the reach of both. Ubuntu combined with KVM should make either LinuxONE machine very attractive for OpenStack-based hybrid cloud computing that may involve thousands of VMs. Depending on how IBM ultimately prices things, this could turn into an unexpected bargain for Linux on z data centers that want to save money by consolidating x86 Linux servers, thereby reducing the data center footprint and cutting energy costs.  LinuxONE Emperor can handle 8000 virtual servers in a single system, tens of thousands of containers.

Finally, LinuxONE can run the sexiest app-dev tools using any of the hottest open technologies, specifically:

  • Distributions: Red Hat, SuSE and Ubuntu
  • Hypervisors: PR/SM, z/VM, and KVM
  • Languages: Python, Perl, Ruby, Rails, Erlang, Java, Node.js
  • Management: WAVE, IBM Cloud Manager, UrbanCode, OpenStack, Docker, Chef, Puppet, VMware vRealize Automation
  • Database: Oracle, DB2LUW, MariaDB, MongoDB, PostgreSQL
  • Analytics: Hadoop, BigInsights, DB2 BLU, and Spark

And run the results however you want: single platform, multi-platform, on-prem and off-prem, or multiple mixed cloud environments with a common toolset. Could a combination of LinuxONE alongside a conventional z13 be the mainframe data center you really want going forward?

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Continues Open Source Commitment with Apache Spark

June 18, 2015

If anyone believes IBM’s commitment to open source is a passing fad, forget it. IBM has invested billions in Linux, in open POWER through the OpenPOWER Foundation, and more. Its latest move is the announcement of a major commitment to Apache Spark, a fast, general-purpose open source cluster computing system for big data.


Courtesy of IBM: developers work with Spark at Galvanize Hackathon

As IBM sees it, Spark brings essential advances to large-scale data processing. Specifically, it dramatically improves the performance of data-dependent apps and is expected to play a big role in the Internet of Things (IoT). In addition, it radically simplifies the process of developing intelligent apps, which are fueled by data. It does so by providing high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools, including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
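
Since the Java API gets a mention, here is a minimal, hedged sketch of what a Spark job written in Java looks like; the input file name is invented and the snippet assumes the Spark core dependency is on the classpath.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

// Tiny Spark job in Java: load a text file and count the lines flagged as errors.
public class SparkErrorCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("ErrorCount").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> lines = sc.textFile("transactions.log"); // hypothetical input file
        long errors = lines.filter(line -> line.contains("ERROR")).count();

        System.out.println("Error records: " + errors);
        sc.stop();
    }
}
```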

IBM is contributing its breakthrough IBM SystemML machine learning technology to the Spark open source ecosystem. Spark brings essential advances to large-scale data processing, such as improvements in the performance of data-dependent apps. It also radically simplifies the process of developing intelligent apps, which are fueled by data. But maybe the biggest advantage is that it can handle data coming from multiple, disparate sources.

What IBM likes in Spark is that it’s agile, fast, and easy to use. It also likes that Spark is open source, which ensures it is improved continuously by a worldwide community. Those are also some of the main reasons mainframe and Power Systems data centers should pay attention to Spark. Spark will make it easier to connect applications to data residing in your data center. If you haven’t yet noticed an uptick in mobile transactions coming into your data center, they will be coming, and they benefit from Spark. And if you look out just a year or two, expect to see IoT applications adding to and needing to combine all sorts of data, much of it ending up on the mainframe or Power Systems in one form or another. So make sure Spark is on your radar screen.

Over the course of the next few months, IBM scientists and engineers will work with the Apache Spark open community to accelerate access to advanced machine learning capabilities and help drive speed-to-innovation in the development of smart business apps. By contributing SystemML, IBM hopes to help data scientists iterate faster to address the changing needs of business and to enable a growing ecosystem of app developers who will apply deep intelligence to everything.

To ensure that happens, IBM will commit more than 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide, and open a Spark Technology Center in San Francisco for the Data Science and Developer community to foster design-led innovation in intelligent applications. IBM also aims to educate more than 1 million data scientists and data engineers on Spark through extensive partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and Big Data University MOOC (Massive Open Online Course).

Of course, Spark won’t be the last of the tools to expedite the latest app dev. With IoT just beginning to gain widespread interest, expect a flood of tools to expedite the development of data-intensive IoT applications and more tools to facilitate connecting all the coming connected devices, estimated to number in the tens of billions within a few years.

DancingDinosaur applauds IBM’s decade-plus commitment to open source and its willingness to put real money and real code behind it. That means the IBM z System mainframe, the POWER platform, Linux, and the rest will be around for some time. That’s good; DancingDinosaur is not quite ready to retire.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

