Posts Tagged ‘millenials’

LinuxONE is a Bargain

September 21, 2018

LinuxONE may be the best bargain you’ll find this season, and you don’t have to wait for Santa to bring it down your chimney. Think instead about transformation and digital disruption. Do you want to be in business in three years? That is the basic question facing every organization today, writes Kat Lind, Chief Systems Engineer at Solitaire Interglobal Ltd. and author of the white paper Scaling the Digital Mountain.

Then there is the Robert Frances Group’s Top 10 Reasons to Choose LinuxONE. DancingDinosaur won’t rehash all ten. Instead, let’s pick a few, starting with the first, Least Risk Solution, which pretty much encapsulates the LinuxONE story: it reduces business, compliance, financial, operations, and project risks. Its availability, disaster recovery, scalability, and security features minimize business and financial exposure. In addition to pervasive encryption, it offers a range of security capabilities that are often overlooked or downplayed, including logical partition (LPAR) isolation and secure containers.

LinuxONE is a z dedicated to Linux, unlike the z13 or z14 z/OS machines, which also run Linux but not as easily or efficiently. As the Robert Frances Group noted, it also handles Java, Python, and other languages, along with tools like Hadoop, Docker and other containers, Chef, Puppet, KVM, multiple Linux distributions, open source software, and more. It can be deployed in a traditional legacy environment or used as the platform of choice for cloud hosting, and it supports DevOps tools similar to those on x86 servers.

And LinuxONE delivers world-class performance. As the Robert Frances Group puts it, LinuxONE can drive processor utilization to virtually 100% without latency impact, performance instability, or performance penalties. In addition, LinuxONE uses the fastest commercially available processors, running at 5.2GHz; offloads I/O to separate processors, freeing the main processors to concentrate on application workloads; and holds much more data in memory, up to 32TB.

In addition, you can run thousands of virtual machine instances on a single LinuxONE server. The cost benefit of this is astounding compared to managing the equivalent number of x86 servers. The added labor cost alone would break your budget.

In terms of security, LinuxONE is a no-brainer. Adds Lind from Solitaire: failure in this area erodes an organization’s reputation faster than any other factor. The impact of breaches on customer confidence and follow-on sales has been tracked, and an analysis of that data shows that after a significant incursion, the average customer fall-off exceeds 41%, accompanied by a long-running drop in revenues. Recovery involves a significant outlay of service, equipment, and personnel expenses to reestablish a trusted position, as much as 18.6x what it cost to acquire the customer initially. And Lind doesn’t even begin to mention the impact when the compliance regulators and lawyers start piling on. Anything but the most minor security breach will put you out of business faster than the three years Lind asked about at the top of this piece.
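To put Lind’s figures in perspective, here is a toy back-of-the-envelope calculation. Only the 41% fall-off and the 18.6x reacquisition multiplier come from the white paper; the customer count and acquisition cost are invented for illustration:

```python
# Toy breach-cost sketch using the two figures Lind cites:
# 41% average customer fall-off and up to 18.6x reacquisition cost.
# The customer base and per-customer acquisition cost are hypothetical.
customers = 100_000
acquisition_cost = 50.0          # invented: cost to win one customer
fall_off_rate = 0.41             # Lind: average fall-off exceeds 41%
reacquire_multiplier = 18.6      # Lind: up to 18.6x initial acquisition cost

lost = int(customers * fall_off_rate)
rewin_cost = lost * acquisition_cost * reacquire_multiplier

print(f"Customers lost: {lost:,}")
print(f"Cost to win them back: ${rewin_cost:,.0f}")
```

Even with these modest invented numbers, winning back the lost customers runs to roughly $38 million, which is why Lind treats a breach as an existential event rather than an IT expense.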

But all the above is just conventional data center thinking. DancingDinosaur has put his children through college doing TCO studies around these issues. Lind now turns to something mainframe data centers are just beginning to think about: digital disruption. The strategy and challenges of successfully navigating the chaos of cyberspace translate into a need for information on both business and security and how they interact.

Digital business and security go hand in hand, so any analysis has to include extensive correlation between the two. Using data from volumes of customer experience responses, IT operational details, business performance, and security, Solitaire examined the positioning of IBM LinuxONE in the digital business market. The results of that examination boil down to three areas: security, agility, and cost. These areas incorporate the primary objectives that organizations operating in cyberspace today regard as most relevant. And guess which platform wins any comparative analysis, Lind concludes: LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.
Compuware Continues Mainframe Software Renaissance

January 19, 2017

While IBM focuses on its strategic imperatives, especially cognitive computing (which is doing quite well according to the latest statement, which came out today; more on that next week), Compuware is fueling a mainframe software renaissance on its own. Its latest two announcements bring Java-like unit testing to COBOL code via its Topaz product set and automated, intelligently optimized batch processing through its acquisition of MVS Solutions. Both modernize and simplify the processes around legacy mainframe coding, hence the reference to a mainframe software renaissance.


Let’s start with Compuware’s Topaz set of graphical tools. Since they are GUI-based, even novice developers can immediately validate and troubleshoot whatever changes, intended or inadvertent, they have made to existing COBOL applications. Compuware’s aim for Topaz for Total Test is to eliminate any notion that such applications are legacy code that cannot be updated as frequently and with the same confidence as other types of applications. Basically, mainframe DevOps.

By bringing fast, developer-friendly unit testing to COBOL applications, the new test tool also enables enterprises to deliver better customer experiences, since creating those experiences requires Agile/DevOps processes that encompass all platforms, from the mainframe to the cloud. As a result, z shops can gain increased digital agility along with higher quality, lower costs, and dramatically reduced dependency on the specialized knowledge of mainframe veterans aging out of the active IT workforce. In fact, the design of the Topaz tools enables z data centers to rapidly introduce the platform to novice mainframe staff, who become productive virtually from the start, another cost saver.
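For readers unfamiliar with the practice, the Java-like unit-testing pattern Topaz brings to COBOL looks conceptually like this, sketched here in Python. The routine and values are illustrative only; this is not Topaz’s actual interface:

```python
import unittest

def compute_interest(balance, annual_rate):
    """Stand-in for a COBOL business routine: simple annual interest."""
    return round(balance * annual_rate, 2)

class TestComputeInterest(unittest.TestCase):
    """Each test pins down one behavior, so any code change that breaks
    it is caught immediately -- the discipline Topaz applies to COBOL."""

    def test_typical_balance(self):
        self.assertEqual(compute_interest(1000.00, 0.05), 50.00)

    def test_zero_balance(self):
        self.assertEqual(compute_interest(0.00, 0.05), 0.00)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The point is the safety net: a novice can change the routine and rerun the suite in seconds, which is exactly the confidence the post describes COBOL shops lacking.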

Today, in 2017, does management still need to be reminded of the importance of the mainframe? Probably, even though many organizations, among them the world’s largest banks, insurance companies, retailers, and airlines, continue to run their business on mainframe applications, and recent surveys clearly indicate that situation is unlikely to change anytime soon. However, as Compuware points out, the ability of enterprises to quickly update those applications in response to ever-changing business imperatives is hampered daily by manual, antiquated development and testing processes; the ongoing loss of specialized COBOL programming knowledge; and the risk, and associated fear, of introducing even the slightest defect into core mainframe systems of record. The entire Topaz design approach, from the very first tool, has been to make mainframe code accessible to novices, and that has continued every quarter for the past two years.

This is not just a DancingDinosaur rant. IT analyst Rich Ptak from Ptak Associates also noted: “By eliminating a long-standing constraint to COBOL Compuware provides enterprise IT the ability to deliver more digital capabilities to the business at greater speed and with less risk.”

Gartner, in its latest Predicts 2017, chimes in with its DevOps equivalent of your mother’s reminder to brush your teeth after each meal: “Application leaders in IT organizations should adopt a continuous quality culture that includes practices to manage technical debt and automate tests focused on unit and API testing. It should also automate test lab operations to provide access to production-like environments, and enable testing of deployment through the use of DevOps pipeline tools.” OK, mom; everybody got the message.

The acquisition of MVS Solutions, Compuware’s fourth in the last year, adds to the company’s collection of mainframe software tools that promise agile, DevOps- and millennial-friendly management of the IBM z platform, a continuation of its efforts to make the mainframe accessible to novices. DancingDinosaur covered these acquisitions in early December here.

Batch processing accounts for the majority of peak mainframe workloads at large enterprises, providing essential back-end digital capabilities for customer-, employee- and partner-facing mobile, cloud, and web applications. As demands on these back-end mainframe batch processes intensify in terms of scale and performance, enterprises are under increasing pressure to ensure compliance with SLAs and control costs.

These challenges are exacerbated by the fact that responsibility for batch management is rapidly being shifted from platform veterans with decades of experience in mainframe operations to millennial ops staff who are unfamiliar with batch management. They also find native IBM z Systems management tools arcane and impractical, which increases the risk of critical batch operations being delayed or even failing. Run incorrectly, the batch workloads risk generating excessive peak utilization costs.

The solution, notes Compuware, lies in its new ThruPut Manager, which promises automatic, intelligently optimized batch processing. In the process it:

  • Provides immediate, intuitive insight into batch processing that even inexperienced operators can readily understand
  • Makes it easy to prioritize batch processing based on business policies and goals
  • Ensures proper batch execution by verifying that jobs have all the resources they need and proactively managing resource contention between jobs
  • Reduces an organization’s IBM Monthly License Charge (MLC) costs by minimizing rolling four-hour average (R4HA) processing peaks while avoiding counterproductive soft capping
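That last point matters because sub-capacity MLC pricing is driven by the peak rolling four-hour average of consumption, not by total usage, so flattening batch spikes directly lowers the bill. A toy illustration of how the R4HA peak is computed from hourly MSU readings (the utilization numbers are invented):

```python
# Toy R4HA sketch: the Monthly License Charge is driven by the peak
# rolling four-hour average of MSU consumption, not total usage.
# The hourly readings below are invented for illustration.
hourly_msus = [200, 220, 600, 620, 610, 230, 210, 205]  # one reading per hour

def rolling_four_hour_peak(readings):
    """Return the highest average over any four consecutive hourly readings."""
    windows = [readings[i:i + 4] for i in range(len(readings) - 3)]
    return max(sum(w) / 4 for w in windows)

peak = rolling_four_hour_peak(hourly_msus)
print(f"R4HA peak: {peak:.1f} MSUs")  # this peak sets the MLC bill
```

In this made-up trace the batch spike pushes the R4HA peak to 515 MSUs even though most hours run near 200, which is exactly the kind of peak ThruPut Manager aims to shave by rescheduling and smoothing batch jobs.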

Run in conjunction with Strobe, Compuware’s mainframe application performance management tool, ThruPut Manager also makes it easier to optimize batch workload and application performance as part of everyday mainframe DevOps tasks. ThruPut Manager promises greater efficiency and throughput, resulting in a shorter batch window and reduced processing capacity. These benefits also support better cross-platform DevOps, since distributed and cloud applications often depend on back-end mainframe batch processing.

Now, go out and hire some millennials to bring fresh blood into the mainframe shop. (Watch for DancingDinosaur’s upcoming post on why the mainframe is cool again.)

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.