Posts Tagged ‘Java’

Z Open Terminal Emulation

September 25, 2020

You can spend a lot of time working with the Z and not find much new in terminal emulation. But there actually are a few new things, mainly because times change and people work differently, using different devices and doing new things. Sure, it all goes back to the mainframe, but it is a new world.

Terminal emulator screen

Rocket Software’s latest wrinkle in terminal emulation is BlueZone Web, which promises to simplify using the mainframe by enabling users to access host-based applications anywhere and on any type of device. It is part of a broader initiative Rocket calls Open AppDev for Z. From DancingDinosaur’s perspective, its strength lies in its compliance with Zowe, an open source development framework from the Open Mainframe Project. This makes IBM Z a valuable open platform for an enterprise DevOps infrastructure.

Zowe is the first open source framework for z/OS. It enables DevOps teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. It was launched through a collaboration of initial contributors IBM, CA Technologies, and Rocket Software, and is supported by the Open Mainframe Project. The goal is to cultivate the next generation of mainframe developers, whether or not they have Z experience. Zowe promotes a faster team on-ramp to productivity, collaboration, knowledge sharing, and communication.

This is the critical thing about Zowe: you don’t need Z platform experience. Open source developers and programmers can use a wide range of popular open source tools, languages, and technologies–the tools they already know. Sure, it’d be nice to find an experienced z/OS developer, but that is increasingly unlikely, making Zowe a much better bet.

According to the Open Mainframe Project, IBM’s initial contribution to Zowe was an extensible z/OS framework that provides REST-based services and APIs that will allow even inexperienced developers to rapidly use new technology, tools, languages, and modern workflows with z/OS.
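To make the REST-based approach concrete, here is a minimal sketch of calling the z/OSMF REST files interface that Zowe builds on, using nothing but the Python standard library. The host name and credentials are placeholders, and the endpoint details should be verified against IBM’s z/OSMF documentation; in practice you would more likely use the Zowe CLI or SDKs than raw HTTP:

```python
# Sketch: list data sets through the z/OSMF REST files API (the kind of
# service Zowe exposes). Host and credentials below are placeholders.
import base64
import urllib.request

ZOSMF_HOST = "zosmf.example.com"  # placeholder host

def build_request(hlq: str, user: str, password: str) -> urllib.request.Request:
    """Build an authenticated GET for data sets matching the given HLQ."""
    url = f"https://{ZOSMF_HOST}/zosmf/restfiles/ds?dslevel={hlq}.*"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",
        "X-CSRF-ZOSMF-HEADER": "",   # z/OSMF requires this header on REST calls
    })

req = build_request("MYHLQ", "ibmuser", "secret")
print(req.full_url)
# the response (via urllib.request.urlopen(req)) would be JSON listing data sets
```

The point is simply that a developer with ordinary HTTP and JSON skills, and no 3270 experience, can work with z/OS this way.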

IBM continues to invest in the open source environment through Zowe and other open source initiatives. Zowe also has help from Rocket Software, which provides a web user interface, and CA, which handles the command line interface. You can find out more about Zowe from the Open Mainframe Project.

IBM introduced Linux, a leading open source technology, to the Z over 20 years ago. Over time it has expanded the range of the Z through open source tools that can be combined with products developed by different communities. This does create unintended regulatory and security risks. Rocket Open AppDev for Z helps mitigate these risks, offering a solution that provides developers with a package of the open tools and languages they want, along with the security, easy management, and support IBM Z customers require.

“We wanted to solve three common customer challenges that have prevented enterprises from leveraging the flexibility and agility of open software within their mainframe environment: user and system programmer experience, security, and version latency,” said Peter Fandel, Rocket’s Product Director of Open Software for Z. “With Rocket Open AppDev for Z, we believe we have provided an innovative, secure path forward for our customers,” he added. “Businesses can now extend the mainframe’s capabilities through the adoption of open source software, making IBM Z a valuable platform for their DevOps infrastructure.”

But there is an even bigger question here, one that Rocket turned to IDC to answer: should businesses that run mission-critical workloads on IBM Z or IBM i remain on these platforms and modernize them by leveraging the innovative tools that exist today, or replatform by moving to an alternative on-premises solution, typically x86, or to the cloud?

IDC surveyed more than 440 businesses that have either modernized their IBM Z or IBM i systems or replatformed. The results: modernizers incurred lower costs for their initiatives than replatformers; modernizers were more satisfied with the new capabilities of their modernized platform than replatformers; and modernizers achieved a new baseline at which they paid less for hardware, software, and staffing. There is much more of interest in this study, which DancingDinosaur will explore in the weeks and months ahead.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

5G Will Accelerate a New Wave of IoT Applications and Z

August 10, 2020

Even before the advent of 5G, DancingDinosaur, which had ghostwritten a top book on IoT, believed that IoT and smartphones would lead back to the Z eventually, somehow. Maybe the arrival of 5G and smart edge computing will slow the path to the Z. Or maybe not.

Even transactions and data originating and being processed at the edge will need to be secured, backed up, stored, distributed to the cloud, to other servers and systems, to multiple clouds, on premises, and further processed and reprocessed in numerous ways. Along the way, they will find their way back to a Z somehow and somewhere, sooner or later.

an edge architecture

“5G is driving change in the Internet of Things (IoT). It’s a powerful enabling technology for a new generation of use cases that will leverage edge computing to make IoT more effective and efficient,” write Rishi Vaish and Sky Matthews. Rishi Vaish is CTO and VP, IBM AI Applications; Sky Matthews is CTO, Engineering Lifecycle Management at IBM. DancingDinosaur completely agrees, adding only that it won’t just stop there.

Vaish and Matthews continue: “In many ways, the narrative of 5G is the interaction between two inexorable forces: the rise in highly reliable, high-bandwidth communications, and the rapid spread of available computing power throughout the network. The computing power doesn’t just end at the network, though. End-point devices that connect to the network are also getting smarter and more powerful.” 

True enough, the power does not just end there; neither does it start there. There is a long line of powerful systems, the z15 and the generations of Z before it, that handle and enhance everything that happens in whatever ways are desired at that moment or, as is often the case, later.

And yes, there will be numerous ways to create comparable services using similarly smart and flexible edge devices. But experience has shown that it takes time to work out the inevitable kinks that invariably will surface, often at the least expected and most inopportune moment. Think of it as just the latest manifestation of Murphy’s Law moved to the edge and 5G.

“The increasingly dynamic and powerful computational environment that’s taking shape as telcos begin to redesign their networks for 5G will accelerate the uptake of IoT applications and services throughout industry,” Vaish and Matthews continue. “We expect that 5G will enable new use cases in remote monitoring and visual inspection, autonomous operations in large-scale remote environments such as mines, connected vehicles, and more.”

This rapidly expanding range of computing options, they add, requires a much more flexible approach to building and deploying applications and AI models that can take advantage of the most cost-efficient compute resources available.

IBM chimes in: There are many ways that this combination of 5G and edge computing can enable new applications and new innovations in various industries. IBM and Verizon, for example, are developing potential 5G and edge solutions like remote-controlled robotics, near real-time video analysis, and other kinds of factory-floor automation.

The advantage comes from smart 5G edge devices doing the analytics immediately, at the spot where decisions may be best made. Are you sure that decisions made at the edge immediately are always the best? DancingDinosaur would like to see a little more data on that.

In that case, don’t be surprised to discover that there will be other decisions that benefit from being made later, with the addition of other data and analysis. There is too much added value and insight packed into the Z data center to not take advantage of it.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

Do Your ISVs Run Current Z Systems?

March 9, 2020

How many of your mainframe ISVs have the latest z15? Here is my mea culpa: DancingDinosaur has spoken with many mainframe ISVs repeatedly over the years–usually their PR people or SMEs anyway–and never specifically asked if they were running the latest z at the time. I’m embarrassed; it will now be the first question I ask when they come looking for publicity.


IBM z15

DancingDinosaur recently spoke with Matt Deres, the CIO of Rocket Software, a leading mainframe ISV, specifically because they had recently upgraded from a z14 to the latest z15. For the record, Rocket Software is a privately held ISV founded in 1990. Rocket develops products in such fields as analytics, networks, data, storage, and enterprise software. The firm’s products are designed to run on mainframes, Linux/Unix/Windows, IBM i, cloud, and hybrid/virtualized systems.

“The main reason we got it was to make sure we’re using the same platform our customers are using and buying,” explained Deres. In addition, they wanted to take advantage of the new features and functionality.  Their system is a z15 Model 717, running 9 IFLs.

The company is committed to keeping current with the technology. That means moving to a z16 in 24-30 months. “We usually lag a few months behind,” he admits. Even with that small lag, Rocket ensures its software remains state-of-the-art.

Currently the company’s 900 people are using the z15. These include developers, sales engineers, and consultants. And the transition from the z14 to the z15 proved seamless. People kept working without interruption.

Granted, small ISVs may have difficulty securing a fully configured z. There are, however, other ways to gain access to z functionality. Sure, they are not the same as having an actual z, even an older, modestly configured one. OK, not every ISV can be Rocket Software when it comes to staying current with the z, but there are other ways. 

For example, IBM Workload Simulator for z/OS and OS/390 can simulate a network of terminals and its associated messages. This solution for stress, performance, regression, function, and capacity planning testing eliminates the need for large amounts of terminal hardware and operator time by providing a powerful analysis with log list, log compare, and response time utilities.

Maybe the most popular is the IBM z Personal Development Tool (zPDT), which allows developers to emulate z/OS on their x86 desktops. Simulating z functionality on x86 this way may be one of the easiest ways to grow a mainframe ISV business, especially if budget is tight.

Finally, there is Hercules, an emulator. Hercules, as described by Wikipedia, allows software written for IBM mainframes (System/370, System/390, and zSeries/System z) and plug-compatible mainframes (such as Amdahl) to run on other types of computer hardware, notably on low-cost personal computers. Development was started in 1999 by Roger Bowler, a mainframe systems programmer.

Hercules runs under multiple parent operating systems, including GNU/Linux, Microsoft Windows, FreeBSD, Solaris, and Mac OS X, and is released under the open source QPL license. A vendor (or distributor) must still provide an operating system, and the user must install it. Hercules, reportedly, was the first mainframe emulator to incorporate 64-bit z/Architecture support.
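For a sense of how Hercules is set up, here is an illustrative fragment of a Hercules configuration file. The statement names follow the Hercules documentation, but the values and the device path are placeholders, not a working system:

```
# Illustrative Hercules configuration fragment (values are placeholders)
ARCHMODE  z/Arch        # emulate 64-bit z/Architecture
MAINSIZE  1024          # main storage, in megabytes
NUMCPU    2             # number of emulated CPUs
CNSLPORT  3270          # port for tn3270 console connections
# device statements: address, device type, backing file
0120  3390  /opt/hercules/dasd/sys001.120   # emulated DASD volume
```

The user would then connect a tn3270 client to the console port and IPL an operating system they are licensed to run.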

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Red Hat OpenShift Container Platform on z

February 20, 2020

IBM is finally starting to capitalize on last year’s $34 billion acquisition of Red Hat for z shops. If you had a new z and it ran Linux, you would have no problem running Red Hat products, or so the company line went. Well, in mid-February IBM announced that Red Hat’s OpenShift Container Platform is now available on the z and LinuxONE, a Z system dedicated to Linux and optimized for the underlying hardware.

OpenShift comes to z and LinuxONE

As the company puts it: the availability of OpenShift for z and LinuxONE is a major milestone for both hybrid multicloud and enterprise computing. OpenShift, a form of middleware for use with DevOps, supports cloud-native applications being built once and deployed anywhere, including to on-premises enterprise servers, especially the z and LinuxONE. This new release results from collaboration between the IBM and Red Hat development teams and discussions with early adopter clients.
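A minimal sketch of what “built once, deployed anywhere” looks like in practice: a standard Kubernetes Deployment manifest that pins its pods to IBM Z (s390x) worker nodes with an ordinary node selector. The names and image below are placeholders, not from the announcement:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      nodeSelector:
        kubernetes.io/arch: s390x   # schedule onto IBM Z / LinuxONE nodes
      containers:
      - name: sample-app
        image: registry.example.com/sample-app:latest  # multi-arch image
```

The same manifest, with the selector dropped or changed, deploys the same application to x86 or Power nodes, which is the portability point OpenShift is making.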

Working with its hybrid cloud portfolio, the company has created a roadmap for bringing the ecosystem of enterprise software to the OpenShift platform. IBM Cloud Paks containerize key IBM and open source software components to help enable faster enterprise application development and delivery. In addition to the availability of OpenShift for z, IBM also announced that IBM Cloud Pak for Applications is available for the z and LinuxONE. In effect, it supports the modernization of existing apps and the building of new cloud-native apps. In addition, as announced last August, the company intends to deliver additional Cloud Paks for the z and LinuxONE.

Red Hat is a leader in hybrid cloud and enterprise Kubernetes, with more than 1,000 customers already using Red Hat OpenShift Container Platform. With the availability of OpenShift for the z and LinuxONE, the agile cloud-native world of containers and Kubernetes, which has become the de facto open global standard for containers and orchestration, is now reinforced by the security features, scalability, and reliability of IBM’s enterprise servers.

“Containers are the next generation of software-defined compute that enterprises will leverage to accelerate their digital transformation initiatives,” says Gary Chen, Research Director at IDC, in a published report. “IDC estimates that 71% of organizations are in the process of implementing containers and orchestration or are already using them regularly. IDC forecasts that the worldwide container infrastructure software opportunity is growing at a 63.9% five-year CAGR and is predicted to reach over $1.5B by 2022.”

By combining the agility and portability of Red Hat OpenShift and IBM Cloud Paks with the security features, scalability, and reliability of z and LinuxONE, enterprises will have the tools to build new cloud-native applications while also modernizing existing applications. Deploying Red Hat OpenShift and IBM Cloud Paks on z and LinuxONE reinforces key strengths and offers additional benefits:

  • Vertical scalability enables existing large monolithic applications to be containerized, and horizontal scalability enables support for large numbers of containers in a single z or LinuxONE enterprise server
  • Protection of data from external attacks and insider threats, with pervasive encryption and tamper-responsive protection of encryption keys
  • Availability of 99.999% to meet service levels and customer expectations
  • Integration and co-location of cloud-native applications on the same system as the data, ensuring the fastest response times

IBM z/OS Cloud Broker helps enable OpenShift applications to interact with data and applications on IBM Z. IBM z/OS Cloud Broker is the first software product to provide access to z/OS services to the broader development community.

To more easily manage the resulting infrastructure organizations can license the IBM Cloud Infrastructure Center. This is an Infrastructure-as-a-Service offering which provides simplified infrastructure management in support of z/VM-based Linux virtual machines on the z and LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Montana Sidelines the Mainframe

January 21, 2020

Over the past 20+ years DancingDinosaur has written this story numerous times. It never ends exactly the way they think it will. Here is the one I encountered this past week.

IBM z15

But that doesn’t stop the PR writers from finding a cute way to write the story. This time the writers turned to references to the moon landings and trilby hats (huh?). Looks like a fedora to me, but what do I know; I only wear baseball hats. But they always have to come up with something that makes the mainframe sound completely outdated. In this case they wrote: Mainframe computers, a technology that harkens back to an era of moon landings and men in trilby hats, are still widely used throughout government, but not in Montana for much longer.

At least they didn’t write that the mainframe was dead and gone or forgotten. Usually, I follow up on stories like this months later and call whichever IT person is still there. I congratulate him or her and ask how it went. That’s when I usually start hearing ums and uhs. It turns out the mainframe is still there, handling those last few jobs they just can’t replace yet.

Depending on how playful I’m feeling that day, I ask him or her what happened to the justification presented at the start of the project. Or I might ask what happened to the previous IT person. 

Sometimes, I might even refer them to a recent DancingDinosaur piece that explains Linux on the mainframe or Java, or describes mainframes running the latest Docker container technology or microservices. I’m not doing this out of spite; I’m just trying to build up my readership. DancingDinosaur hates losing any reader, even late in their game. So I always follow up with a link to DancingDinosaur.

In an interview published by StateScoop, Chief Information Officer Tim Bottenfield described how, over the last several years, the last remaining agencies using the state’s mainframe have migrated their data away from it and are now developing modern applications that can be moved to the state’s private and highly virtualized cloud environment. By spring 2021, Montana expects to be mainframe-free. I will make a note to call Bottenfield in spring 2021 and see how they are doing. Does anyone want to bet on whether the mainframe actually is completely out of service and gone by then?

As you all know, mainframes can be expensive to maintain, particularly if it’s just to keep a handful of applications running, which usually turn out to be mission-critical applications. Of the three major applications Montana still runs on its mainframe, two are used by the Montana Department of Public Health and Human Services, which is in the process of recoding those programs to work on modern platforms, as if the z15 isn’t modern.

They haven’t told us whether these applications handle payments or deliver critical services to citizens. Either way it will not be pretty if such applications go down. The third is the state’s vehicle titling and registration system, which is being rebuilt to run out of the state’s data center. Again, we don’t know much about the criticality of these systems. But think how you might feel if you can’t get accurate or timely information from one of these systems. I can bet you wouldn’t be a happy camper; neither would I.

Systems like these are difficult to get right the first time, if at all. This is especially true if you will be using the latest hybrid cloud and services technologies. Yes, skilled mainframe people are hard to find and retain but so are any technically skilled and experienced people. If I were a decade younger, I could be attracted to the wide open spaces of Montana as a relief from the congestion of Boston. But I’m not the kind of hire Montana needs or wants. Stay tuned for when I check back in Spring 2021.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

Syncsort Drives IBMi Security with AI

May 2, 2019

The technology security landscape looks increasingly dangerous. The problem revolves around the possible impact of AI, which is not yet fully clear. The hope, of course, is that AI will make security more efficient and effective. However, bad actors can also jump on AI to advance their own schemes. Like a cyber version of the nuclear arms race, this has been an ongoing battle for decades. The industry has to cooperate and, specifically, share information and hope the good guys can stay a step ahead.

In the meantime, vendors like IBM and, most recently, Syncsort have been stepping up to the latest challenges. Syncsort, for example, earlier this month launched its Assure Security to address the increasing sophistication of cyber attacks and expanding data privacy regulations. In surprising ways, it turns out, data privacy and AI are closely related in the AI security battle.

Syncsort, a leader in Big Iron-to-Big Data software, announced Assure Security, which combines access control, data privacy, compliance monitoring, and risk assessment into a single product. Together, these capabilities help security officers, IBM i administrators, and Db2 administrators address critical security challenges and comply with new regulations meant to safeguard and protect the privacy of data.

And it clearly is coming at the right time. According to Privacy Rights Clearinghouse, a non-profit corporation with a mission to advocate for data privacy, there were 828 reported security incidents in 2018, resulting in the exposure of over 1.37 billion records of sensitive data. As regulations to help protect consumer and business data become stricter and more numerous, organizations must build more robust data governance and security programs to keep the data from being exploited by bad actors for nefarious purposes. The industry already has scrambled to comply with GDPR and the New York Department of Financial Services cybersecurity regulations, and it now must prepare for the GDPR-like California Consumer Privacy Act, which takes effect January 1, 2020.

In its own survey, Syncsort found security is the number one priority among IT pros with IBM i systems. “Given the increasing sophistication of cyber attacks, it’s not surprising 41 percent of respondents reported their company experienced a security breach and 20 percent more were unsure if they even had been breached,” said David Hodgson, CPO, Syncsort. The company’s new Assure Security product leverages the wealth of IBM i security technology and expertise to help organizations address their highest-priority challenges. This includes protecting against vulnerabilities introduced by new, open source methods of connecting to IBM i systems, adopting new cloud services, and complying with expanded government regulations.

Of course, IBM hasn’t been sleeping through this. The company continues to push various permutations of Watson to tackle the AI security challenge. For example, IBM leverages AI to gather insights and use reasoning to identify relationships between threats, such as malicious files, suspicious IP addresses, or even insiders. This analysis takes seconds or minutes, allowing security analysts to respond to threats up to 60 times faster.
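To make the idea of relating threats concrete, here is a toy sketch of indicator correlation: matching observed events against a feed of known-bad indicators. This is a generic illustration, not IBM’s or Watson’s actual implementation; all IPs and events are made up:

```python
# Toy indicator correlation: flag events whose source IP appears in a
# known-bad feed, grouping the matched actions by IP.
THREAT_FEED = {"203.0.113.9", "198.51.100.77"}   # placeholder known-bad IPs

events = [
    {"src_ip": "203.0.113.9", "action": "login_failed"},
    {"src_ip": "192.0.2.10",  "action": "file_read"},
    {"src_ip": "203.0.113.9", "action": "login_failed"},
]

def correlate(events, feed):
    """Group events by source IP and keep only those matching the feed."""
    flagged = {}
    for e in events:
        if e["src_ip"] in feed:
            flagged.setdefault(e["src_ip"], []).append(e["action"])
    return flagged

print(correlate(events, THREAT_FEED))
# {'203.0.113.9': ['login_failed', 'login_failed']}
```

The real systems do this at vastly larger scale and across many indicator types, but the underlying join between telemetry and threat intelligence is the same shape.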

It also relies on AI to eliminate time-consuming research tasks and provides curated analysis of risks, which reduces the amount of time security analysts require to make the critical decisions and launch an orchestrated response to counter each threat. The result, which IBM refers to as cognitive security, combines the strengths of artificial intelligence and human intelligence.

Cognitive AI, in effect, learns with each interaction to proactively detect and analyze threats, providing actionable insights that help security analysts make informed decisions. Such cognitive security, let’s hope, combines the strengths of artificial intelligence with human judgment.

Specifically, Syncsort’s Assure Security brings together best-in-class IBM i security capabilities acquired by Syncsort into an all-in-one solution, with the flexibility for customers to license individual modules. The resulting product includes:

  • Assure Compliance Monitoring quickly identifies security and compliance issues with real-time alerts and reports on IBM i system activity and database changes.
  • Assure Access Control provides control of access to IBM i systems and their data through a varied bundle of capabilities.
  • Assure Data Privacy protects IBM i data at rest and in motion from unauthorized access and theft through a combination of NIST-certified encryption, tokenization, masking, and secure file transfer capabilities.
  • Assure Security Risk Assessment examines over a dozen categories of security values, open ports, power users, and more to address vulnerabilities.
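To illustrate the kinds of techniques the data privacy module describes, here is a minimal sketch of masking and hash-based tokenization in generic form. This is not Syncsort’s implementation or API; it only shows the concepts:

```python
# Generic data-privacy sketch: masking hides most of a value while keeping
# its shape; tokenization replaces it with a stable, non-reversible stand-in.
import hashlib

def mask_account(acct: str, keep_last: int = 4) -> str:
    """Replace all but the last few characters with '*'."""
    return "*" * (len(acct) - keep_last) + acct[-keep_last:]

def tokenize(value: str, secret: str) -> str:
    """Derive a deterministic token from a sensitive value and a secret."""
    return hashlib.sha256((secret + value).encode()).hexdigest()[:16]

print(mask_account("4111111111111111"))   # ************1111
```

Production tokenization typically uses a vault or format-preserving encryption rather than a bare hash, but the goal is the same: downstream systems can match and join on the token without ever seeing the real value.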

It probably won’t surprise anyone, but the AI security situation is not going to be cleared up soon. Expect to see a steady stream of headlines around security hits and misses over the next few years. Just hope it will get easier to separate the good guys from the bad actors and that the lessons will be clear.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Joins with Harley-Davidson for LiveWire

March 1, 2019

I should have written this piece 40 years ago as a young man fresh out of grad school. Then I was dying for a 1200cc Harley Davidson motorcycle. My mother was dead set against it—she wouldn’t even allow me to play tackle football and has since been vindicated (You win on that, mom.). My father, too, was opposed and wouldn’t help pay for it. I had to settle for a puny little motor scooter that offered zero excitement.

In the decades since I graduated, Harley’s fortunes have plummeted as the HOG (Harley Owners Group) community aged out and few youngsters have picked up the slack. The 1200cc bike I once lusted after probably is now too heavy for me to handle. So, what is Harley to do? Redefine its classic American brand with an electric model, LiveWire.

Courtesy: Harley Davidson, IBM

With LiveWire, Harley expects to remake the motorcycle as a cloud-connected machine and promises to deliver new products for fresh motorcycle segments, broaden engagement with the brand, and strengthen the H-D dealer network. It also boldly proclaimed that Harley-Davidson will lead the electrification of motorcycling.

According to the company, Harley’s LiveWire will leverage H-D Connect, a service (available in select markets) built on the IBM Cloud. This will enable it to deliver new mobility and concierge services today and leverage an expanding use of IBM AI, analytics, and IoT to enhance and evolve the rider’s experience. In order to capture this next generation of bikers, Harley is working with IBM to transform the everyday experience of riding through the latest technologies and features IBM can deliver via the cloud.

Would DancingDinosaur, an aging Harley enthusiast, plunk down the thousands it would take to buy one of these? Since I rarely use my smartphone to do anything more than check email and news, I am probably not a likely prospect for LiveWire.

Will LiveWire save Harley? Maybe; it depends on what the promised services will actually deliver. Already, I can access a wide variety of services through my car but, other than Waze, I rarely use any of those.

According to the joint IBM-Harley announcement, a fully cellular-connected electric motorcycle needed a partner that could deliver mobility solutions to meet riders’ changing expectations, as well as enhance security. With IBM, Harley hopes to strike a balance between using data to create both intelligent and personal experiences and maintaining privacy and security, said Marc McAllister, Harley-Davidson VP Product Planning and Portfolio, in the announcement.

So, based on this description, are you ready to jump to LiveWire? You probably need more details. So far, IBM and Harley have identified only three:

  1. Powering The Ride: LiveWire riders will be able to check bike vitals at any time and from any location. Information available includes features such as range, battery health, and charge level. Motorcycle status features will also support the needs of the electric bike, such as the location of charging stations. Also, riders can see their motorcycle’s current map location. Identifying charging stations could be useful.
  2. Powering Security: An alert will be sent to the owner’s phone if the motorcycle has been bumped, tampered with, or moved. GPS-enabled stolen-vehicle assistance will provide peace of mind that the motorcycle’s location can be tracked. (Requires law enforcement assistance. Available in select markets.)
  3. Powering Convenience: Reminders about upcoming motorcycle service requirements and other care notifications will be provided. In addition, riders will receive automated service reminders as well as safety or recall notifications.

“The next generation of Harley-Davidson riders will demand a more engaged and personalized customer experience,” said Venkatesh Iyer, Vice President, North America IoT and Connected Solutions, Global Business Services, IBM. Introducing enhanced capabilities, he continues, via the IBM Cloud will not only enable new services immediately, but will also provide a roadmap for the journey ahead. (Huh?)

As much as DancingDinosaur aches for Harley to come roaring back with a story that will win the hearts of the HOG users who haven’t already drifted away, Harley will need more than the usual buzzwords, trivial apps, and cloud hype.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Meet SUSE Linux Enterprise Server 12

February 25, 2019

A surprising amount of competition has emerged lately for Linux on the mainframe, but SUSE continues to be at the top of the heap. The newest release, SUSE Linux Enterprise 12, which arrived last fall, should secure its position for some time to come.

SUSE touts SLE 12 as the latest version of its reliable, scalable and secure platform for efficiently deploying and managing highly available enterprise-class IT services in physical, virtual, or cloud environments. New products based on SLE 12 feature enhancements should allow for better system uptime, improved operational efficiency, and accelerated innovation. As the foundation for all SUSE data center operating systems and extensions, according to the company, SUSE Linux Enterprise meets the performance requirements of data centers with mixed IT environments while reducing the risk of technological obsolescence and vendor lock-in.

With SLE 12 the company also introduces an updated customer portal, SUSE Customer Center, to make it easier for customers to manage their subscriptions, access patches and updates, and communicate with SUSE customer support. It promises a new way to manage a SUSE account and subscriptions via one interface, anytime, anywhere.

Al Gillen, program vice president for servers and system software at IDC, said, “The industry is seeing growing movement of mission-critical workloads to Linux, with that trend expected to continue well into the future.” For Gillen, the modular design of SLE 12, as well as other mission-critical features like full system rollback and live kernel patching, helps address some of the key reservations customers express, and should help accelerate the adoption of Linux on z.

It’s about time. Linux has been available on the z for 20 years, but only with the introduction of IBM LinuxONE a couple of years ago did IBM get serious about Linux on z.

And it didn’t stop there. IBM ported the Go programming language to LinuxONE too. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the tools they already know with the speed, security, and scale offered by LinuxONE. As expected, IBM has contributed code to the Go community.

Then IBM brought Apple’s Swift programming to the party, first through the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, all of which are available today and can be integrated with just a few lines of code. After Apple introduced Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This was closely tied to Canonical’s Ubuntu port to the z, which has already been released.

With SUSE Linux Enterprise Server for x86_64, IBM Power Systems, and IBM Z, SLE 12 has boosted its versatility, able to deliver business-critical IT services in a variety of physical, virtual, and cloud environments. New features like full system rollback, live kernel patching, and software modules increase data center uptime, improve operational efficiency, and accelerate the adoption of open source innovation. SLE 12 further builds on SUSE’s leadership in Linux container technology and adds the Docker framework, now included as an integral part of the operating system.


Meet IBM Q System One

February 1, 2019

A couple of weeks ago, IBM slipped in a new quantum machine at CES. The new machine, dubbed IBM Q System One, is designed for both scientific and commercial computing. IBM described it as the first integrated universal approximate quantum computing system.

Courtesy of IBM

Approximate refers to the short coherence time of the qubits, explains Michael Houston, manager, Analyst Relations. Or, to put it another way: how long the qubits remain stable enough to run reliable and repeatable calculations. IBM Q systems report an industry-best average of 100 microseconds. That’s not enough time for a round of golf, but probably long enough to start running some serious quantum analytics.
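To put that 100-microsecond figure in rough perspective, here is a back-of-the-envelope sketch of how many gate operations fit inside one coherence window. The per-gate time below is an assumed illustrative value, not an IBM specification:

```python
# Rough estimate of how many quantum gate operations fit inside a
# coherence window, given the average coherence time IBM cites.
coherence_time_us = 100        # average qubit coherence, microseconds (IBM Q figure)
gate_time_ns = 200             # assumed per-gate time, nanoseconds (illustrative only)

gate_time_us = gate_time_ns / 1000
ops_per_window = coherence_time_us / gate_time_us
print(f"~{ops_per_window:.0f} gate operations per coherence window")  # ~500
```

In other words, even a few hundred microseconds of stability leaves room for a nontrivial sequence of operations, which is why coherence time is the headline metric.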

As described by IBM, the new machine family, the Q systems, is designed to one day tackle problems currently seen as too complex or too exponential in scale for classical (conventional) systems to handle. Such Q systems might use quantum computing to find new ways to model financial data, isolate key global risk factors to make better investments, find the optimal path across global systems for ultra-efficient logistics, or optimize fleet operations for improved deliveries.

The design of IBM Q System One includes a 9x9x9-foot cube case constructed of half-inch-thick borosilicate glass, forming a sealed, airtight enclosure that opens effortlessly using roto-translation, a motor-driven rotation around two displaced axes engineered to simplify the system’s maintenance and upgrade process while minimizing downtime. Overall, the entire system is intended to keep the qubits as stable as possible, allowing the machine to deliver reliable commercial use.

A series of independent aluminum and steel frames not only unify, but also decouple the system’s cryostat, control electronics, and exterior casing, helping to avoid potential vibration interference that leads to phase jitter and qubit decoherence.

The object of all of this, Houston explains, is to deliver a sophisticated, modular, and compact design optimized for stability, reliability, and continuous commercial use. For the first time ever, IBM Q System One enables universal approximate superconducting quantum computers to operate beyond the confines of the research lab.

In effect, think of the Q System One as bringing the quantum machine to the data center. Its design squeezes all the quantum computing electronics, controllers, and other components into a 9x9x9-foot cube of half-inch-thick glass, creating a sealed, airtight enclosure that cools the qubits to low kelvin temperatures and keeps them cold and undisturbed by interference long enough to perform meaningful work. All of Q System One’s components and control mechanisms are intended to keep the qubits at 10 mK (about -459.6F) to operate.
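As a quick sanity check of that operating point, 10 millikelvin converts to Fahrenheit as follows, landing a whisker above absolute zero:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature in kelvin to degrees Fahrenheit."""
    return k * 9 / 5 - 459.67

qubit_temp_k = 0.010  # 10 millikelvin
print(f"{kelvin_to_fahrenheit(qubit_temp_k):.2f} F")  # -459.65 F, essentially absolute zero
```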

This machine, notes IBM, should look familiar to conventional computer data center managers. Maybe, if you think a 9x9x9-foot, half-inch-thick borosilicate glass cube is a regular feature of any data center you have worked in.

In effect, IBM is applying the same approach to quantum computing that it has followed for decades with its conventional computers–providing everything you need to get it operating in your data center. Just plan to bring in some trained quantum technicians, specialists, and, don’t forget, a handful of people who can program such a machine.

Other than that, the IBM Q System One consists of a number of custom components that work together (remember, they said integrated). Specifically, the new machine includes:

  • Quantum hardware designed to be stable and auto-calibrated to give repeatable and predictable high-quality qubits;
  • Cryogenic engineering that delivers a continuous cold and isolated quantum environment;
  • High precision electronics in compact form factors to tightly control large numbers of qubits;
  • Quantum firmware to manage system health and enable system upgrades without downtime for users.

Are you up for it? Maybe you’d prefer to try before you buy. The IBM Q Quantum Computation Center, opening later this year in Poughkeepsie, extends the IBM Q Network to commercial quantum computing programs.


Factsheets for AI

December 21, 2018

Depending on when you check in on the IBM website, the primary technology trend for 2019 is quantum computing, or hybrid clouds, or blockchain, or artificial intelligence, or any of a handful of others. Maybe IBM does have enough talented people, resources, and time to do it all well now. But somehow DancingDinosaur is dubious.

There is an old tech industry saying: you can have it right, fast, or cheap; pick two. When it comes to AI, depending on your choices and patience, you could win an attractive share of the projected $83 billion AI industry by 2021, or a share of the estimated $200 billion AI market by 2025, according to VentureBeat.

IBM sees the technology industry at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens. And Google self-driving car spinoff Waymo is tapping it to provide mobility to elderly and disabled people.

But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM believes part of the problem is a lack of standard practices.

As a result, there’s no consistent, agreed-upon way AI services should be created, tested, trained, deployed, and evaluated, observes Aleksandra Mojsilovic, head of AI foundations at IBM Research and co-director of the AI Science for Social Good program. To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets, more formally called a Supplier’s Declaration of Conformity (DoC). The goal: increasing the transparency of particular AI services and engendering trust in them.

Such factsheets alone could give AI offerings a competitive advantage in the marketplace. Factsheets could also provide explainability around susceptibility to adversarial attacks, an issue that, along with fairness and robustness, must be addressed for AI services to be trusted, Mojsilovic continued. Factsheets take away the black-box perception of AI and render the AI system understandable by both researchers and developers.

Several core pillars form the basis for trust in AI systems. The first three are fairness, robustness, and explainability; later in her piece, Mojsilovic introduces a fourth pillar, lineage, which concerns an AI system’s history. Factsheets would answer questions ranging from system operation and training data to underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining. More granular topics might include the governance strategies used to track the AI service’s data workflow, the methodologies used in testing, and the bias mitigations performed on the dataset.
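A factsheet of the kind Mojsilovic describes could be sketched as a simple structured record plus a completeness check. Every field name and value below is hypothetical, loosely following the topics she lists, not a published DoC schema:

```python
# A hypothetical AI service factsheet (Supplier's Declaration of Conformity).
# All field names and values are illustrative, not a standardized schema.
ai_factsheet = {
    "service_name": "example-credit-scoring",            # hypothetical service
    "intended_use": "pre-screening loan applications",
    "training_data": {"source": "internal-2018", "records": 250_000},
    "algorithms": ["gradient boosted trees"],
    "performance": {"auc": 0.87, "test_setup": "5-fold cross-validation"},
    "fairness_checks": {"disparate_impact_ratio": 0.92},
    "robustness_checks": {"adversarial_eval": "passed"},
    "maintenance": {"retraining_interval_days": 90},
}

def missing_fields(factsheet, required):
    """Return the required factsheet sections that are absent or empty."""
    return [field for field in required if not factsheet.get(field)]

required = ["intended_use", "training_data", "fairness_checks", "robustness_checks"]
print(missing_fields(ai_factsheet, required))  # [] -> the declaration is complete
```

The point of the check is the voluntary-conformity angle: a consumer of the AI service can verify mechanically that a supplier declared every section before trusting the factsheet.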

For natural language processing algorithms specifically, the researchers propose data statements that would show how an algorithm might be generalized, how it might be deployed, and what biases it might contain.

Natural language processing systems aren’t as fraught with controversy as, say, facial recognition, but they’ve come under fire for their susceptibility to bias. IBM, Microsoft, Accenture, Facebook, and others are actively working on automated tools that detect and minimize bias, and companies like Speechmatics and Nuance have developed solutions specifically aimed at minimizing the so-called accent gap, the tendency of voice recognition models to skew toward speakers from certain regions. But in Mojsilovic’s view, documents detailing the ins and outs of systems (factsheets) would go a long way to restoring the public’s faith in AI.

Fairness, safety, reliability, explainability, robustness, accountability: all agree these are critical. Yet to achieve trust in AI, progress on these issues alone will not be enough; it must be accompanied by the ability to measure and communicate a system’s performance on each of these dimensions, she wrote. Understanding and evaluating AI systems is an issue of utmost importance for the AI community, one IBM believes industry, academia, and AI practitioners should work on together.

