Posts Tagged ‘hybrid computing’

Pushing Quantum Onto the Cloud

September 4, 2020

Did you ever imagine the cloud would become your quantum computing platform, a place where you would run complex quantum algorithms requiring significant specialized processing across multi-qubit machines available at a click? But that is exactly what is happening.

IBM started it a few years back by making its small-qubit machines available in the cloud, and now offers even larger ones. Today Xanadu is offering 8-qubit and 12-qubit chips, with a 24-qubit chip expected in the next month or so, according to the Toronto-based company.

Xanadu quantum processor

As DancingDinosaur has previously reported, there are even more: Google reports a quantum computer lab with five machines and Honeywell has six quantum machines. D-Wave is another along with more startups, including nQ, Quantum Circuits, and Rigetti Computing.

In September, Xanadu introduced its quantum cloud platform, which allows developers to access its gate-based photonic quantum processors with 8-qubit or 12-qubit chips over the cloud.

Photonics-based quantum machines have certain advantages over other platforms, according to the company. Xanadu’s quantum processors operate at room temperature, not low Kelvin temperatures. They can easily integrate into an existing fiber optic-based telecommunication infrastructure, enabling quantum computers to be networked. It also offers scalability and fault tolerance, owing to error-resistant physical qubits and flexibility in designing error correction codes. Xanadu’s type of qubit is based on squeezed states – a special type of light generated by its own chip-integrated silicon photonic devices, it claims.

DancingDinosaur recommends you check out Xanadu’s documentation and details. It does not have sufficient familiarity with photonics, especially as related to quantum computing, to judge any of the above statements. The company also notes it offers a cross-platform Python library for simulating and executing programs on quantum photonic hardware. Its open source tools are available on GitHub.
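The Python library it refers to is presumably Strawberry Fields, Xanadu's open source framework for photonic quantum computing. Below is a minimal sketch of what a photonic program looks like, assuming the Strawberry Fields API available at the time of writing; the specific gates and parameters are this blog's illustrative choices, not anything from Xanadu's documentation.

```python
# A minimal photonic program sketch using Strawberry Fields (Xanadu's
# open source Python library). Gates and parameters are illustrative only.
import strawberryfields as sf
from strawberryfields import ops

prog = sf.Program(2)                       # a two-mode photonic program
with prog.context as q:
    ops.Sgate(0.5) | q[0]                  # squeeze the light in mode 0 (squeezed-state qubit)
    ops.BSgate(0.4, 0.0) | (q[0], q[1])    # a beamsplitter couples the two modes
    ops.MeasureFock() | q                  # photon-number measurement on both modes

eng = sf.Engine("fock", backend_options={"cutoff_dim": 5})  # local simulator backend
result = eng.run(prog)
print(result.samples)                      # photon counts, e.g. [[1, 0]]
```

Per Xanadu, pointing the same program at its cloud hardware amounts to swapping the local simulator engine for a remote one tied to an account API key; DancingDinosaur has not tried it.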

Late in August, IBM unveiled a new milestone on its quantum computing road map, achieving the company's highest Quantum Volume to date. By following the link, you see that Quantum Volume is a metric conceived by IBM to measure and compare quantum computing power. DancingDinosaur is not aware of any other quantum computing vendors using it, which doesn't mean anything, of course. Quantum computing is so new and so different, and with many players joining in with different approaches, it will be years before we see which metrics prove most useful.

To achieve its latest Quantum Volume rating, IBM combined a series of new software and hardware techniques to improve overall performance and upgraded one of its newest 27-qubit systems. The company has made a total of 28 quantum computers available over the last four years through the IBM Quantum Experience, which companies join to gain access to its quantum machines and tools, including its software development toolkit.

Do not confuse Quantum Volume with Quantum Advantage, the point where certain information processing tasks can be performed more efficiently or cost effectively on a quantum computer versus a conventional one. Quantum Advantage will require improved quantum circuits, the building blocks of quantum applications. Quantum Volume, notes IBM, measures the length and complexity of circuits – the higher the Quantum Volume, the higher the potential for exploring solutions to real world problems across industry, government, and research.

To achieve its Quantum Volume milestone, the company focused on a new set of techniques and improvements that used knowledge of the hardware to optimally run the Quantum Volume circuits. These hardware-aware methods are extensible and will improve any quantum circuit run on any IBM Quantum system, resulting in improvements to the experiments and applications users can explore. These techniques will be available in upcoming releases and improvements to the IBM Cloud software services and the cross-platform open source software development kit (SDK) Qiskit. The IBM Quantum team has shared details on the technical improvements made across the full stack to reach Quantum Volume 64 in a preprint released on arXiv.
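For readers wondering what a quantum circuit, the thing Quantum Volume measures, even looks like in practice, here is a minimal Qiskit sketch. It assumes the Qiskit API as of this writing, builds a two-qubit entangling circuit, and runs it on the bundled simulator; it is not IBM's Quantum Volume benchmark, just the kind of small circuit such benchmarks randomize and layer at greater widths and depths.

```python
# A minimal Qiskit sketch: a two-qubit Bell-state circuit run on the local
# simulator. Illustrative only -- not the Quantum Volume benchmark itself.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits into classical bits

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # roughly half '00' and half '11'
```

Running the same circuit on real IBM hardware is a matter of signing up for the IBM Quantum Experience and selecting one of its backends instead of the simulator.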

What is most exciting is that the latest quantum happenings are things you can access over the cloud without having to cool your data center to near-zero Kelvin temperatures. If you try any of these, DancingDinosaur would love to hear how it goes.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

5G Will Accelerate a New Wave of IoT Applications and Z

August 10, 2020

Even before the advent of 5G, DancingDinosaur, which had ghostwritten a top book on IoT, believed that IoT and smartphones would eventually lead back to the Z, somehow. Maybe the arrival of 5G and smart edge computing might slow the path to the Z. Or maybe not.

Even transactions and data originating and being processed at the edge will need to be secured, backed up, stored, distributed to the cloud, to other servers and systems, to multiple clouds, on premises, and further processed and reprocessed in numerous ways. Along the way, they will find their way back to a Z somehow and somewhere, sooner or later.

an edge architecture

“5G is driving change in the Internet of Things (IoT). It’s a powerful enabling technology for a new generation of use cases that will leverage edge computing to make IoT more effective and efficient,” write Rishi Vaish and Sky Matthews. Vaish is CTO and VP, IBM AI Applications; Matthews is CTO, Engineering Lifecycle Management at IBM. DancingDinosaur completely agrees, adding only that it won’t just stop there.

Vaish and Matthews continue: “In many ways, the narrative of 5G is the interaction between two inexorable forces: the rise in highly reliable, high-bandwidth communications, and the rapid spread of available computing power throughout the network. The computing power doesn’t just end at the network, though. End-point devices that connect to the network are also getting smarter and more powerful.” 

True enough, the power does not just end there; neither does it start there. There is a long line of powerful systems, the z15 and the generations of Z before it, that handle and enhance everything that happens, in whatever ways are desired at that moment or, as is often the case, later.

And yes, there will be numerous ways to create comparable services using similarly smart and flexible edge devices. But experience has shown that it takes time to work out the inevitable kinks that invariably will surface, often at the least expected and most inopportune moment. Think of it as just the latest manifestation of Murphy’s Law moved to the edge and 5G.

“The increasingly dynamic and powerful computational environment that’s taking shape as telcos begin to redesign their networks for 5G will accelerate the uptake of IoT applications and services throughout industry,” Vaish and Matthews continue. “We expect that 5G will enable new use cases in remote monitoring and visual inspection, autonomous operations in large-scale remote environments such as mines, connected vehicles, and more.”

This rapidly expanding range of computing options, they add,  requires a much more flexible approach to building and deploying applications and AI models that can take advantage of the most cost-efficient compute resources available.

IBM chimes in: There are many ways that this combination of 5G and edge computing can enable new applications and new innovations in various industries. IBM and Verizon, for example, are developing potential 5G and edge solutions like remote-controlled robotics, near real-time video analysis, and other kinds of factory-floor automation.

The advantage comes from smart 5G edge devices doing the analytics immediately, at the spot where decisions may be best made. Are you sure that decisions made at the edge immediately are always the best? DancingDinosaur would like to see a little more data on that.

In that case, don’t be surprised to discover that there will be other decisions that benefit from being made later, with the addition of other data and analysis. There is too much added value and insight packed into the Z data center to not take advantage of it.

Alan Radding, a veteran information technology analyst, writer, and ghost-writer, is DancingDinosaur. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/.

Red Hat OpenShift Container Platform on z

February 20, 2020

IBM is finally starting to capitalize on last year’s $34 billion acquisition of Red Hat for z shops. If you had a new z and it ran Linux, you would have no problem running Red Hat products, or so the company line went. Well, in mid-February IBM announced that Red Hat’s OpenShift Container Platform is now available on the z and LinuxONE, a z with built-in Linux optimized for the underlying z hardware.

OpenShift comes to z and LinuxONE

As the company puts it: The availability of OpenShift for z and LinuxONE is a major milestone for both hybrid multicloud and enterprise computing. OpenShift, a form of middleware for use with DevOps, supports cloud-native applications being built once and deployed anywhere, including to on-premises enterprise servers, especially the z and LinuxONE. This new release results from the collaboration between the IBM and Red Hat development teams, and from discussions with early adopter clients.

As part of its hybrid cloud push, the company has created a roadmap for bringing the ecosystem of enterprise software to the OpenShift platform. IBM Cloud Paks containerize key IBM and open source software components to help enable faster enterprise application development and delivery. In addition to the availability of OpenShift for z, IBM also announced that IBM Cloud Pak for Applications is available for the z and LinuxONE; in effect, it supports the modernization of existing apps and the building of new cloud-native apps. In addition, as announced last August, it is the company’s intention to deliver additional Cloud Paks for the z and LinuxONE.

Red Hat is a leader in hybrid cloud and enterprise Kubernetes, with more than 1,000 customers already using Red Hat OpenShift Container Platform. With the availability of OpenShift for the z and LinuxONE, the agile cloud-native world of containers and Kubernetes, which has become the de facto open global standard for containers and orchestration, is now reinforced by the security features, scalability, and reliability of IBM’s enterprise servers.

“Containers are the next generation of software-defined compute that enterprises will leverage to accelerate their digital transformation initiatives,” says Gary Chen, Research Director at IDC, in a published report.  “IDC estimates that 71% of organizations are in the process of implementing containers and orchestration or are already using them regularly. IDC forecasts that the worldwide container infrastructure software opportunity is growing at a 63.9 % 5-year CAGR and is predicted to reach over $1.5B by 2022.”

By combining the agility and portability of Red Hat OpenShift and IBM Cloud Paks with the security features, scalability, and reliability of z and LinuxONE, enterprises will have the tools to build new cloud-native applications while also modernizing existing applications. Deploying Red Hat OpenShift and IBM Cloud Paks on z and LinuxONE reinforces key strengths and offers additional benefits:

  • Vertical scalability enables existing large monolithic applications to be containerized, and horizontal scalability enables support for large numbers of containers in a single z or LinuxONE enterprise server
  • Protection of data from external attacks and insider threats, with pervasive encryption and tamper-responsive protection of encryption keys
  • Availability of 99.999%  to meet service levels and customer expectations
  • Integration and co-location of cloud-native applications on the same system as the data, ensuring the fastest response times

IBM z/OS Cloud Broker helps enable OpenShift applications to interact with data and applications on IBM Z. IBM z/OS Cloud Broker is the first software product to provide access to z/OS services by the broader development community.

To more easily manage the resulting infrastructure organizations can license the IBM Cloud Infrastructure Center. This is an Infrastructure-as-a-Service offering which provides simplified infrastructure management in support of z/VM-based Linux virtual machines on the z and LinuxONE.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

IBM Introduces New Flash Storage Family

February 14, 2020

IBM describes this mainly as a simplification move. The company is eliminating two current storage lines, Storwize and FlashSystem A9000, and replacing them with a series of flash storage systems that will scale from entry level to enterprise.

Well, uh, not quite enterprise as DancingDinosaur readers might think of it. No changes are planned for the DS8000 storage systems, which are focused on the mainframe market. “All our existing product lines, not including our mainframe storage, will be replaced by the new FlashSystem family,” said Eric Herzog, IBM’s chief marketing officer and vice president of worldwide storage channel, in a published report earlier this week.

The move will retire two incompatible storage lines from the IBM product lineup and replace them with a line that provides compatible storage software and services from entry level to the highest enterprise tier, mainframe excluded, Herzog explained. The new FlashSystem family promises more functions, more features, and lower prices, he continued.

Central to the new Flash Storage Family is NVMe, which comes in multiple flavors.  NVM Express (NVMe) or Non-Volatile Memory Host Controller Interface Specification (NVMHCIS) is an open logical device interface specification for accessing non-volatile storage media attached via a PCI Express (PCIe) bus.

At the top of the new family line is the NVMe and multicloud ultra-high throughput storage system. This is a validated system with IBM implementation. IBM promises unmatched NVMe performance, SCM, and  IBM FlashCore technology. In addition it brings the features of IBM Spectrum Virtualize to support the most demanding workloads.

IBM multi-cloud flash storage family system

Next up are the IBM FlashSystem 9200 and IBM FlashSystem 9200R, IBM-tested and validated rack solutions designed for the most demanding environments. They combine the extreme performance of end-to-end NVMe, IBM FlashCore technology, and the ultra-low latency of Storage Class Memory (SCM) with IBM Spectrum Virtualize and AI-driven predictive storage management and proactive support from Storage Insights. FlashSystem 9200R is delivered assembled, with installation and configuration completed by IBM to ensure a working multicloud solution.

In the middle of the family are the IBM FlashSystem 7200 and FlashSystem 7200H. As IBM puts it, these offer end-to-end NVMe, the innovation of IBM FlashCore technology, the ultra-low latency of Storage Class Memory (SCM), the flexibility of IBM Spectrum Virtualize, and the AI-driven predictive storage management and proactive support of Storage Insights, all in a powerful 2U all-flash or hybrid flash array. The IBM FlashSystem 7200 brings mid-range storage while allowing the organization to add the multicloud technology that best supports the business.

At the bottom of the line is the NVMe entry enterprise all-flash storage solution, which brings end-to-end NVMe capabilities and flash performance to the affordable FlashSystem 5100. As IBM describes them, the FlashSystem 5010 and IBM FlashSystem 5030 (formerly known as the IBM Storwize V5010E and Storwize V5030E; they are still there, just renamed) are all-flash or hybrid flash solutions intended to provide enterprise-grade functionality without compromising affordability or performance. Built with the flexibility of IBM Spectrum Virtualize and the AI-powered predictive storage management and proactive support of Storage Insights, the IBM FlashSystem 5000 helps make modern technologies such as artificial intelligence accessible to enterprises of all sizes. In short, these promise entry-level flash storage solutions designed to provide enterprise-grade functionality without compromising affordability or performance.

IBM likes the words affordable and affordability in discussing this new storage family. But, as is typical with IBM, nowhere will you see a price or a reference to cost/TB or cost/IOPS or cost of anything although these are crucial metrics for evaluating any flash storage system. DancingDinosaur expects this after 20 years of writing about the z. Also, as I wrote at the outset, the z is not even included in this new flash storage family so we don’t even have to chuckle if they describe z storage as affordable.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/

Meet IBM’s New CEO

February 6, 2020

Have to admire Ginni Rometty. She survived 19 consecutive losing quarters (one quarter shy of 5 years), which DancingDinosaur and the rest of the world covered with monotonous regularity, and she was not bounced out until this January. Memo to readers: keep that in mind if you start feeling performance heat from top management. Can’t imagine another company that would tolerate it, but what do I know.

Arvind Krishna becomes the Chief Executive Officer and a member of the IBM Board of Directors effective April 6, 2020. Krishna is currently IBM Senior Vice President for Cloud and Cognitive Software, and was a principal architect of the company’s acquisition of Red Hat. The cloud/Red Hat strategy has only just started to show signs of payback.

As IBM writes: Under Rometty’s leadership, IBM acquired 65 companies, built out key capabilities in hybrid cloud, security, industry and data, and AI both organically and inorganically, and successfully completed one of the largest technology acquisitions in history (Red Hat).  She reinvented more than 50% of IBM’s portfolio, built a $21 billion hybrid cloud business and established IBM’s leadership in AI, quantum computing, and blockchain, while divesting nearly $9 billion in annual revenue to focus the portfolio on IBM’s high value, integrated offerings. Part of that was the approximately $34 billion Red Hat acquisition, IBM’s, and possibly the IT industry’s, biggest to date. Rometty isn’t going away all that soon; she continues in some executive Board position.

It is way too early to get IBM 1Q2020 results, which will cover the last quarter of Rometty’s reign. The fourth quarter of 2019, at least, was positive, especially after all those quarters of revenue loss. The company reported $21.8 billion in revenue, up 0.1 percent. Red Hat revenue was up 24 percent. Cloud and cognitive systems were up 9 percent, while systems, which includes the z, was up 16 percent.

Total cloud revenue, the new CEO Arvind Krishna’s baby, was up 21 percent. Even with z revenue up more than cloud and cognitive systems, it is unlikely IBM will easily find a buyer for the z anytime soon. If IBM dumps it, the company will probably have to pay somebody to take it, despite the z’s faithful, profitable blue-chip customer base.

Although the losing streak has come to an end, Krishna still faces some serious challenges. For example, although DancingDinosaur has been enthusiastically cheerleading quantum computing as the future, there is no proven business model there. Beyond limited adoption by a few early adopters, there is no widespread groundswell of demand for quantum computing, and the technology has not yet proven itself useful. Also, there is no ready pool of skilled quantum talent. If you wanted to try quantum computing, would you even know what to try or where to find skilled people?

Even in the area of cloud computing, where IBM finally is starting to show some progress, the company has yet to penetrate the top tier of players. Those players, Amazon, Google, and Microsoft (Azure), are not likely to concede market share.

So here is DancingDinosaur’s advice to Krishna: Be prepared to scrap for every point of cloud share and be prepared to spin a compelling case around quantum computing. Finally, don’t give up the z until the accountants and lawyers force you, which they will undoubtedly insist on. To the contrary, slash z prices and make it an irresistible bargain.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

z15 LinuxONE III for Hybrid Cloud

October 8, 2019

It didn’t take long following the introduction of the z15 for a LinuxONE to arrive. Meet the LinuxONE III, a z15 machine with dedicated built-in Linux. And it comes with the primary goodies the z15 offers: automatic pervasive encryption of everything along with a closely related privacy capability, Data Passports.

3-frame LinuxONE III

Z-quality security, privacy, and availability, it turns out, have become central to the mission of the LinuxONE III. The reason is simple: cloud. According to IBM, only 20% of workloads have been moved to the cloud. Why? Companies need assurance that their data privacy and security will not be breached. To many IT pros and business executives, the cloud remains the wild, wild west, where bad guys roam looking to steal whatever they can.

IBM is touting the LinuxONE III, which is built on its newly introduced z15, for hybrid clouds. The company has been preaching the gospel of clouds and, particularly, hybrid clouds for several years, which was its primary reason for acquiring Red Hat. Red Hat Linux is built into the LinuxONE III, probably its first formal appearance since IBM closed its acquisition of Red Hat this spring. 

With Red Hat and the z15, IBM is aiming to cash in on what it sees as a big opportunity in hybrid clouds. While the cloud brings the promise of flexibility, agility, and openness, IBM again notes that only 20% of workloads have moved there, and for the same reason: companies need assurance that their data privacy and security will not be breached. LinuxONE III also promises cloud-native development.

By integrating the new IBM LinuxONE III as a key element of its hybrid cloud strategy, an organization adds another level of security, stability, and availability to its cloud infrastructure. It gets both agile deployment and unbeatable levels of uptime, reliability, and security. While the cloud already offers appealing flexibility and costs, those last three capabilities, uptime, reliability, and security, are not usually associated with cloud computing. By security, IBM means 100% data encryption, automatically, from the moment the data arrives or is created. And it remains encrypted for the rest of its life, at rest or in transit.

Are those capabilities important? You bet. A Harris study commissioned by IBM found that 64 percent of all consumers have opted not to work with a business out of concerns over whether that business could keep their data secure. However, that same study found 76 percent of respondents would be more willing to share personal information if there was a way to fully take back and retrieve that data at any time. Thus the importance of the z15’s pervasive encryption and the new data passports.

IBM has previously brought out its latest z running dedicated Linux. Initially it was a way to expand the z market through a reduced cost z.  DancingDinosaur doesn’t know the cost of the LinuxONE III. In the past they have been discounted but given the $34 billion IBM spent to acquire Red Hat the new machines might not be such a bargain this time.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

Syncsort Drives IBMi Security with AI

May 2, 2019

The technology security landscape looks increasingly dangerous. The problem revolves around the possible impact of AI, which is not yet fully clear. The hope, of course, is that AI will make security more efficient and effective. However, the security bad actors can also jump on AI to advance their own schemes. Like a cyber version of the nuclear arms race, this has been an ongoing battle for decades. The industry has to cooperate and, specifically, share information and hope the good guys can stay a step ahead.

In the meantime, vendors like IBM and, most recently, Syncsort have been stepping up to the latest challenges. Syncsort, for example, earlier this month launched its Assure Security to address the increasing sophistication of cyber attacks and expanding data privacy regulations. In surprising ways, it turns out, data privacy and AI are closely related in the AI security battle.

Syncsort, a leader in Big Iron-to-Big Data software, announced Assure Security, which combines access control, data privacy, compliance monitoring, and risk assessment into a single product. Together, these capabilities help security officers, IBMi administrators, and Db2 administrators address critical security challenges and comply with new regulations meant to safeguard and protect the privacy of data.

And it clearly is coming at the right time. According to Privacy Rights Clearinghouse, a non-profit corporation with a mission to advocate for data privacy, there were 828 reported security incidents in 2018, resulting in the exposure of over 1.37 billion records of sensitive data. As regulations to help protect consumer and business data become stricter and more numerous, organizations must build more robust data governance and security programs to keep the data from being exploited by bad security actors for nefarious purposes. The industry already has scrambled to comply with GDPR and the New York Department of Financial Services cybersecurity regulations, and it now must prepare for the GDPR-like California Consumer Privacy Act, which takes effect January 1, 2020.

In its own survey Syncsort found security is the number one priority among IT pros with IBMi systems. “Given the increasing sophistication of cyber attacks, it’s not surprising 41 percent of respondents reported their company experienced a security breach and 20 percent more were unsure if they even had been breached,” said David Hodgson, CPO, Syncsort. The company’s new Assure Security product leverages the wealth of IBMi security technology and the expertise to help organizations address their highest-priority challenges. This includes protecting against vulnerabilities introduced by new, open-source methods of connecting to IBMi systems, adopting new cloud services, and complying with expanded government regulations.

Of course, IBM hasn’t been sleeping through this. The company continues to push various permutations of Watson to tackle the AI security challenge. For example, IBM leverages AI to gather insights and use reasoning to identify relationships between threats, such as malicious files, suspicious IP addresses,  or even insiders. This analysis takes seconds or minutes, allowing security analysts to respond to threats up to 60 times faster.

It also relies on AI to eliminate time-consuming research tasks and provides curated analysis of risks, which reduces the amount of time security analysts require to make the critical decisions and launch an orchestrated response to counter each threat. The result, which IBM refers to as cognitive security, combines the strengths of artificial intelligence and human intelligence.

Cognitive AI, in effect, learns with each interaction to proactively detect and analyze threats and provides actionable insights to security analysts making informed decisions. Such cognitive security, let’s hope, combines the strengths of artificial intelligence with human judgment.

Syncsort’s Assure Security specifically brings together best-in-class IBMi security capabilities acquired by Syncsort into an all-in-one solution, with the flexibility for customers to license individual modules. The resulting product includes:

  • Assure  Compliance Monitoring quickly identifies security and compliance issues with real-time alerts and reports on IBMi system activity and database changes.
  • Assure Access Control provides control of access to IBMi systems and their data through a varied bundle of capabilities.
  • Assure Data Privacy protects IBMi data at-rest and in-motion from unauthorized access and theft through a combination of NIST-certified encryption, tokenization, masking, and secure file transfer capabilities.
  • Assure Security Risk Assessment examines over a dozen categories of security values, open ports, power users, and more to address vulnerabilities.

It probably won’t surprise anyone, but the AI security situation is not going to be cleared up soon. Expect to see a steady stream of headlines around security hits and misses over the next few years. Just hope it will get easier to separate the good guys from the bad actors and that the lessons will be clear.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Meet SUSE Enterprise Linux Server 12

February 25, 2019

A surprising amount of competition has emerged lately for Linux on the mainframe, but SUSE continues to be at the top of the heap. The newest release, SUSE Linux Enterprise 12, introduced last fall, should secure its position for some time to come.

SUSE touts SLE 12 as the latest version of its reliable, scalable, and secure platform for efficiently deploying and managing highly available enterprise-class IT services in physical, virtual, or cloud environments. New products based on SLE 12 feature enhancements that should allow for better system uptime, improved operational efficiency, and accelerated innovation. As the foundation for all SUSE data center operating systems and extensions, according to the company, SUSE Linux Enterprise meets the performance requirements of data centers with mixed IT environments while reducing the risk of technological obsolescence and vendor lock-in.

With SLE 12 the company also introduces an updated customer portal, SUSE Customer Center, to make it easier for customers to manage their subscriptions, access patches and updates, and communicate with SUSE customer support. It promises a new way to manage a SUSE account and subscriptions via one interface, anytime, anywhere.

Al Gillen, program vice president for servers and system software at IDC, said, “The industry is seeing growing movement of mission-critical workloads to Linux, with that trend expected to continue well into the future.” For Gillen, the modular design of SLE 12, as well as other mission-critical features like full system rollback and live kernel patching, helps address some of the key reservations customers express, and should help accelerate the adoption of Linux on z.

It’s about time. Linux has been available on the z for 20 years, yet only with the introduction of IBM LinuxONE a couple of years ago did IBM get serious about Linux on z.

And it didn’t stop there. IBM also ported the Go programming language to LinuxONE. Go was developed by Google and is designed for building simple, reliable, and efficient software, making it easier for developers to combine the software tools they know with the speed, security, and scale offered by LinuxONE. As expected, IBM has contributed code to the Go community.

Then IBM brought Apple’s Swift programming to the party, first through the IBM Watson iOS SDK, which gives developers a Swift API to simplify integration with many of the Watson Developer Cloud services, all of which are available today and can be integrated with just a few lines of code. Soon after Apple introduced Swift as the new language for OS X and iOS application development, IBM began partnering with Apple to bring the power of Swift open source programming to the z. This was closely tied to Canonical’s Ubuntu port to the z, which has already been released.

With SUSE Linux Enterprise Server for x86_64, IBM Power Systems, and IBM System z, SLE 12 has boosted its versatility, able to deliver business-critical IT services in a variety of physical, virtual, and cloud environments. New features like full system rollback, live kernel patching, and software modules increase data center uptime, improve operational efficiency, and accelerate the adoption of open source innovation. SLE 12 further builds on SUSE’s leadership in Linux Containers technology and adds the Docker framework, which is now included as an integral part of the operating system.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Meet IBM Q System One

February 1, 2019

A couple of weeks ago, IBM slipped in a new quantum machine at CES. The new machine, dubbed IBM Q System One, is designed for both scientific and commercial computing. IBM described it as the first integrated universal approximate quantum computing system.

Courtesy of IBM

Approximate refers to the short coherence time of the qubits, explains Michael Houston, manager, Analyst Relations. Or, to put it another way: how long the qubits remain stable enough to run reliable and repeatable calculations. IBM Q systems report an industry-best average of 100 microseconds. That’s not enough time for a round of golf, but probably long enough to start running some serious quantum analytics.
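To put 100 microseconds in perspective, here is a rough back-of-envelope calculation. The gate time used below is an assumed, typical figure for superconducting qubits, not a number IBM quotes:

```python
# Back-of-envelope: how many sequential gate operations fit inside one
# coherence window? The gate duration is an assumed, illustrative value.
coherence_time_us = 100      # IBM Q's reported average coherence time
gate_time_ns = 100           # assumed typical two-qubit gate duration

ops_per_window = coherence_time_us * 1_000 / gate_time_ns
print(f"~{ops_per_window:.0f} sequential gates per coherence window")   # ~1000
```

On the order of a thousand operations before the qubits drift out of coherence, which is why circuit depth, and not just qubit count, matters.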

As described by IBM, the new machine family, the Q systems, is designed to one day tackle problems currently seen as too complex or too exponential in scale for classical (conventional) systems to handle. Such Q systems may use quantum computing to find new ways to model financial data, isolate key global risk factors to make better investments, find optimal paths across global systems for ultra-efficient logistics, or optimize fleet operations for improved deliveries.

The design of IBM Q System One includes a 9x9x9-foot cube case constructed of half-inch-thick borosilicate glass to form a sealed, airtight enclosure that opens effortlessly using roto-translation, a motor-driven rotation around two displaced axes engineered to simplify the system’s maintenance and upgrade process while minimizing downtime. Overall, the entire system was intended to enable the most stable qubits, which allows the machine to deliver reliable commercial use.

A series of independent aluminum and steel frames not only unify, but also decouple the system’s cryostat, control electronics, and exterior casing, helping to avoid potential vibration interference that leads to phase jitter and qubit decoherence.

The object of all of this, Houston explains, is to deliver a sophisticated, modular, and compact design optimized for stability, reliability, and continuous commercial use. For the first time ever, IBM Q System One enables universal approximate superconducting quantum computers to operate beyond the confines of the research lab.

In effect, think of the Q System One as bringing the quantum machine to the data center, starting with a design that squeezes all the quantum computing electronics, controllers, and other components into that 9x9x9-foot, half-inch-thick glass cube, creating a sealed, airtight enclosure that allows the system to cool the qubits to low Kelvin temperatures and keep them cold and undisturbed by interference long enough to perform meaningful work. All of the Q System One’s components and control mechanisms are intended to keep the qubits at 10 mK (-442°F) to operate.

This machine, notes IBM, should look familiar to conventional computer data center managers. Maybe, if you think a 9x9x9-foot, half-inch-thick borosilicate glass cube is a regular feature of any data center you have worked in.

In effect, IBM is applying the same approach to quantum computing that it has followed for decades with its conventional computers–providing everything you need to get it operating in your data center. Just plan to bring in some trained quantum technicians, specialists, and, don’t forget, a handful of people who can program such a machine.

Other than that, the IBM Q System One consists of a number of custom components that work together (remember, they said integrated). Specifically, the new machine will include:

  • Quantum hardware designed to be stable and auto-calibrated to give repeatable and predictable high-quality qubits;
  • Cryogenic engineering that delivers a continuous cold and isolated quantum environment;
  • High precision electronics in compact form factors to tightly control large numbers of qubits;
  • Quantum firmware to manage the system health and enable system upgrades without downtime for users

Are you up for it? Maybe you’d prefer to try before you buy. The IBM Q Quantum Computation Center, opening later this year in Poughkeepsie, extends the IBM Q Network to commercial quantum computing programs.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Factsheets for AI

December 21, 2018

Depending on when you check in on the IBM website, the primary technology trend for 2019 is quantum computing, or hybrid clouds, or blockchain, or artificial intelligence, or any of a handful of others. Maybe IBM does have enough talented people, resources, and time to do it all well now. But somehow DancingDinosaur is dubious.

There is an old tech industry saying: you can have it right, fast, cheap—pick two. When it comes to AI, depending on your choices or patience, you could win an attractive share of the projected $83 billion AI industry by 2021 or a share of the estimated $200 billion AI market by 2025, according to VentureBeat.

IBM sees the technology industry at a pivotal moment in the path to mass adoption of artificial intelligence (AI). Google subsidiary DeepMind is leveraging AI to determine how to refer optometry patients. Haven Life is using AI to extend life insurance policies to people who wouldn’t traditionally be eligible, such as people with chronic illnesses and non-U.S. citizens. And Google self-driving car spinoff Waymo is tapping it to provide mobility to elderly and disabled people.

But despite the good AI is clearly capable of doing, doubts abound over its safety, transparency, and bias. IBM believes part of the problem is a lack of standard practices.

As a result, there’s no consistent, agreed-upon way AI services should be created, tested, trained, deployed, and evaluated, observes Aleksandra Mojsilovic, head of AI foundations at IBM Research and co-director of the AI Science for Social Good program. To clear up the ambiguity surrounding AI, Mojsilovic and colleagues propose voluntary factsheets, more formally called a Supplier’s Declaration of Conformity (DoC). The goal: increasing the transparency of particular AI services and engendering trust in them.

Such factsheets alone could provide a competitive advantage for AI offerings in the marketplace. They could also provide explainability around susceptibility to adversarial attacks, an issue that must be addressed, along with fairness and robustness, for AI services to be trusted, Mojsilovic continued. Factsheets take away the black-box perception of AI and render an AI system understandable by both researchers and developers.

Several core pillars form the basis for trust in AI systems. The first three are fairness, robustness, and explainability; late in her piece, Mojsilovic introduces a fourth, lineage, which concerns an AI system’s history. Factsheets would answer questions ranging from system operation and training data to underlying algorithms, test setups and results, performance benchmarks, fairness and robustness checks, intended uses, maintenance, and retraining. More granular topics might include the governance strategies used to track the AI service’s data workflow, the methodologies used in testing, and bias mitigations performed on the dataset.
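To make that concrete, a factsheet could be captured as simple structured data. Everything below is hypothetical, invented by this blog for illustration; it is not IBM’s proposed schema or field names.

```python
# A hypothetical sketch of the kinds of entries an AI service factsheet
# (Supplier's Declaration of Conformity) might record. All field names and
# values are invented for illustration.
factsheet = {
    "service": "loan-approval-scorer",
    "intended_use": "pre-screening consumer credit applications",
    "training_data": "anonymized 2015-2018 application records",
    "algorithm": "gradient-boosted decision trees",
    "test_setup": "20% holdout set, stratified by region",
    "performance_benchmarks": {"AUC": 0.87},
    "fairness_checks": ["disparate impact ratio across protected groups"],
    "robustness_checks": ["perturbation tests on numeric features"],
    "lineage": "model v3, retrained quarterly with drift monitoring",
}

for field, value in factsheet.items():
    print(f"{field}: {value}")
```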

For natural language processing algorithms specifically, the researchers propose data statements that would show how an algorithm might be generalized, how it might be deployed, and what biases it might contain.

Natural language processing systems aren’t as fraught with controversy as, say, facial recognition, but they’ve come under fire for their susceptibility to bias.  IBM, Microsoft, Accenture, Facebook, and others are actively working on automated tools that detect and minimize bias, and companies like Speechmatics and Nuance have developed solutions specifically aimed at minimizing the so-called accent gap — the tendency of voice recognition models to skew toward speakers from certain regions. But in Mojsilovic’s view, documents detailing the ins and outs of systems—factsheets–would go a long way to restoring the public’s faith in AI.

Fairness, safety, reliability, explainability, robustness, accountability: all agree that they are critical. Yet, to achieve trust in AI, making progress on these issues alone will not be enough; it must be accompanied by the ability to measure and communicate the performance levels of a system on each of these dimensions, she wrote. Understanding and evaluating AI systems is an issue of utmost importance for the AI community, an issue IBM believes the industry, academia, and AI practitioners should be working on together.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

