Posts Tagged ‘Java’

Is Your Enterprise Ready for AI?

May 11, 2018

According to IBM's gospel of AI, "we are in the midst of a global transformation and it is touching every aspect of our world, our lives, and our businesses." IBM has been preaching its gospel of AI for the past year or longer, but most of its clients haven't jumped fully aboard. "For most of our clients, AI will be a journey. This is demonstrated by the fact that most organizations are still in the early phases of AI adoption."

AC922 with NVIDIA Tesla V100 and Enhanced NVLink GPUs

The company's latest announcements earlier this week focus POWER9 squarely on AI. Said Tim Burke, Engineering Vice President, Cloud and Operating System Infrastructure, at Red Hat: "POWER9-based servers, running Red Hat's leading open technologies offer a more stable and performance-optimized foundation for machine learning and AI frameworks, which is required for production deployments… including PowerAI, IBM's software platform for deep learning with IBM Power Systems that includes popular frameworks like Tensorflow and Caffe, as the first commercially supported AI software offering for [the Red Hat] platform."

IBM insists this is not just about POWER9, and it may have a point; GPUs and other assist processors are taking on more importance as companies try to emulate the hyperscalers, driving server efficiency while boosting power as the gains from Moore's Law decline. "GPUs are at the foundation of major advances in AI and deep learning around the world," said Paresh Kharya, group product marketing manager of Accelerated Computing at NVIDIA. [Through] "the tight integration of IBM POWER9 processors and NVIDIA V100 GPUs made possible by NVIDIA NVLink, enterprises can experience incredible increases in performance for compute-intensive workloads."

To create an AI-optimized infrastructure, IBM announced the latest additions to its POWER9 lineup, the IBM Power Systems LC922 and LC921. IBM characterizes them as balanced servers offering both compute capability and up to 120 terabytes of data storage, with NVMe for rapid access to vast amounts of data. IBM included HDD in the announcement, but any serious AI workload will choke without ample SSD.

The announcements also bring an updated version of the AC922 server, which now features the recently announced 32GB NVIDIA V100 GPUs and larger system memory, enabling bigger deep learning models to improve the accuracy of AI workloads.

IBM characterizes the new LC922 and LC921 servers, with their POWER9 processors, as data-intensive, AI-intensive systems. The AC922 arrived last fall. It was designed for what IBM calls the post-CPU era. The AC922 was the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink, and OpenCAPI, three interface accelerators that together can move data 9.5x faster than PCIe 3.0-based x86 systems. The AC922 was designed to drive demonstrable performance improvements across popular AI frameworks such as TensorFlow and Caffe.

In the post-CPU era, where Moore's Law no longer rules, you need to pay as much attention to the GPU and other assist processors as to the CPU itself, maybe even more. For example, the coherence and high speed of NVLink enable hash tables, critical for fast analytics, on GPUs. As IBM noted when introducing the new machines this week, hash tables are a fundamental data structure for analytics over large datasets, and they need large memory: small GPU memory limits hash table size and analytic performance. CPU-GPU NVLink2 solves two key problems. Large memory and high speed enable storing the full hash table in CPU memory while transferring pieces to the GPU for fast operations, and coherence ensures that new inserts in CPU memory get updated in GPU memory. Otherwise, modifications to data in CPU memory would not be reflected in GPU memory.
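To make the hash table point concrete, here is a minimal, pure-Python sketch of a hash join, the build-and-probe pattern that NVLink-coherent memory is meant to accelerate. It illustrates only the data structure, not the CPU-GPU plumbing, and the tables are invented examples.

```python
# Minimal hash-join sketch: build a hash table on the smaller table,
# then probe it with each row of the larger one. On a machine like the
# AC922, the build side can sit in large CPU memory while the GPU
# probes batches over coherent NVLink; this version shows only the
# data structure, not the hardware.

def hash_join(build_rows, probe_rows, key):
    table = {}                        # build phase: key -> matching rows
    for row in build_rows:
        table.setdefault(row[key], []).append(row)
    for row in probe_rows:            # probe phase: emit joined records
        for match in table.get(row[key], []):
            yield {**match, **row}

customers = [{"cust_id": 1, "name": "Acme"}, {"cust_id": 2, "name": "Globex"}]
orders = [{"cust_id": 1, "total": 250.0}, {"cust_id": 1, "total": 75.5}]

for joined in hash_join(customers, orders, "cust_id"):
    print(joined)                     # two joined rows for customer 1
```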

IBM has started referring to the LC922 and LC921 as big data crushers. The LC921 brings two POWER9 sockets in a 1U form factor; for I/O it comes with both PCIe 4.0 and CAPI 2.0; and it offers up to 40 cores (160 threads) and 2TB RAM, ideal for environments requiring dense computing.

The LC922 is considerably bigger. It offers balanced compute capabilities delivered with the POWER9 processor and up to 120TB of storage capacity, again with advanced I/O through PCIe 4.0/CAPI 2.0, plus up to 44 cores (176 threads) and 2TB RAM. The list price, notes IBM, is roughly 30 percent less.

If your organization is not thinking about AI, it is probably in the minority, according to IDC:

  • 31 percent of organizations are in [AI] discovery/evaluation
  • 22 percent of organizations plan to implement AI in the next 1-2 years
  • 22 percent of organizations are running AI trials
  • 4 percent of organizations have already deployed AI

Underpinning both servers is the IBM POWER9 CPU. The POWER9 enjoys nearly 5.6x greater CPU-to-GPU bandwidth than x86, which can improve deep learning training times by nearly 4x. Even today companies struggle to cobble together the different pieces and make them work. IBM learned that lesson and now offers a unified AI infrastructure in PowerAI and POWER9 that you can use today.
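For a sense of what actually runs on such a system, here is a minimal TensorFlow/Keras training sketch of the kind PowerAI packages. The synthetic data and tiny model are illustrative stand-ins; on an AC922 the framework dispatches the heavy tensor math to the attached V100 GPUs on its own, which is where the CPU-GPU NVLink bandwidth pays off.

```python
# Minimal TensorFlow/Keras training sketch. The code is hardware-
# agnostic: TensorFlow will use whatever GPUs the system exposes.
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 10 features, binary label
x_train = np.random.rand(1000, 10).astype("float32")
y_train = (x_train.sum(axis=1) > 5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32)
```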

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Grows Quantum Ecosystem

April 27, 2018

It is good that you aren't dying to deploy quantum computing soon, because IBM readily admits that it is not ready for enterprise production now, nor will it be in several weeks or even several months. IBM, however, continues to assemble the building blocks you will eventually need when you finally feel the urge to deploy a quantum application that can address a real problem you need to resolve.

cryostat with prototype of quantum processor

IBM is surprisingly frank about the state of quantum today. There is nothing you can do at this point that you can't simulate on a conventional, or classical, computer system. This situation is unlikely to change anytime soon. For years to come, we can expect hybrid quantum and conventional compute environments that will somehow work together to solve very demanding problems, although few are sure exactly what those problems will be when the time comes. Still, at Think earlier this year IBM predicted quantum computing will be mainstream within five years.

Of course, IBM has some ideas of where the likely problems to solve will be found:

  • Chemistry—material design, oil and gas, drug discovery
  • Artificial Intelligence—classification, machine learning, linear algebra
  • Financial Services—portfolio optimization, scenario analysis, pricing

It has been some time since the computer systems industry had to build a radically different kind of compute discipline from scratch. Following the model of the current IT discipline, IBM began by launching the IBM Q Network, a collaboration with leading Fortune 500 companies and research institutions with a shared mission. This will form the foundation of a quantum ecosystem. The Q Network will comprise hubs, which are regional centers of quantum computing R&D and ecosystem; partners, who are pioneers of quantum computing in a specific industry or academic field; and most recently, startups, which are expected to rapidly advance early applications.

The most important of these for driving the growth of quantum are the startups. To date, IBM reports eight startups and is looking for more. Early startups include QC Ware; Q-Ctrl; Cambridge Quantum Computing (UK), which is working on a compiler for quantum computing; 1Qbit, based in Canada; Zapata Computing, located at Harvard; Strangeworks, an Austin-based tool developer; QxBranch, which is trying to apply classical computing techniques to quantum; and Quantum Benchmark.

Startups get membership in the Q Network, which lets them run experiments and algorithms on IBM quantum computers via cloud-based access; gives them deeper access to APIs and advanced quantum software tools, libraries, and applications; and provides the opportunity to collaborate with IBM researchers and technical SMEs on potential applications, as well as with other IBM Q Network organizations. If it hasn't become obvious yet, the payoff will come from developing applications that solve recognizable problems. Also check out QISKit, a software development kit for quantum applications available through GitHub.
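For the curious, here is what a first QISKit experiment might look like: a two-qubit Bell state run on a local simulator. QISKit's Python API has evolved across versions, so treat the exact imports and calls as illustrative rather than canonical.

```python
# Minimal QISKit sketch: prepare and measure a two-qubit Bell state on
# a local simulator. Exact API details vary by QISKit version.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits into classical bits

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # expect roughly half '00' and half '11'
```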

The last problem to solve is acquiring quantum talent. How many quantum scientists, engineers, or programmers do you have? Do you even know where to find them? The young people excited about computing today are primarily interested in technologies for building sexy apps using Node.js, Python, Jupyter, and the like.

To find the people you need to build quantum computing systems you will need to scour the proverbial halls of MIT, Caltech, and other top schools that produce physicists and quantum scientists. A scan of salaries for these people reveals a range of $135,000 to $160,000, if they are available at all.

The best guidance from IBM is to start small. The industry is still at the building-block stage, not ready to throw specific applications at real problems. In the meantime, sign up for IBM's Q Network and get some of your people engaged in the opportunities to get educated in quantum.

When DancingDinosaur first heard about quantum physics he was in a high school science class decades ago. It was intriguing but he never expected to even be alive to see quantum physics becoming real, but now it is. And he’s still here. Not quite ready to sign up for QISKit and take a small qubit machine for a spin in the cloud, but who knows…

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IT Security Enters the Cooperative Era

April 20, 2018

Ever hear of the Cybersecurity Tech Accord? It was announced on Tuesday. Microsoft, Facebook, and 32 other companies signed aboard. Absent from the signing were Apple, Alphabet, and Amazon. Also missing was IBM. Actually, IBM was already at the RSA Conference making its own security announcement: an effort to help cybersecurity teams collaborate by sharing information among themselves, just as the attackers they defend against already do via the dark web.

IBM security control center

Tuesday’s Cybersecurity Tech Accord amounted to a promise to work together on cybersecurity issues. Specifically, the companies promise to work against state sponsored cyberattacks. The companies also agreed to collaborate on stronger defense systems and protect against the tampering of their products, according to published reports.

Giving importance to the accord is the financial impact of cybersecurity attacks on businesses and organizations, projected to reach $8 trillion by 2022. Other technology leaders, including Cisco, HP, Nokia, and Oracle, also joined the accord.

A few highly visible and costly attacks were enough to galvanize the IT leaders. In May 2017, WannaCry ransomware targeted more than 300,000 computers in 150 countries, including 48 UK medical facilities. In a bid to help, Microsoft issued patches for old Windows systems, even though it no longer supports them, because so many firms were running old software vulnerable to the attack, according to published reports. The White House attributed the attack to North Korea.

In June 2017, NotPetya ransomware, which initially targeted computers in Ukraine before spreading, infected computers, locked down their hard drives, and demanded a $300 ransom to be paid in bitcoin. Even victims that paid weren't able to recover their files, according to reports. The British government said Russia was behind the global cyberattack.

The Cybersecurity Tech Accord is modeled after a digital Geneva Convention, with a long-term goal of updating international law to protect people in times of peace from malicious cyberattacks, according to Microsoft president Brad Smith.

GitHub's chief strategy officer Julio Avalos wrote in a separate blog post that "protecting the Internet is becoming more urgent every day as more fundamental vulnerabilities in infrastructure are discovered—and in some cases used by government organizations for cyberattacks that threaten to make the Internet a theater of war." He continued: "Reaching industry-wide agreement on security principles and collaborating with global technology companies is a crucial step toward securing our future."

Added Sridhar Muppidi, Co-CTO of IBM Security, in a recently published interview about the company's efforts to help cybersecurity teams collaborate like the attackers they're working against: "The good guys have to collaborate with each other so that we can provide better and more secure and robust systems. So we talk about how we share the good intelligence. We also talk about sharing good practices, so that we can then build more robust systems, which are a lot more secure."

"It's the same concept as the open source model, where you provide some level of intellectual capital with an opportunity to bring a bigger community together so that we can take the problem and solve it better and faster. And learn from each other's mistakes and each other's advancements so that it can help, individually, each of our offerings. So, at the end of the day, for a topic like AI, the algorithm is going to be an algorithm. It's the data, it's the models, it's the set of things which go around it which make it very robust and reliable," Muppidi continued.
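The kind of sharing Muppidi describes has a concrete, standards-based form: defenders commonly exchange indicators of compromise as STIX 2 JSON objects, typically published over TAXII. The hedged Python sketch below hand-builds one such indicator; it is not IBM's X-Force Exchange API, just the general shape of the exchange, and every value in it is a placeholder.

```python
# Hand-built STIX 2.0-style indicator of the sort defenders share via
# TAXII servers or commercial exchanges. The IP address is from the
# RFC 5737 documentation range, i.e., deliberately fake.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

indicator = {
    "type": "indicator",
    "id": "indicator--" + str(uuid.uuid4()),
    "created": now,
    "modified": now,
    "labels": ["malicious-activity"],
    "name": "Known ransomware command-and-control server",
    "pattern": "[ipv4-addr:value = '203.0.113.42']",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```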

IBM appears to be practicing what it preaches by facilitating the collaboration of people and machines in defense of cyberspace. Last year at RSA, IBM introduced Watson to the cybersecurity industry to augment the skills of analysts in their security investigations. This year's investments in artificial intelligence (AI), according to IBM, were made with a larger vision in mind: a move toward "automation of response" in cybersecurity.

At RSA, IBM also announced the next-generation IBM Resilient Incident Response Platform (IRP) with Intelligent Orchestration. The new platform promises to accelerate and sharpen incident response by seamlessly combining incident case management, orchestration, automation, AI, and deep two-way partner integrations into a single platform.

Maybe DancingDinosaur, which has spent decades acting as an IT-organization-of-one, can finally turn over some of the security chores to an intelligent system, which hopefully will do it better and faster.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Introduces Skinny Z Systems

April 13, 2018

Early this week IBM unveiled two miniaturized mainframe models, dubbed skinny mainframes, it said are easier to deploy in a public or private cloud facility than their more traditional, much bulkier predecessors. Relying on all their design tricks, IBM engineers managed to pack each machine into a standard 19-inch rack with space to spare, which can be used for additional components.

Z14 LinuxONE Rockhopper II, 19-inch rack

The first new mainframe introduced this week, also in a 19-inch rack, is the Z14 model ZR1. You can expect subsequent models to increment the model numbering.  The second new machine is the LinuxONE Rockhopper II, also in a 19-inch rack.

In the past, about a year after IBM introduced a new mainframe, say the z10, it introduced what it called a Business Class (BC) version. The BC machines were less richly configured and less expandable but delivered comparable performance with lower capacity and a distinctly lower price.

In a Q&A analyst session IBM insisted the new machines would be priced noticeably lower, as were the BC-class machines of the past, though these are not comparable to the old BC machines. Instead, they are intended to attract a new group of users who face new challenges. As such, they come cloud-ready. The 19-inch industry-standard, single-frame design is intended for easy placement into existing cloud data centers, alongside other components, and into private cloud environments.

The company, said Ross Mauri, General Manager, IBM Z, is targeting the new machines toward clients seeking robust security with pervasive encryption, cloud capabilities, and powerful analytics through machine learning. Not only, he continued, does this increase security and capability in on-premises and hybrid cloud environments for clients, but IBM will also deploy the new systems in IBM public cloud data centers as the company focuses on enhancing security and performance for increasingly intensive data loads.

In terms of security, the new machines will be hard to beat. IBM reports the new machines are capable of processing over 850 million fully encrypted transactions a day on a single system. Along the same lines, the new mainframes do not require special space, cooling, or energy, yet they still provide IBM's pervasive encryption and Secure Service Container technology, which secures data serving at massive scale.

Mauri continued: The new IBM Z and IBM LinuxONE offerings also bring significant increases in capacity, performance, memory, and cache across nearly all aspects of the system. A complete system redesign delivers this capacity growth in 40 percent less space and is standardized to be deployed in any data center. The z14 ZR1 can be the foundation for an IBM Cloud Private solution, creating a data-center-in-a-box by co-locating storage, networking, and other elements in the same physical frame as the mainframe server. This is where you can utilize that extra space included in the 19-inch rack.

The LinuxONE Rockhopper II can also accommodate a Docker-certified infrastructure for Docker EE with integrated management, scale-tested up to 330,000 Docker containers, allowing developers to build high-performance applications and embrace a microservices architecture.

The 19-inch rack, however, comes with tradeoffs, notes Timothy Green, writing in The Motley Fool. Yes, it takes up 40% less floor space than the full-size Z14, but it accommodates only 30 processor cores, far below the 170 cores supported by a full-size Z14, which fills a 24-inch rack. Both new systems can handle around 850 million fully encrypted transactions per day, a fraction of the Z14's full capacity. But not every company needs the full performance and capacity of the traditional mainframe. For companies that don't need the full power of a Z14 mainframe, notes Green, or that have previously balked at the high price or massive footprint of full mainframe systems, these smaller mainframes may be just what it takes to bring them to the Z. Now IBM needs to come through with the advantageous pricing it insisted it would offer.

The new skinny mainframes are just the latest in IBM's continuing efforts to keep the mainframe relevant. It began over a decade ago with porting Linux to the mainframe. It continued with Hadoop, blockchain, and containers. Machine learning and deep learning are coming right along. The only question for DancingDinosaur is when IBM engineers will figure out how to put quantum computing on the Z and squeeze it into customers' public or private cloud environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Mainframe ISVs Advance the Mainframe While IBM Focuses on Think

March 30, 2018

Last week IBM reveled in the attention of upwards of 30,000 visitors to its Think conference, reportedly a record for an IBM conference. Meanwhile Syncsort and Compuware stayed home pushing new mainframe initiatives. Specifically, Syncsort introduced innovations to deliver mainframe log and application data in real time directly to Elastic for deeper next-generation analytics through tools like Splunk, Hadoop, and the Elastic Stack.

Syncsort Ironstream for next-gen analytics

Compuware reported that the percentage of organizations running at least half their business-critical applications on the mainframe is expected to increase next year, although the loss of skilled mainframe staff, and the failure to subsequently fill those positions, pose significant threats to application quality, velocity, and efficiency. Compuware has been taking the lead in modernizing the mainframe developer experience to make it compatible with the familiar x86 experience.

According to David Hodgson, Syncsort’s chief product officer, many organizations are using Elastic’s Kibana to visualize Elasticsearch data and navigate the Elastic Stack. These organizations, like others, are turning to tools like Hadoop and Splunk to get a 360-degree view of their mainframe data enterprise-wide. “In keeping with our proven track record of enabling our customers to quickly extract value from their critical data anytime, anywhere, we are empowering enterprises to make better decisions by making mission-critical mainframe data available in another popular analytics platform,” he adds.
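To picture the pipeline Hodgson describes, here is a hedged sketch of landing a single mainframe syslog record in Elasticsearch for Kibana to chart. It is not Ironstream's actual mechanism, which streams continuously off z/OS; this stand-in just shows the shape of the flow using the standard Elasticsearch Python client (v8-style signatures and a local test cluster assumed).

```python
# Index one mainframe syslog record so Kibana can visualize it.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed local test cluster

record = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "sysplex": "PRODPLEX",                    # invented system name
    "source": "SYSLOG",
    "message_id": "IEF450I",                  # z/OS job-abend message ID
    "text": "JOB PAYROLL1 ABENDED, SYSTEM=0C4",
}

es.index(index="mainframe-syslog", document=record)
print("indexed one record")
```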

For cost management, Syncsort now offers Ironstream with the flexibility of MSU-based (capacity) or Ingestion-based pricing.

Compuware took a more global view of the mainframe. The mainframe, the company notes, is becoming more important to large enterprises, with the percentage of organizations running at least half their business-critical applications on the platform expected to increase next year. However, the loss of skilled mainframe staff, and the failure to subsequently fill those positions, pose significant threats to application quality, velocity, and efficiency.

These are among the findings of research and analysis conducted by Forrester Consulting on behalf of Compuware.  According to the study, “As mainframe workload increases—driven by modern analytics, blockchain and more mobile activity hitting the platform—customer-obsessed companies should seek to modernize application delivery and remove roadblocks to innovation.”

The survey of mainframe decision-makers and developers in the US and Europe also revealed the mainframe's growing importance: 64 percent of enterprises will run more than half of their critical applications on the platform within the next year, up from 57 percent this year. And just to ratchet up the pressure a few notches, 72 percent of customer-facing applications at these enterprises are completely or very reliant on mainframe processing.

That means the loss of essential mainframe staff hurts, putting critical business processes at risk. Overall, enterprises reported losing an average of 23 percent of specialized mainframe staff in the last five years while 63 percent of those positions have not been filled.

There is more to the study, but these findings alone suggest that mainframe investments, culture, and management practices need to evolve fast in light of the changing market realities. As Forrester puts it: “IT decision makers cannot afford to treat their mainframe applications as static environments bound by long release cycles, nor can they fail to respond to their critical dependence with a retiring workforce. Instead, firms must implement the modern tools necessary to accelerate not only the quality, but the speed and efficiency of their mainframe, as well as draw [new] people to work on the platform.”

Nobody has 10 years or even three years to cultivate a new mainframer. You need to attract and cultivate talented x86 or ARM people now, equip them with the sexiest, most efficient tools, and get them working on the most urgent items at the top of your backlog.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


IBM Boosts AI at Think

March 23, 2018

Enterprise system vendors are racing to AI along with all the others. Writes Jeffrey Burt, an analyst at The Next Platform: "There continues to be an ongoing push among tech vendors to bring artificial intelligence (AI) and its various components – including deep learning and machine learning – to the enterprise. The technologies are being rapidly adopted by hyperscalers and in the HPC space, and enterprises stand to reap significant benefits by also embracing them." Exactly what those benefits are still needs to be specifically articulated and, if possible, quantified.

IBM Think Conference this week

For enterprise data centers running the Z or Power Systems, the most obvious quick payoff will be faster, deeper, more insightful data analytics along with more targeted guidance on actions to take in response. After that there remains the possibility of more automation of operations, but the Z already is pretty thoroughly automated and optimized. Just give it your operational and performance parameters and it will handle the rest. In addition, vendors like Compuware and Syncsort have been making the mainframe more graphical and intuitive. The days of needing deep mainframe experience or expertise have passed. Even x86 admins can quickly pick up a modern mainframe today.

A late 2016 study by Accenture modeled the impact of AI for 12 developed economies. The research compared the size of each country's economy in 2035 in a baseline scenario, which shows expected economic growth under current assumptions, and an AI scenario reflecting expected growth once the impact of AI has been absorbed into the economy. AI was found to yield the highest economic benefits for the United States, increasing its annual growth rate from 2.6 percent to 4.6 percent by 2035, translating to an additional USD $8.3 trillion in gross value added (GVA). In the United Kingdom, AI could add an additional USD $814 billion to the economy by 2035, increasing the annual growth rate of GVA from 2.5 to 3.9 percent. Japan has the potential to more than triple its annual rate of GVA growth by 2035, and Finland, Sweden, the Netherlands, Germany, and Austria could see their growth rates double. You can still find the study here.
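To see why a two-point rate difference matters so much, a quick compounding calculation helps. This is illustrative arithmetic only, not Accenture's model; the rates come from the study's US scenario, and the 17-year horizon (2018-2035) is an assumption.

```python
# Compound the two US growth rates over an assumed 17-year horizon.
baseline_rate, ai_rate, years = 0.026, 0.046, 17

baseline_multiple = (1 + baseline_rate) ** years
ai_multiple = (1 + ai_rate) ** years

print(f"Baseline economy multiple: {baseline_multiple:.2f}x")   # ~1.55x
print(f"AI-scenario multiple:      {ai_multiple:.2f}x")         # ~2.15x
print(f"Relative uplift:           {ai_multiple / baseline_multiple - 1:.1%}")
```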

Also coming out of Think this week was the announcement of an expanded Apple-IBM partnership around AI and machine learning (ML). The resulting AI service is intended to let corporate developers build apps themselves. The new service, Watson Services for Core ML, links Apple's Core ML developer tools, unveiled last year, with IBM's Watson data-crunching service. Core ML helps coders build machine learning-powered apps that more efficiently perform calculations on smartphones instead of processing those calculations in external data centers. It's similar to other smartphone-based machine learning tools like Google's TensorFlow Lite.

The goal is to help enterprises reimagine the way they work through a combination of Core ML and Watson Services to stimulate the next generation of intelligent mobile enterprise apps. Take the example of field technicians who inspect power lines or machinery. The new AI field app could feed images of electrical equipment to Watson to train it to recognize the machinery. The result would enable field technicians to scan the electrical equipment they are inspecting on their iPhones or iPads and automatically detect any anomalies. The app would eliminate the need to send that data to IBM’s cloud computing data centers for processing, thus reducing the amount of time it takes to detect equipment issues to near real-time.
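As a rough illustration of the cloud half of that scenario, the hedged Python sketch below classifies an equipment photo with Watson Visual Recognition using the 2018-era watson_developer_cloud SDK. The classifier ID, file name, and credentials are hypothetical, the method signatures should be treated as illustrative, and in the Core ML version of the story the trained model would run on the device instead.

```python
# Hedged sketch of the field-technician scenario: send a photo of
# electrical equipment to a custom Watson Visual Recognition model.
import json
from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    "2018-03-19",             # API version date (assumed)
    api_key="YOUR_API_KEY",   # placeholder credential
)

with open("transformer_photo.jpg", "rb") as image:   # hypothetical file
    result = visual_recognition.classify(
        images_file=image,
        classifier_ids=["equipment_anomalies_123"],  # hypothetical model
    )

print(json.dumps(result, indent=2))                  # classes and scores
```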

Apple's Core ML toolkit could already connect with competing cloud-based machine learning services from Google, Amazon, and Microsoft; the new developer tools aim to link Core ML with Watson just as easily. For example, Coca-Cola is already testing Watson Services for Core ML to see if it helps its field technicians better inspect vending machines. If you want to try it in your shop, the service is free for developers to use now. Eventually, developers will have to pay.

Such new roll-your-own AI services represent a shift for IBM. Previously you had to work with IBM consulting teams. Now the new Watson developer services are intended to be bought in an "accessible and bite size" way, according to IBM, and sold in a "pay as you go" model without consultants. In a related announcement at Think, IBM said it is contributing the core of Watson Studio's Deep Learning Service as an open source project called Fabric for Deep Learning. This will enable developers and data scientists to work together on furthering the democratization of deep learning.

Ultimately, the democratization of AI is the only way to go. When intelligent systems speak together and share insights, everyone's work will be faster and smarter. Yes, there will need to be ways to compensate distinctively valuable contributions, but with over two decades of open source experience, the industry should be able to figure that out pretty easily.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Jumps into the Next Gen Server Party with POWER9

February 15, 2018

IBM re-introduced its POWER9 lineup of servers this week, starting with 2-socket and 4-socket systems, with more variations coming in the months ahead as IBM, along with the rest of the IT vendor community, grapples with how to address changing data center needs. The first, the AC922, arrived last fall; DancingDinosaur covered it here. More, the S922/S914/S924 and H922/H924/L922, are promised later this quarter.

The workloads organizations are running these days are changing, often dramatically and quickly. One processor, no matter how capable, flexible, or efficient, is unlikely to do the job going forward. It will take an entire family of chips. That's as true for Intel and AMD and the other chip players as it is for IBM.

In some ways, IBM's challenge is even quirkier. Its chips will need to support not only Linux but also IBMi and AIX. IBM simply cannot abandon its IBMi and AIX customer bases. So chips supporting IBMi and AIX are being built into the POWER9 family.

For IBMi the company is promising POWER9 exploitation for:

  • Expanding the securability of IBMi with TLS, secure APIs, and logs for SIEM solutions
  • Expanded install options, including an installation process using USB 3.0 media
  • Encryption and compression for cloud storage
  • Increasing the productivity of developers and administrators

This may sound trivial to those who have focused on the Linux world and work with x86 systems too, but it is not for shops still mired in productive yet aging IBMi systems.

IBM also is promising POWER9 goodies for AIX, its legacy Unix OS, including:

  • AIX Security: PowerSC and PowerSC MFA updates for malware intrusion prevention and strong authentication
  • New workload acceleration with shared memory communications over RDMA (SMC-R)
  • Improved availability: AIX Live Update enhancements; GDR 1.2; PowerHA 7.2
  • Improved cloud management: IBM Cloud PowerVC Manager for SDI; Import/Export
  • AIX 7.2 native support for POWER9 – e.g. enabling NVMe

Again, if you have been running Linux on z or LinuxONE this may sound antiquated, but AIX has not been considered state-of-the-art for years. NVMe alone gives it a big boost.

Despite all the nice things IBM is doing for IBMi and AIX, DancingDinosaur believes the company clearly is betting POWER9 will cut into Intel x86 sales. But that is not a given. Intel is rolling out its own family of advanced x86 Xeon machines under the Skylake code name. Different versions will be packaged and tuned to different workloads. They are rumored, at the fully configured high end, to be quite expensive. Just don't expect POWER9 systems to be cheap either.

And the chip market is getting more crowded. As Timothy Prickett Morgan, analyst at The Next Platform, noted, various ARM chips, especially ThunderX2 from Cavium and Centriq 2400 from Qualcomm, can boost non-x86 numbers and divert sales from IBM's POWER9 family. Also, AMD's Epyc x86 processors have a good chance of stealing some market share from Intel's Skylake. So the POWER9 will have to fight for every sale IBM wants.

Morgan went on: IBM differentiated the hardware and the pricing with its NVLink versions, depending on the workload and the competition, with its most aggressive pricing and a leaner and cheaper microcode and hypervisor stack reserved for the Linux workloads the company is chasing. IBM very much wants to sell its Power-Linux combo against Intel's Xeon-Linux and also keep AMD's Epyc-Linux at bay. Where the POWER8 chip had the advantage over Intel's Haswell and Broadwell Xeon E5 processors in memory capacity and memory bandwidth per socket, and could meet or beat the Xeons on performance for some workloads, that advantage is not yet apparent with the POWER9.

With the POWER9, however, IBM will likely charge a little less for companies buying its Linux-only variants, observes Morgan, effectively enabling IBM to win Linux deals, particularly where data analytics and open source databases drive the customer’s use case. Similarly, some traditional simulation and modeling workloads in the HPC and machine learning areas are ripe for POWER9.

POWER9 is not one chip. Packed into it are next-generation NVIDIA NVLink and OpenCAPI to provide significantly faster performance for attached GPUs. The PCI-Express 4.0 interconnect will be twice the speed of PCI-Express 3.0. The open POWER9 architecture also allows companies to mix a wide range of accelerators to meet various needs. Meanwhile, OpenCAPI can unlock coherent FPGAs to support varied accelerated storage, compute, and networking workloads. IBM also is counting on the 300+ members of the OpenPOWER Foundation and OpenCAPI Consortium to launch innovations for POWER9. Much is happening; stay tuned to DancingDinosaur.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Boosts DevOps with ADDI on Z

February 9, 2018

IBM's Application Discovery and Delivery Intelligence (ADDI) is an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so you can quickly discover and understand interdependencies and impacts of change. You can use this intelligence to transform and renew these applications faster than ever, capitalize on time-tested mainframe code to engage the API economy, accelerate application transformation of your IBM Z hybrid cloud environment, and more.

Formerly, ADDI was known as EZSource. Back then EZSource was designed to expedite digital transformations by unlocking core business logic and apps. Specifically, it enabled the IT team to pinpoint specific mainframe code in preparation for leveraging IT through a hybrid cloud strategy. In effect, it enabled understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enabled enterprise DevOps, which was necessary to keep up with the pace of changes overtaking existing business processes.

This wasn't easy when EZSource initially arrived, and it still isn't, although the intelligence built into ADDI makes it easier now. Originally it was intended to help the mainframe data center team to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data, and schedule interdependencies
  • Aid in sizing the change effort
  • Automate documentation to improve understanding
  • Reduce the learning curve as new people are onboarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Today, IBM describes Application Discovery and Delivery Intelligence (ADDI), its follow-up to EZSource, as an analytical platform for application modernization. It uses cognitive technologies to analyze mainframe applications so your team can quickly discover and understand interdependencies and impacts of any change. In theory you should be able to use this intelligence to transform and renew these applications more efficiently and productively. In short, it should allow you to leverage time-tested mainframe code to engage with the API economy and accelerate the application transformation on your IBM Z and hybrid cloud environment.

More specifically, it promises to enable your team to analyze a broad range of IBM and non-IBM programming languages, databases, workload schedulers, and environments. Enterprise application portfolios were built over decades using an ever-evolving set of technologies, so you need a tool with broad support, such as ADDI, to truly understand the relationships between application components and accurately determine the impacts of potential changes.

In practice, it integrates with mainframe environments and tools via a z/OS agent to automatically synchronize application changes. Without keeping your application analysis synchronized with the latest changes that your developers made, according to IBM, your analysis can get out of date and you risk missing critical changes.

In addition, it provides visual analysis integrated with leading IDEs. Data center managers are petrified of changing applications that still work, fearing they will inadvertently break them or slow performance. When modifying complex applications, you need to be able to quickly navigate the dependencies between application components and drill down to see relevant details. After you understand the code, you can then effectively modify it at much lower risk. The integration between ADDI and IBM Developer for z (IDz) combines the leading mainframe IDE with the application understanding and analytics capabilities you need to safely and efficiently modify the code.

It also, IBM continues, cognitively optimizes your test suites. When you have a large code base to maintain and many tests to run, you must run the tests as efficiently as possible. ADDI correlates code coverage data and code changes with test execution records to enable you to identify which regression tests are the most critical, allowing you to optimize time and resources while reducing risk. It exposes poorly tested or complex code and empowers test teams with cognitive insights that turn awareness of trends into mitigation of future risks.
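The idea underneath that test optimization, often called test impact analysis, is easy to sketch. The Python below is a generic illustration, not ADDI's implementation: given per-test coverage data and a set of changed modules, it selects only the regression tests that touch changed code. The module and test names are invented.

```python
# Generic test-impact-analysis sketch (not IBM's implementation).

# Per-test coverage: which modules each regression test exercises
coverage = {
    "TEST_PAYROLL_01": {"PAYCALC", "TAXTABLE"},
    "TEST_PAYROLL_02": {"PAYCALC", "PRINTRPT"},
    "TEST_BILLING_01": {"INVOICE", "CUSTMAST"},
}

changed_modules = {"TAXTABLE"}   # e.g., from the latest change set

def impacted_tests(coverage, changed):
    """Return the tests whose covered modules intersect the change set."""
    return sorted(t for t, mods in coverage.items() if mods & changed)

print(impacted_tests(coverage, changed_modules))   # ['TEST_PAYROLL_01']
```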

Finally, ADDI intelligently identifies performance degradations before they hit production. It correlates runtime performance data with application discovery data and test data to quickly pinpoint performance degradation and narrow down the code artifacts to those that are relevant to the cause of bad performance. This enables early detection of performance issues and speeds resolution.

What’s the biggest benefit of ADDI on the Z? It enables your data center to play a central role in digital transformation, a phrase spoken by every c-level executive today as a holy mantra. But more importantly, it will keep your mainframe relevant.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Value and Power of LinuxONE Emperor II

February 4, 2018

There is much value in the mainframe, but it doesn't become clear until you do a full TCO analysis. When you talk to an IBMer about the cost of a mainframe the conversation immediately shifts to TCO, usually in the form of how many x86 systems you would have to deploy to handle a comparable workload with similar quality of service. The LinuxONE Emperor II, introduced in September, can win those comparisons.

LinuxONE Emperor II

Proponents of x86 boast about the low acquisition cost of x86 systems. They are right if you are only thinking about a low initial acquisition cost. But you also have to think about the cost of software for each low-cost core you purchase, and for many enterprise workloads you will need to acquire a lot of cores. This is where costs can mount quickly.

As a result, software will likely become the highest TCO item because many software products are priced per core.  Often the amount charged for cores is determined by the server’s maximum number of physical cores, regardless of whether they actually are activated. In addition, some architectures require more cores per workload. Ouch! An inexpensive device suddenly becomes a pricy machine when all those cores are tallied and priced.

Finally, x86 to IBM Z core ratios differ per workload, but x86 almost invariably requires more cores than a z-based workload; remember, any LinuxONE is a Z System. For example, the same WebSphere workload on x86 that requires 10 – 12 cores may require only one IFL on the Z. The lesson here: whether you’re talking about system software or middleware, you have to consider the impact of software on TCO.

The Emperor II delivers stunning specs. The machine can be packed with up to 170 cores, as much as 32 TB of memory, and 160 PCIe slots. And it is flexible; use this capacity, for instance, to add more system resources—cores or memory—to service an existing Linux instance or clone more Linux instances. Think of it as scale-out capabilities on steroids, taking you far beyond what you can achieve in the x86 world and do it with just a few keystrokes. As IBM puts it, you might:

  • Dynamically add cores, memory, I/O adapters, devices, and network cards without disruption.
  • Grow horizontally by adding Linux instances or grow vertically by adding resources (memory, cores, slots) to existing Linux guests.
  • Provision for peak utilization.
  • After the peak subsides automatically return unused resources to the resource pool for reallocation to another workload.

So, what does this mean to most enterprise Linux data centers? IBM often cites the example of a large insurance firm. The insurer needed fast and flexible provisioning for its database workloads, and its approach to growth was to deploy more x86 servers. Unfortunately, managing the software for all those cores had become time consuming and costly. The company deployed 32 x86 servers with 768 cores running 384 licenses of a competitor's database.

By leveraging elastic pricing on the Emperor II, it needed only one machine running 63 IFLs to serve 64 licenses of the competitor's database. It estimated savings of $15.6 million over five years just by eliminating charges for unused cores. (Full disclosure: these figures are provided by IBM; DancingDinosaur did not interview the insurer to verify this data.) Also, note there are many variables at play here around workloads and architecture, usage patterns, labor costs, and more. As IBM warns: your results may vary.
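A back-of-the-envelope calculation shows why core counts dominate the software line of a TCO analysis. The core counts below come from IBM's insurer example; the per-core annual license price is a purely hypothetical placeholder, so the output is illustrative, not IBM's $15.6 million figure.

```python
# Per-core licensing comparison over five years. Core counts are from
# the insurer example; the price per core is a hypothetical placeholder.
PER_CORE_PER_YEAR = 10_000   # hypothetical annual license cost, USD
YEARS = 5

x86_cores = 768              # 32 x86 servers in the example
ifl_cores = 63               # one Emperor II in the example

x86_cost = x86_cores * PER_CORE_PER_YEAR * YEARS
ifl_cost = ifl_cores * PER_CORE_PER_YEAR * YEARS

print(f"x86 licensing over {YEARS} years: ${x86_cost:,}")
print(f"IFL licensing over {YEARS} years: ${ifl_cost:,}")
print(f"Difference:                       ${x86_cost - ifl_cost:,}")
```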

And then there is security. Since the Emperor II is a Z it delivers all the security of the newest z14, although in a slightly different form. Specifically, it provides:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

BTW, the Emperor II also anchors IBM's Blockchain cloud service. That calls for security to the max. In the end, the Emperor II is unlike any x86 Linux system:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (not included in the core count)
  • Leading I/O capacity and performance in the industry
  • IBM’s shared memory vertical scale architecture with a better architecture for stateful workloads like databases and systems of record
  • Hardware designed to give good response time even with 100% utilization, which simplifies the solution and reduces the extra costs x86 users assume are necessary because they’re used to keeping a utilization safety margin.

This goes far beyond TCO.  Just remember all the things the Emperor II brings: scalability, reliability, container-based security and flexibility, and more.

…and Go Pats!

DancingDinosaur is Alan Radding, a Boston-based veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

