New z15 for Multicloud, Cloud-Native, Instant Recovery

September 16, 2019

The z14 was introduced in the summer of 2017. So, it’s been a little over two years since IBM released a new top-end mainframe, which seems about the right time for a new z. Unlike the z14, this machine is not focused on speeds and feeds, although they appear quite robust. With the z15, the focus is on multicloud environments. No surprise there. IBM has been singing the gospel of multicloud and hybrid clouds for over a year.

z15: two frames, each a 19-inch rack

IBM does not describe the z15 in the usual terms for a top-of-the-line mainframe, the ever-larger speeds and feeds. Rather, IBM describes it as its new enterprise platform for managing the privacy of customer data across hybrid multicloud environments. With the z15, clients can manage who gets access to data via policy-based controls, with an industry-first capability to revoke access to data even across the hybrid cloud.

One of the capabilities IBM cites most often about the z15 isn’t even new: pervasive encryption, which was already available on the z14. With pervasive encryption, data is encrypted automatically without your having to do anything, and it imposes no cost in terms of system overhead. Thank you, embedded hardware. The data also is automatically protected both at rest and in flight. The savings in staff time alone should be considerable.

Related to pervasive encryption is a new z15 capability: Data Privacy Passports. This lets you control how data is stored and shared, protect and provision data, and revoke access to that data at any time, not only within the z15 environment but across an enterprise’s hybrid multicloud environment. Similarly, the z15 can encrypt data everywhere, across hybrid multicloud environments, to help enterprises secure their data wherever it travels and lands.
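To make the idea concrete, here is a minimal, purely conceptual sketch in Python (not IBM’s implementation) of policy-enforced access with revocation: every read goes through a central policy check, so revoking a grant cuts off access wherever the data happens to live.

```python
# Conceptual sketch only: a central policy service mediates every data access,
# so revoking a grant takes effect on the next read, wherever the data lives.
class PolicyService:
    def __init__(self):
        self._grants = {}                       # (user, dataset) -> allowed?

    def grant(self, user, dataset):
        self._grants[(user, dataset)] = True

    def revoke(self, user, dataset):
        self._grants[(user, dataset)] = False   # applies on the next access

    def allowed(self, user, dataset):
        return self._grants.get((user, dataset), False)


def read_data(policy, user, dataset, store):
    if not policy.allowed(user, dataset):
        raise PermissionError(f"{user} may not read {dataset}")
    return store[dataset]


policy = PolicyService()
store = {"claims": "sensitive records"}
policy.grant("analyst", "claims")
print(read_data(policy, "analyst", "claims"))   # access granted
policy.revoke("analyst", "claims")
# read_data(policy, "analyst", "claims")        # would now raise PermissionError
```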

Another new z15 capability is cloud-native development. It lets you modernize apps in place, build new cloud-native apps, and securely integrate your important workloads across clouds. This expedites what Z shops have long done: leverage existing Z software assets. Now you can more easily build, deploy, and manage next-gen apps and protect data through advanced security and pervasive encryption.

Yet another new capability is instant recovery. Here the goal is to limit the cost and impact of planned and unplanned downtime by letting you tap full system capacity for a limited time, accelerating the shutdown and restart of IBM Z services and providing a temporary capacity boost to recover quickly from lost time.


The new z15 also addresses the rising importance of data privacy to clients. IBM commissioned a study by Harris, the polling organization, and released it this week to coincide with the z15 introduction. The Harris study found that 64 percent of consumers have opted not to work with a business out of concern over whether it could keep their data secure. However, the same study found 76 percent of respondents would be more willing to share personal information if there was a way to fully take back and retrieve that data at any time. With the z15, pervasive encryption is designed to extend across the enterprise, enforcing data privacy by policy even when data leaves the platform. With this you can offer new services and features that give clients stronger control over how their personal data is used.

Along with the new z15, this week IBM also introduced a new mainframe storage array, the DS8900F. In keeping with the hybrid multicloud theme, the new array is specifically designed for mission-critical, hybrid multicloud environments. The new array, according to IBM, promises comprehensive next-level cyber security, data availability, and system resiliency. Working with the z15, the IBM DS8900F delivers more than 99.99999 percent (seven nines) availability, along with several disaster recovery options designed for near-zero recovery times to ensure protection of data.

Seven nines of availability surpasses any level of availability DancingDinosaur has previously written about, even for the z14. According to an online uptime calculator (https://uptime.is/complex?sla=99.99999&dur=24&dur=24&dur=24&dur=24&dur=24&dur=24&dur=24), it comes to:

  • Weekly: 0.1s
  • Monthly: 0.3s
  • Yearly: 3.2s
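Checking those numbers is simple arithmetic; a quick sketch in plain Python (figures rounded) looks like this:

```python
# Allowed downtime for a given availability percentage (simple arithmetic).
availability = 99.99999          # "seven nines", in percent
downtime_fraction = 1 - availability / 100

seconds_per_year = 365 * 24 * 3600
yearly = downtime_fraction * seconds_per_year
print(f"Yearly:  {yearly:.1f}s")         # about 3.2s
print(f"Monthly: {yearly / 12:.1f}s")    # about 0.3s
print(f"Weekly:  {yearly / 52:.1f}s")    # about 0.1s
```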

With this level of availability, you don’t even have time to dash off to the restroom during an instance of downtime. With the new seven nines (99.99999 percent) of availability, IBM Z clients, says the company, now have a new level of control over how and when they store their data to make the best economic and business sense, while always keeping it resilient and available.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

TradeLens Gives IBM Blockchain Traction

August 27, 2019

In August IBM and Maersk, the global shipping company, announced the launch of TradeLens, a secure open network for the exchange of shipping information among the multiple parties involved in a shipment. TradeLens is expected to eliminate shipping’s information blind spots by delivering end-to-end information about each shipment.

Since the announcement, TradeLens has attracted over 140 participants to the network. By the end of 2019 IBM expects the network to handle over 50% of global shipping volume. Competitors like Oracle, however, have recently jumped in with their own blockchain platforms as cloud services. Expect more to come.

TradeLens uses blockchain technology, specifically Hyperledger, to create an industry standard for the secure digitization and transmission of supply chain documents around the world. TradeLens will generate savings for participants and enhance global supply chain security.

TradeLens connects all participants through public-facing, documented apps that connect to the same API, and third parties can develop their own apps for the network. While blockchain (Hyperledger) is the primary technology for IBM and Maersk, it is not the only one; TradeLens can accommodate legacy communication and data formats too.
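IBM has not published API details in this post, so purely to illustrate the pattern, a third-party app calling a documented shipment-events REST API might look something like the sketch below. The endpoint, parameters, and field names are hypothetical, not the actual TradeLens API.

```python
# Hypothetical sketch of a third-party app polling a shipment-events API.
# The URL, parameters, and fields are illustrative, not the real TradeLens API.
import requests

API_BASE = "https://api.example-shipping-platform.com/v1"   # hypothetical endpoint
TOKEN = "..."                                               # obtained via the platform's auth flow

def get_shipment_events(consignment_id: str):
    resp = requests.get(
        f"{API_BASE}/consignments/{consignment_id}/events",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

for event in get_shipment_events("CONSIGNMENT-123"):
    print(event.get("eventType"), event.get("location"), event.get("timestamp"))
```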

Global trade is big, generating $16 trillion annually.  Just squeezing some efficiency out of what amounts to a cumbersome process bogged down with paper and handoffs should generate an attractive payback in terms of efficiency and speed. During the 12-month trial, Maersk and IBM worked with dozens of ecosystem partners to identify opportunities to prevent delays caused by documentation errors, information slowdowns, and other impediments.

One example demonstrated how TradeLens can reduce the transit time of a shipment of packaging materials to a production line in the United States by 40 percent, avoiding thousands of dollars in cost. Through better visibility and more efficient communication, some supply chain participants estimate they could reduce the steps taken to answer such basic operational questions as “where is my container” from 10 steps and five people to one step and one person with TradeLens.

Apps are the key to using TradeLens. All TradeLens apps connect to the same public-facing, documented API; underneath, the network runs on the Hyperledger blockchain. Blockchain is just one of the technologies in use, and TradeLens can accommodate legacy communication formats as well. Data models, supply chain reference models, consignments, and hierarchical models all get shared.


Looking ahead, IBM and Maersk envision a roadmap that brings new capabilities. These may include shipping instructions and bills of lading with specific language. As it turns out, you need an original bill of lading to take possession of shipped items; solving that alone could eliminate a major bottleneck.

Currently, the platform handles 10 million events and more than 100,000 documents every week, and it is growing rapidly, according to IBM. The system now handles about 20% of trade volume; with more shippers joining the network, IBM expects to reach 65%. And it can grow more: TradeLens is not limited to ocean transport and could support other transport modes.

Blockchain is ideal for the z, with its security, scalability, and performance. There is no reason that only the biggest shippers should run TradeLens.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Advances Commercial Quantum Computing

August 7, 2019

The reason IBM and others are so eager for quantum computing is simple: money. The world’s financial markets handle massive volumes of transactions, amounting to nearly $70 trillion last year, according to the World Bank, and quantum analytics promises to process the associated calculations quickly and accurately.

“These are enormous amounts of money,” says mathematician Cornelis Oosterlee of Centrum Wiskunde & Informatica, a national research institute in the Netherlands, in a piece in Wired. “Some single trades involve numbers that are scary to imagine”—part of a company’s pension fund, say, or a university endowment, he continues.

Of course, this isn’t exactly new. Large organizations with access to huge amounts of resources devote inordinate quantities of those resources to predicting how much their assets will be worth in the future. If they could do this modeling faster, more accurately, or more efficiently, maybe just shaving off a few seconds here and there, well, you can do the arithmetic.

Today these calculations are expensive to run, requiring either an in-house supercomputer or two or a big chunk of cloud computing processors and time. But if or when quantum computing delivers on some of its theoretical promise to drive these analyses faster, more accurately, more efficiently, and more cheaply, that’s something IBM could build into the next generation of systems.

And it is not just IBM. From Google on down to startups, developers are working on machines that could one day beat conventional computers at various tasks, such as classifying data through machine learning or inventing new drugs—and running complex financial calculations. In a step toward delivering on that promise, researchers affiliated with IBM and J.P. Morgan recently figured out how to run a simplified risk calculation on an actual quantum computer.

Using one of IBM’s machines, located in Yorktown Heights, New York, the researchers demonstrated they could simulate the future value of a financial product called an option. Currently, many banks use what’s called  the Monte Carlo method to simulate prices of all sorts of financial instruments. In essence, the Monte Carlo method models the future as a series of forks in the road. A company might go under; it might not. President Trump might start a trade war; he might not. Analysts estimate the likelihood of such scenarios, then generate millions of alternate futures at random. To predict the value of a financial asset, they produce a weighted average of these millions of possible outcomes.
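For readers who have not run one, here is a bare-bones Monte Carlo sketch of valuing a European call option under the usual geometric Brownian motion assumptions; the parameters are hypothetical and the model is deliberately simplified.

```python
# Minimal Monte Carlo sketch of pricing a European call option, the kind of
# calculation the quantum experiment approximates. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 105.0, 0.01, 0.2, 1.0   # spot, strike, rate, volatility, years
n_paths = 1_000_000

# Simulate terminal prices under geometric Brownian motion.
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Average the discounted payoffs over all simulated futures.
price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))
print(f"Estimated option value: {price:.2f}")
```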

Quantum computers are particularly well suited to this sort of probabilistic calculation, says Stefan Woerner, who led the IBM team. Classical (or conventional) computers—the kind most of us use—are designed to manipulate bits. Bits are binary, having a value of either 0 or 1. Quantum computers, on the other hand, manipulate qubits, which represent an in-between state. A qubit is like a coin flipping in the air—neither heads nor tails, neither 0 nor 1 but some probability of being one or the other. And because a qubit has unpredictability built in, it promises to  be a natural tool for simulating uncertain outcomes.

Woerner and his colleagues ran their Monte Carlo calculations using three of the 20 qubits available on their quantum machine. The experiment was too simplistic to be useful commercially, but it’s a promising proof of concept; once bigger and smoother-running quantum computers are available, the researchers hope to execute the algorithm faster than conventional machines.

But this theoretical advantage is just that, theoretical. Existing machines remain too error-ridden to compute consistently. In addition, financial institutions already have ample computing power available, onsite or in the cloud, and they will have even more as graphics processing units (GPUs), which can execute many calculations in parallel, come online. A quantum computer might well be faster than an individual chip, but it’s unclear whether it could beat a fleet of high-performance GPUs in a supercomputer.

Still, it’s noteworthy that the IBM team was able to implement the algorithm on actual hardware, says mathematician Ashley Montanaro of the University of Bristol in the UK, who was not involved with the work. Academics first developed the mathematical proofs behind this quantum computing algorithm in 2000, but it remained a theoretical exercise for years. Woerner’s group took a 19-year-old recipe and figured out how to run it on actual quantum hardware.

Now they’re looking to improve their algorithm by using more qubits. The most powerful quantum computers today have fewer than 200 qubits; practitioners suggest it may take thousands to consistently beat conventional methods.

But demonstrations like Woerner’s, even with their limited scope, are useful in that they apply quantum computers to problems organizations actually want to solve. And that is what it will take if IBM expects to build quantum computing into a viable commercial business.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com. 

IBM teams with Cloudera and Hortonworks 

July 11, 2019

Dancing Dinosaur has a friend on the West coast who finally left IBM after years of complaining, swearing never to return, and has been happily working at Cloudera ever since. IBM and Cloudera this week announced a strategic partnership to develop joint go-to-market programs designed to bring advanced data and AI solutions to more organizations across the expansive Apache Hadoop ecosystem.


Deploy a single solution for big data

The agreement builds on the long-standing relationship between IBM and Hortonworks, which merged with Cloudera this past January to create integrated solutions for data science and data management. The new agreement builds on the integrated solutions and extends them to include the Cloudera platform. “This should stop the big-data-is-dead thinking that has been cropping up,” he says, putting his best positive spin on the situation.

Unfortunately, my West Coast buddy may be back at IBM sooner than he thinks. With IBM finalizing its $34 billion Red Hat acquisition yesterday, it would be comparatively small additional money to just buy Cloudera, Hortonworks included, and own the whole big data-cloud capabilities block outright.

As IBM sees it, the companies have partnered to offer an industry-leading, enterprise-grade Hadoop distribution plus an ecosystem of integrated products and services – all designed to help organizations achieve faster analytic results at scale. As a part of this partnership, IBM promises to:

  • Resell and support Cloudera products
  • Sell and support Hortonworks products under a multi-year contract
  • Provide migration assistance to future Cloudera/Hortonworks unity products
  • Deliver the benefits of the combined IBM and Cloudera collaboration and investment in the open source community, along with commitment to better support analytics initiatives from the edge to AI.

IBM also will resell the Cloudera Enterprise Data Hub, Cloudera DataFlow, and Cloudera Data Science Workbench. In response, Cloudera will begin to resell IBM’s Watson Studio and BigSQL.

“By teaming more strategically with IBM we can accelerate data-driven decision making for our joint enterprise customers who want a hybrid and multi-cloud data management solution with common security and governance,” said Scott Andress, Cloudera’s Vice President of Global Channels and Alliances in the announcement. 

Cloudera enables organizations to transform complex data into clear and actionable insights. It delivers an enterprise data cloud for any data, anywhere, from the edge to AI. One obvious question: how long until IBM wants to include Cloudera as part of its own hybrid cloud? 

But IBM isn’t stopping here. It also just announced new storage solutions across AI and big data, modern data protection, hybrid multicloud, and more. These innovations will allow organizations to leverage more heterogeneous data sources and data types for deeper insights from AI and analytics, expand their ability to consolidate rapidly expanding data on IBM’s object storage, and extend modern data protection to support more workloads in hybrid cloud environments.

The key is IBM Spectrum Discover, metadata management software that provides data insight for petabyte-scale unstructured storage. The software connects to IBM Cloud Object Storage and IBM Spectrum Scale, enabling it to rapidly ingest, consolidate, and index metadata for billions of files and objects. It provides a rich metadata layer that enables storage administrators, data stewards, and data scientists to efficiently manage, classify, and gain insights from massive amounts of unstructured data. Combining that with Cloudera and Hortonworks on IBM’s hybrid cloud should give you a powerful data analytics solution.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com. 

 

IBM Pushes Quantum for Business

June 20, 2019

Other major system providers are pursuing quantum computing initiatives, but none are pursuing it as methodically or persistently as IBM. In a recent announcement, IBM’s Institute for Business Value introduced a five-step roadmap to bring quantum computing to your organization.

Inside an IBM Q computation center: dilution refrigerators with microwave electronics (middle) provide Q Network cloud access to a 20-qubit processor. (Credit: Connie Zhou)

Start by familiarizing yourself with superposition and entanglement, which enable quantum computers to solve problems intractable for today’s conventional computers:

Superposition. A conventional computer uses binary bits that can only depict either 1 or 0. Instead, quantum computers use qubits that can depict a 1 or 0, or any combination by superposition of the qubits’ possible states. This supplies quantum computers with an exponential set of states they can explore to solve certain types of problems better than conventional computers.

Entanglement. In the quantum world, two qubits located even light-years apart can still act in ways that are strongly correlated. Quantum computing takes advantage of this entanglement to encode problems that exploit this correlation between qubits.
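To make those two properties concrete, here is a minimal sketch in plain numpy (no quantum SDK required) of a single qubit in superposition and a two-qubit entangled Bell state:

```python
# Minimal sketch of superposition and entanglement as state vectors (numpy),
# not tied to any particular quantum SDK.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Superposition: H puts a single qubit into an equal mix of 0 and 1.
plus = H @ ket0
print(np.abs(plus) ** 2)          # [0.5, 0.5]

# Entanglement: H on qubit 0, then CNOT, yields the Bell state (|00> + |11>)/sqrt(2).
bell = CNOT @ np.kron(plus, ket0)
print(np.abs(bell) ** 2)          # [0.5, 0, 0, 0.5] -- outcomes perfectly correlated
```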

The quantum properties of superposition and entanglement enable quantum computers to rapidly explore an enormous set of possibilities to identify an optimal answer that could maximize business value. As future quantum computers can calculate certain answers exponentially faster than today’s conventional machines, they will enable tackling business problems that are exponentially more complex.

Despite conventional computers’ limitations, quantum computers are not expected to replace them in the foreseeable future. Instead, hybrid quantum-conventional architectures are expected to emerge that, in effect, outsource portions of difficult problems to a quantum computer.

Quantum computing already appears ripe to transform certain industries. For instance, current computational chemistry methods rely heavily on approximation because the exact equations cannot be solved by conventional computers. Quantum algorithms, in contrast, are expected to deliver accurate simulations of molecules over longer timescales, which are currently impossible to model precisely. This could enable life-saving drug discoveries and significantly shorten the number of years required to develop complex pharmaceuticals.

Additionally, quantum computing’s anticipated ability to solve today’s impossibly complex logistics problems could produce considerable cost savings and carbon footprint reduction. For example, consider improving the global routes of the trillion-dollar shipping industry (see Dancing Dinosaur’s recent piece on blockchain gaining traction). If quantum computing could improve container utilization and shipping volumes by even a small fraction, this could save shippers hundreds of millions of dollars. To profit from quantum computing’s advantages ahead of competitors, notes IBM, some businesses are developing expertise now to explore which use cases may benefit their own industries as soon as the technology matures.

To stimulate this type of thinking, IBM’s Institute for Business Value has come up with five steps to get you started:

  1. Identify your quantum champions. Assign this staff to learn more about the prospective benefits of quantum computing. Just designate some of your leading professionals as quantum champions and charge them with understanding quantum computing, its potential impact on your industry, your competitors’ response, and how your business might benefit. Have these champions report periodically to senior management to educate the organization and align progress to strategic objectives.
  2. Begin identifying quantum computing use cases and associated value propositions. Have your champions identify specific areas where quantum computing could propel your organization ahead of competitors. Have these champions monitor progress in quantum application development to track which use cases may be commercialized sooner. Finally, ensure your quantum exploration links to business results. Then select the most promising quantum computing applications, such as creating breakthrough products and services or new ways to optimize the supply chain.
  3. Experiment with real quantum systems. Demystify quantum computing by trying out a real quantum computer (IBM’s Q Experience). Have your champions get a sense for how quantum computing may solve your business problems and interface with your existing tools. A quantum solution may not be a fit for every business issue. Your champions will need to focus on solutions to address your highest priority use cases, ones that conventional computers can’t practically solve.
  4. Chart your quantum course. This entails constructing a quantum computing roadmap with viable next steps for the purpose of pursuing problems that could create formidable competitive barriers or enable sustainable business advantage. To accelerate your organization’s quantum readiness, consider joining an emerging quantum community. This can help you gain better access to technical infrastructure, evolving industry applications, and expertise that can enhance your development of specific quantum applications.
  5. Lastly, be flexible about your quantum future. Quantum computing is rapidly evolving. Seek out technologies and development toolkits that are becoming the industry standard, those around which ecosystems are coalescing. Realize that new breakthroughs may cause you to adjust your approach to your quantum development process, including changing your ecosystem partners. Similarly, your own quantum computing needs may evolve over time, particularly as you improve your understanding of which business issues can benefit most from quantum solutions.

Finally, actually have people in your organization try a quantum computer, such as through IBM’s Q program and Qiskit, a free development tool. Q provides a free 16-qubit quantum computer you don’t have to configure or keep cool and stable. That’s IBM’s headache.
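As a concrete starting point for your champions, a minimal Qiskit experiment looks something like the sketch below; it builds a two-qubit entangled circuit and runs it on a local simulator. Treat the import paths and backend setup as approximate, since they vary across Qiskit versions, and running on a real IBM Q device additionally requires an IBM account and provider configuration.

```python
# Minimal Qiskit sketch (pip install qiskit qiskit-aer); paths vary by version.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                    # put qubit 0 into superposition
qc.cx(0, 1)                # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)              # expect roughly half '00' and half '11'
```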

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Blockchain Gains Traction for Shipping with TradeLens

June 6, 2019

Most everything that touches our lives takes a long trip across the ocean. With 90 percent of the goods we use every day — from toothbrushes to furniture — transported as ocean freight, you appreciate the scale and importance of the global shipping industry.

IBM’s TradeLens Gains New Shipping Partners

Blockchain is helping to modernize the shipping industry, which for years has dealt with isolated systems that require reams of paperwork to get freight from its point of origin to its final destination. Maersk, a leader in global shipping, found that a simple shipment can go through nearly 30 people and organizations.

The Z should be ideal for blockchain. It has the scalability, security, and rock solid reliability that benefits blockchain.

In an effort to transform this complex and far-reaching industry, several years ago IBM and Maersk embarked on a pilot to digitize global trade and share the resulting information. Last August they announced TradeLens, a blockchain-enabled platform that promotes information sharing, collaboration, and trust among trading partners. In less than a year, TradeLens entered production and is now operating with more than 100 participants who are tracking and sharing over 500 million shipping events and documents. The platform uses open standards, open governance and open APIs to ensure the entire industry can benefit and drive continued innovation, while at the same time providing much-needed security and data privacy.

TradeLens has now reached a major tipping point: two of the top ocean cargo carriers, CMA CGM and MSC, have joined the platform. With these carriers joining forces to rapidly expand the geographic reach and scope of TradeLens, a shipper like Procter & Gamble can get a single real-time view of all of its containerized cargo, whether that cargo is carried on a Maersk, CMA CGM, or MSC ship or on any of the other 10 ocean carriers now on the network, and can be assured that its data is not visible to competitors.

TradeLens brings together not just ocean carriers but port and terminal operators, inland transportation providers, customs authorities, cargo owners and freight forwarders. As TradeLens grows, the benefit to all grows through greater visibility, consistent information, better collaboration and shared insights–all achieved fast and with less labor. Modernizing the shipping supply chain using this kind of technology has the potential to add billions of dollars of value creation to the global shipping industry; maybe even enough to help offset some of the President’s latest tariffs.

CMA CGM and MSC plan to bring their terminals on board and offer the TradeLens platform through open source Hyperledger blockchain technology,  along with a toolkit for developers to make it easy to build and innovate on the open platform. TradeLens is also working with openshipping.org to align its APIs with the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) standards.

With the new participants TradeLens is better positioned to get insight into other key requirements for continued innovation and interoperability. For example, TradeLens plans to support the Digital Container Shipping Association (DCSA) efforts on interoperability and standards in container shipping.

At the same time, TradeLens is designed for security and data privacy, which are essential to the success of a business network that includes not just partners but competitors. TradeLens uses IBM Blockchain technology to give trading partners a shared view of transactions without compromising details, privacy, or confidentiality, ensuring  an immutable record of transactions. Perfect for Z.

Everyone in the shipping business can benefit. On TradeLens, ocean carriers can operate independent blockchain nodes for data sharing related to their contracted consignments. Access controls let data owners prescriptively determine different levels of permission for data access. Data owners also control their data even after it becomes part of the blockchain flow.

Equally important, the momentum of TradeLens demonstrates the growing adoption of blockchain, which already is used in banking, food safety, supply chain, and other industries where trading systems must be digitized, shared, and secured. Today the TradeLens platform tracks more than 10 million events per week.

Blockchain is a team sport, and IBM and its latest partners hope that those who have been watching and waiting will now make a commitment to join TradeLens. It’s the best way to transform global shipping for the industry and for themselves.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Welcome to Tailored Fit Pricing

May 16, 2019

For as long as DancingDinosaur has been writing about the Z, users have been captive to rolling 4-hour average (R4HA) pricing. IBM’s announcement earlier this week changes this, if you want it to. And most of you probably will want to once you recognize all the constraints you have been chafing against for decades.

As IBM puts it, Tailored Fit Pricing (TFP) is a new cloud-like pricing model for the IBM Z models. The objective is to further position the z as the center of a secured enterprise hybrid cloud strategy. The promise is to empower z customers to expand the use of their z in pursuit of different forms of cloud computing as well as to give them the pricing flexibility to build and manage their new cloud-based environments.

This is not going to be a sure-bet no-brainer. To begin, you will need to figure out how you will want to incorporate the z into a hybrid cloud strategy now and, more importantly in the future as those strategies further evolve.

For now TFP comes in two basic flavors:

  1. The Enterprise Consumption Solution, a tailored consumption-based licensing model.
  2. The Enterprise Capacity Solution, a tailored full-capacity licensing model.

Both flavors promise to simplify the existing pricing landscape, delivering flexible deployment options tailored to reflect customers’ individual, rapidly evolving environments. Each model includes additional capacity for development and test environments as well as reduced pricing for all types of workload growth.

Beware, however. TFP was not designed to reduce Z pricing overall. Eliminating the R4HA may not reduce your software costs. In fact, your new hybrid cloud strategy may increase your processing volume in such a way that your overall software costs go up. IBM has no interest in reducing your overall software costs unless you are increasing your overall MSU consumption in the course of doing more work and more new types of work on your z.

Not to be crass, but IBM’s goal is to sell more z MIPS, and it will give up the R4HA, which is doomed anyway, to do that. Your challenge is to figure out how best to use what IBM is ceding to come up with the most advantageous hybrid system strategy at the lowest cost you can manage within the new rules. Those rules amount to consumption-based pricing, with economies of scale for workloads on z/OS. With consumption-based pricing the bill adjusts with usage, removing the need for complex and restrictive capping, and it includes aggressive pricing for growth; remember, IBM is rigging this to encourage z MIPS growth.
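To see why consumption pricing differs from the R4HA, here is a bare-bones sketch, using hypothetical hourly MSU readings, of how the rolling four-hour average and its peak (the figure a traditional sub-capacity bill keys on) compare with simple total consumption:

```python
# Minimal sketch of the R4HA concept with hypothetical hourly MSU readings.
# Traditional sub-capacity pricing keys on the month's peak rolling 4-hour
# average; a consumption model looks at total MSUs used instead.

hourly_msu = [120, 130, 500, 520, 510, 490, 140, 135, 125, 118]  # hypothetical

def rolling_4h_average(readings):
    """Rolling average over the current hour and the three before it."""
    return [
        sum(readings[max(0, i - 3): i + 1]) / len(readings[max(0, i - 3): i + 1])
        for i in range(len(readings))
    ]

r4ha = rolling_4h_average(hourly_msu)
print("Peak R4HA (drives a traditional MLC bill):", max(r4ha))
print("Total MSU consumption (drives a consumption-based bill):", sum(hourly_msu))
```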

As such, the capacity solution allows you to mix and match workloads to maximize the full capacity of the z. At the end of the day, TFP is designed to both unlock the full power of the z platform and deliver optimal response times and service-level agreements.

And IBM is not stopping here. It also has some new goodies to drive z usage: Specifically, it is adding z/OS container extensions. This is intended to modernize and extend z/OS applications by adding the ability to run Linux on IBM z Docker containers in direct support of z/OS workloads on the same z/OS system. The z/OS Container Extensions will enable access to the most recent development tools and processes available in the Linux on the Z ecosystem, which enables developers to build new, cloud-native containerized apps and deploy them on z/OS without requiring Linux or a Linux partition.

Finally, IBM is introducing the z/OS Cloud Broker, which gives you the ability to access and deploy z/OS resources and services on IBM Cloud Private, letting you customize your unique data landscape with an open, endlessly extensible architecture on any cloud. This should help you achieve a more seamless and universal cloud development experience. The z/OS Cloud Broker is designed to encourage cloud application developers to provision and deprovision z/OS environments to support the app development cycle. As a tool for simplified management and access to critical enterprise services, IBM z/OS Cloud Broker provides a single control plane across z/OS, Linux on Z, Power, and public clouds. In turn, this can help optimize management efficiencies and speed innovation.

In its pricing announcement IBM also slipped in references to the IBM Cloud Hyper Protect family of cloud-native services. Hyper Protect offers a range of on- and off-premises deployment choices for extending IBM Z services and data—while balancing performance, availability or security.

In the end, notes IBM, the company sees a secure hybrid and multi-cloud as the future enterprise IT with IBM Z at the center. DancingDinosaur sees that too, but not in exactly the same terms.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Syncsort Drives IBMi Security with AI

May 2, 2019

The technology security landscape looks increasingly dangerous. The problem revolves around the possible impact of AI, which is not yet fully clear. The hope, of course, is that AI will make security more efficient and effective. However, the security bad actors can also jump on AI to advance their own schemes. Like a cyber version of the nuclear arms race, this has been an ongoing battle for decades. The industry has to cooperate, and specifically share information, and hope the good guys can stay a step ahead.

In the meantime, vendors like IBM and, most recently, Syncsort have been stepping up to the latest challenges. Syncsort, for example, earlier this month launched Assure Security to address the increasing sophistication of cyber attacks and expanding data privacy regulations. In surprising ways, it turns out, data privacy and AI are closely related in the AI security battle.

Syncsort, a leader in Big Iron-to-Big Data software, announced Assure Security, which combines access control, data privacy, compliance monitoring, and risk assessment into a single product. Together, these capabilities help security officers, IBMi administrators, and Db2 administrators address critical security challenges and comply with new regulations meant to safeguard and protect the privacy of data.

And it clearly is coming at the right time. According to Privacy Rights Clearinghouse, a non-profit corporation with a mission to advocate for data privacy, there were 828 reported security incidents in 2018, resulting in the exposure of over 1.37 billion records of sensitive data. As regulations to help protect consumer and business data become stricter and more numerous, organizations must build more robust data governance and security programs to keep the data from being exploited by bad security actors for nefarious purposes. The industry already has scrambled to comply with GDPR and the New York Department of Financial Services cybersecurity regulations, and it now must prepare for the GDPR-like California Consumer Privacy Act, which takes effect January 1, 2020.

In its own survey Syncsort found security is the number one priority among IT pros with IBMi systems. “Given the increasing sophistication of cyber attacks, it’s not surprising 41 percent of respondents reported their company experienced a security breach and 20 percent more were unsure if they even had been breached,” said David Hodgson, CPO, Syncsort. The company’s new Assure Security product leverages the wealth of IBMi security technology and the expertise to help organizations address their highest-priority challenges. This includes protecting against vulnerabilities introduced by new, open-source methods of connecting to IBMi systems, adopting new cloud services, and complying with expanded government regulations.

Of course, IBM hasn’t been sleeping through this. The company continues to push various permutations of Watson to tackle the AI security challenge. For example, IBM leverages AI to gather insights and use reasoning to identify relationships between threats, such as malicious files, suspicious IP addresses,  or even insiders. This analysis takes seconds or minutes, allowing security analysts to respond to threats up to 60 times faster.

It also relies on AI to eliminate time-consuming research tasks and provides curated analysis of risks, which reduces the amount of time security analysts require to make the critical decisions and launch an orchestrated response to counter each threat. The result, which IBM refers to as cognitive security, combines the strengths of artificial intelligence and human intelligence.

Cognitive AI, in effect, learns with each interaction to proactively detect and analyze threats and provides actionable insights that help security analysts make informed decisions. Such cognitive security, let’s hope, combines the strengths of artificial intelligence with human judgment.

Syncsort’s Assure Security specifically brings together best-in-class IBMi security capabilities acquired by Syncsort into an all-in-one solution, with the flexibility for customers to license individual modules. The resulting product includes:

  • Assure Compliance Monitoring quickly identifies security and compliance issues with real-time alerts and reports on IBMi system activity and database changes.
  • Assure Access Control provides control of access to IBMi systems and their data through a varied bundle of capabilities.
  • Assure Data Privacy protects IBMi data at-rest and in-motion from unauthorized access and theft through a combination of NIST-certified encryption, tokenization, masking, and secure file transfer capabilities (for the difference between masking and tokenization, see the brief sketch after this list).
  • Assure Security Risk Assessment examines over a dozen categories of security values, open ports, power users, and more to address vulnerabilities.
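As a rough illustration of two of those techniques, and emphatically not Syncsort’s implementation, masking irreversibly hides most of a value while tokenization swaps it for a random stand-in that only a secured vault can map back:

```python
# Toy illustration of masking vs. tokenization (not Syncsort's implementation).
import secrets

_token_vault = {}   # in a real product this mapping lives in a secured store

def mask_account(acct: str) -> str:
    """Irreversibly hide all but the last four characters."""
    return "*" * (len(acct) - 4) + acct[-4:]

def tokenize_account(acct: str) -> str:
    """Replace the value with a random token the vault can map back to the original."""
    token = secrets.token_hex(8)
    _token_vault[token] = acct
    return token

acct = "4111111111111111"
print(mask_account(acct))       # ************1111
print(tokenize_account(acct))   # e.g. '9f2c4a...' -- reversible only via the vault
```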

It probably won’t surprise anyone, but the AI security situation is not going to be cleared up soon. Expect to see a steady stream of headlines around security hits and misses over the next few years. Just hope it will get easier to separate the good guys from the bad actors and that the lessons will be clear.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Revamps V5000

April 5, 2019

On April 2nd IBM announced several key enhancements across the Storwize V5000 portfolio along with new models: the V5010E, V5030E, and V5100. (The E stands for Express.) To further complicate the story, the new E models utilize Broadwell, Intel’s 14-nanometer die shrink of its Haswell microarchitecture. Broadwell did not completely replace the full range of CPUs from Intel’s previous Haswell generation, but IBM is using it widely in the new V5000 models.

IBM NVMe Flash Core Module

And the results can be impressive. From a scale-out perspective the V5010E supports a single controller configuration, while the V5030E and V5100 both support up to two controller clusters. This provides for a maximum of 392 drives in the V5010E and a massive 1520 drives in either the V5030E or V5100 dual controller clusters. The V5030E includes the Broadwell DE 1.9GHz six-core processor in its two canisters, and each canister supports a maximum of 32GB of RAM. Better still, the V5100 boasts a single Skylake 1.7GHz eight-core processor in each canister. RAM is increased to a total of 576GB for the entire controller, or 288GB maximum per canister.

For the next-generation Storwize V5000 platforms, IBM is encouraging the name Gen3. Gen3 encompasses 8 new MTMs (Machine Type Models) based on 3 hardware models: the V5010E, V5030E, and V5100. The V5100 comes in two versions, a hybrid (HDD and flash) model and the all-flash V5100F. Each of these 4 types is available with a 1-year or 3-year warranty.

The V5000E models are based on the Gen2 hardware, with various enhancements, including more memory options on the V5010E. The V5100 models are all new hardware and bring the same NVMe Flash Core Modules (FCM) that are available on the V7000 and FlashSystem 9100 products, completing the transition of the Storwize family to all-NVMe arrays. If you haven’t seen or heard about IBM’s FCM technology, introduced last year to optimize NVMe: FCMs are a family of high-performance flash drives that utilize the NVMe protocol, a PCIe Gen3 interface, and high-speed NAND memory to provide high throughput and IOPS with very low latency. FCM is available in 4.8 TB, 9.6 TB, and 19.2 TB capacities. Hardware-based data compression and self-encryption are built in.

The all-flash (F) variants of the V5000 can also attach SAS expansions to extend capacity using SAS-based flash drives, allowing expansion up to 1520 drives. These drives, however, are not interchangeable with the new FCM drives. The E variants allow attachment of SAS 2.5-inch and 3.5-inch HDDs, with the V5010E expandable to 392 drives and the others up to 1520.

Inbuilt host attachments come in the form of 10GbE ports for iSCSI workloads, with optional 16Gbit Fibre Channel (SCSI or FC-NVMe) as well as additional 10GbE or 25GbE iSCSI. The V5100 models can also use the iSER protocol (an iSCSI translation layer for operation over RDMA transports, such as InfiniBand) over the 25GbE ports for clustering capability, with plans to support NVMe over Fabrics on Ethernet. In terms of cache memory, the V5000E products are expandable up to 64GB per controller (I/O group) and the V5100 can support up to 576GB per controller. IBM also issued a statement of direction covering all 25GbE port types across the entire Spectrum Virtualize family of products.

As Lloyd Dean, IBM Senior Certified Executive IT Architect, noted, the new lineup for the V5000 is impressive; the quantity of drives and the storage available per model will “blow your mind.” How mind-blowing will depend, of course, on your configuration and IBM’s pricing. As usual, IBM talks about affordability, comparative cost, and storage efficiency but never states a price. It did once, 3 years ago: the list price then for the V5010 was $9,250 including hardware, software, and a one-year warranty, according to a published report. Today IBM will likely steer you to cloud pricing, which may or may not be a bargain depending on how the deal is structured and priced. With the cloud, everything is in the details.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

IBM Rides Quantum Volume to Quantum Advantage

March 19, 2019

Recently IBM announced achieving its highest quantum volume to date. Of course, nobody else knows what Quantum Volume is. Quantum volume is both a measurement and a procedure developed, no surprise here, by IBM to determine how powerful a quantum computer is. Read the May 4 announcement here.

Quantum volume is not just about the number of qubits, although that is one part of it. It also includes both gate and measurement errors, device cross talk, as well as device connectivity and circuit compiler efficiency. According to IBM, the company has doubled the power of its quantum computers annually since 2017.

The upgraded processor will be available for use by developers, researchers, and programmers to explore quantum computing using a real quantum processor at no cost via the IBM Cloud. This offer has been out in various forms since May 2016 as IBM’s Q Experience.

Also announced was a new prototype of a commercial processor, which will be the core for the first IBM Q early-access commercial systems.  Dates have only been hinted at.

IBM recently unveiled the IBM Q System One quantum computer, with a fourth-generation 20-qubit processor, which has produced a Quantum Volume of 16, roughly double that of the current IBM Q 20-qubit devices, which have a Quantum Volume of 8.

The Q volume math goes something like this: a variety of factors determine Quantum Volume, including the number of qubits, connectivity, and coherence time, plus accounting for gate and measurement errors, device cross talk, and circuit software compiler efficiency.
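For those who want the math, IBM’s published formulation sets log2 of the Quantum Volume to the largest effectively “square” circuit, equal width and depth, that the machine can run reliably. A minimal sketch of that calculation, using hypothetical depth measurements, looks like this:

```python
# Minimal sketch of the Quantum Volume calculation. achievable_depth maps a
# circuit width n to the largest depth d(n) the device runs reliably (i.e.,
# passes IBM's heavy-output test); the numbers below are hypothetical.
achievable_depth = {2: 8, 3: 6, 4: 4, 5: 3}

def quantum_volume(achievable_depth):
    # log2(QV) is the largest effective square circuit: max over n of min(n, d(n)).
    log2_qv = max(min(n, d) for n, d in achievable_depth.items())
    return 2 ** log2_qv

print(quantum_volume(achievable_depth))   # -> 16 for these hypothetical numbers
```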

In addition to producing the highest Quantum Volume to date, IBM Q System One’s performance reflects some of the lowest error rates IBM has ever measured, with an average 2-qubit gate error less than 2 percent, and its best gate achieving less than a 1 percent error rate. To build a fully-functional, large-scale, universal, fault-tolerant quantum computer, long coherence times and low error rates are required. Otherwise how could you ever be sure of the results?

Quantum Volume is a fundamental performance metric that measures progress in the pursuit of Quantum Advantage, the Quantum Holy Grail: the point at which quantum applications deliver a significant, practical benefit beyond what classical computers alone are capable of. To achieve Quantum Advantage in the next decade, IBM believes the industry will need to continue to double Quantum Volume every year.

Sounds like Moore’s Law all over again. IBM doesn’t deny the comparison. It writes: in 1965, Gordon Moore postulated that the number of components per integrated function would grow exponentially for classical computers. Jump to the new quantum era and IBM notes its Q system progress since 2017 presents a similar early growth pattern, supporting the premise that Quantum Volume will need to double every year and presenting a clear roadmap toward achieving Quantum Advantage.


Potential use cases, such as precisely simulating battery-cell chemistry for electric vehicles, speeding quadratic derivative models, and many others, are already being investigated by IBM Q Network partners.

In time, AI should play a role in expediting quantum computing. For that, researchers will need to develop more effective AI that can identify patterns in data otherwise invisible to classical computers.

Until then how should most data centers proceed? IBM researchers suggest 3 initial steps:

  1. Develop quantum algorithms that demonstrate how quantum computers can improve AI classification accuracy.
  2. Improve feature mapping to a scale beyond the reach of the most powerful classical computers.
  3. Classify data through the use of short-depth circuits, allowing AI applications in the NISQ (noisy intermediate scale quantum) regime and a path forward to achieve quantum advantage for machine learning.

Sounds simple, right? Let DancingDinosaur know how you are progressing.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

