Posts Tagged ‘technology’

Illusive Networks’ Mainframe Guard to Deter Cyber Attacks

October 18, 2017

At a time when IBM promised that automatic pervasive encryption on the new Z would spell an end to worries about security, an Israeli company stepped forward this week to insist that the z14, or just Z, can’t do the entire job. Pervasive encryption can be undermined by Advanced Persistent Threats (APTs), which co-opt legitimate users as they access protected data. Illusive Networks introduced its security tool, Mainframe Guard, earlier this week at Sibos in Toronto.

Mainframe Guard enables admins to take action against advanced, targeted cyberattacks by detecting and disrupting movement toward critical business assets early in the attack cycle. Illusive deploys sophisticated and confusing honeypots to distract, misguide, and trap an attacker before he or she can touch the data, giving the security staff time to identify the attack and intervene. With the new Z and pervasive security, of course, that data will already be encrypted and the keys safely stored out of reach.

IBM Breach Cost Estimator

At a time when organizations of all types and in every market segment are under attack from hackers, ransomware, data breaches, and more, all data center managers should welcome any data protection tools that work. Yet 96% don’t even bother to encrypt—too costly, too cumbersome, too complicated. As DancingDinosaur noted at the Z launch, the list of excuses is endless. Of the 9 billion records breached since 2013, only 4% were encrypted! And you already know why: encryption is tedious, impacts staff, slows system performance, costs money, and more.

Such attitudes, especially at a mainframe shop, invite serious breaches. While IBM’s latest mainframe automatically encrypts all transaction data, the vast majority of systems expose significant vulnerabilities.

Making the situation even worse, the need to secure against innovations such as mobile applications, cloud-based services, and smart devices presents new challenges. “Organizations are sometimes reluctant to upgrade legacy applications and databases on these enterprise servers, particularly in today’s always-on economy. But unless you address every link in the end-to-end process, you haven’t secured it,” noted Andrew Howard, CTO at Kudelski Security, which cites experience remediating mainframe systems in the wake of cyber breaches.

Even older mainframe shops—those predating pervasive encryption—can have effective security. Consider adding Mainframe Guard, which requires you to actively follow the threats and initiate defensive actions.

So how might an attacker today get around the Z’s pervasive encryption? The attack typically starts with lurking and watching as legitimate users gain access to the system. The attacker then impersonates a legit user. Illusive, however, lures the attacker to locations where he or she may think a trove of intelligence gold awaits. “Remember, the attacker doesn’t know which machine he has landed on,” said Ofer Israeli, CEO of Illusive Networks. Unless the attacker brings inside information, he is blind inside the network. From there Illusive constantly baits the attacker with deceptive information, which the attacker must dodge correctly to avoid giving away the attack.

Leveraging Illusive’s deceptive approach, Mainframe Guard works by detecting malicious movement toward the mainframe and providing a non-intrusive method of protecting the systems, the data they host, and the services they support. The solution comprises:

  • A family of deceptions for mainframe environments
  • The ability to display mainframe assets along with other sensitive assets in the Illusive Attacker View portion of the management console, which enables security personnel to see potential attack paths toward the mainframe and track the proximity and progress of attackers toward these assets
  • Purpose-built views of the mainframe environment that monitor unexpected connections to mainframe servers
  • An interactive layer added to the Illusive Trap Server that mimics mainframe behavior and login screens, tricking attackers into believing they are interacting with an actual mainframe system
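The trap-server idea in that last item can be sketched in miniature: a decoy service that presents a plausible login banner and records every connection attempt for the security team. This is a toy illustration of the honeypot concept, not Illusive's implementation; the banner text and port handling are invented for the demo.

```python
import socket
import threading

# Toy "trap server": presents a fake mainframe-style login banner and
# records every connection attempt. Purely illustrative -- real deception
# platforms mimic far richer mainframe behavior than a one-line banner.

attempts = []  # (client_address, data_received) pairs seen by the trap

BANNER = b"*** z/OS V2R2 TSO/E LOGON ***\r\nENTER USERID -\r\n"

def serve_once(server_sock):
    conn, addr = server_sock.accept()
    with conn:
        conn.sendall(BANNER)              # bait: looks like a real login screen
        data = conn.recv(1024)            # whatever "userid" the intruder types
        attempts.append((addr[0], data))  # alert material for the security team

def main():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))            # ephemeral local port for the demo
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve_once, args=(srv,))
    t.start()

    # Simulate an attacker probing the decoy.
    probe = socket.create_connection(("127.0.0.1", port))
    banner = probe.recv(1024)
    probe.sendall(b"IBMUSER\r\n")
    probe.close()
    t.join()
    srv.close()
    return banner

if __name__ == "__main__":
    print(main().decode().splitlines()[0])
```

Anything the "attacker" does against the decoy is, by definition, suspicious, which is what makes this detection approach low-noise compared with sifting legitimate traffic.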

When everything is encrypted and the keys, APIs, and more are safeguarded with the Z’s pervasive encryption on top of Illusive’s deceptions, maybe you can finally begin to relax, at least until the next level of attacks start to emerge.

BTW, DancingDinosaur will be away for 2 weeks. Given IBM’s just-released Q3 results, you can hear IBM’s relief even before I’m gone. Expect some celebrating around the Z; nothing like a new machine to boost revenues. Look for DancingDinosaur the week of Nov. 6.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Brings the Mainframe to AWS

October 6, 2017

IBM talks about the power of the cloud for the mainframe and has turned Bluemix into a cloud development and deployment platform for open systems. Where’s the Z?

Compuware, which for the past several years has delivered quarterly advances in its mainframe tooling, has now made those tools available through AWS. Not only have those advances made mainframe management and operations more intuitive and graphical through a string of Topaz releases, but with AWS the tooling is now accessible from anywhere. DancingDinosaur has been reporting on Compuware’s string of Topaz advances for two years, here, here, and here.

By tapping the power of both the cloud and the mainframe, enterprises can deploy Topaz to their global development workforce in minutes, accelerating the modernization of their mainframe environments. As Compuware noted: mainframe shops now have the choice of deploying Topaz on-premise or on AWS. By leveraging the cloud, they can deploy Topaz more quickly and securely, and scale without capital costs, while benefiting from new Topaz features as soon as the company delivers them.

To make Topaz work on AWS, Compuware turned to Amazon AppStream 2.0 technology, which provides global development, test, and ops teams with immediate and secure cloud access to Compuware’s entire innovative mainframe Agile/DevOps solution stack, mainly Topaz. Amazon AppStream 2.0 is a fully managed, secure application streaming service that allows users to stream desktop applications from AWS to any device running a web browser.

Cloud-based deployment of Topaz, Compuware notes, allows for significantly faster implementation, simple administration, a virtual integrated development environment (IDE), adaptive capacity, and immediate developer access to software updates. The last of these is important, since Compuware has been maintaining a quarterly upgrade release schedule, in effect delivering new capabilities every 90 days.

Compuware is in the process of patenting technology to offer an intuitive, streamlined configuration menu that leverages AWS best practices to make it easy for mainframe admins to quickly configure secure connectivity between Topaz on AWS and their mainframe environment. It also enables the same connectivity to their existing cross-platform enterprise DevOps toolchains running on-premise, in the cloud, or both. The upshot: organizations can deploy Topaz across their global development workforce in minutes, accelerating the modernization of their mainframe environments.

Using Topaz on AWS, notes Compuware, mainframe shops can benefit in a variety of ways, specifically:

  • Modify, test and debug COBOL, PL/I, Assembler and other mainframe code via an Eclipse-based virtual IDE
  • Visualize complex and/or undocumented application logic and data relationships
  • Manage source code and promote artifacts through the DevOps lifecycle
  • Perform common tasks such as job submission, review, print and purge
  • Leverage a single data editor to discover, visualize, edit, compare, and protect mainframe files and data

The move to the Eclipse-based IDE presents a giant step for traditional mainframe shops trying to modernize. Eclipse is a leading open source IDE with IBM as a founding member. In addition to Eclipse, Compuware also integrates with other modern tools, including Jenkins, SonarSource, and Atlassian. Jenkins is an open source automation server written in Java that helps automate the non-human parts of the software development process through continuous integration while facilitating technical aspects of continuous delivery. SonarSource enables visibility into mainframe application quality. Atlassian develops products for software developers, project managers, and content management and is best known for Jira, its issue tracking application.

Unlike many mainframe ISVs, Compuware has been actively partnering with various innovative vendors to extend the mainframe’s tool footprint and bring the kind of tools to the mainframe that young developers, especially Millennials, want. Yes, it is possible to access the sexy REST-based Web and mobile tools through IBM’s Bluemix, but for mainframe shops it appears kludgy. By giving its mainframe customers access through AWS to advanced tools, Compuware improves on this. And AWS beats Bluemix in terms of cloud penetration and low cost.

All mainframe ISVs should make their mainframe products accessible through the cloud if they want to keep their mainframe products relevant. IBM has its cloud; of course there is AWS, Microsoft has Azure, and Google rounds out the top four. These and others will keep cloud economics competitive for the foreseeable future. Hope to see you in the cloud.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New Oracle SPARC M8 Mimics IBM Z

September 28, 2017

Not even two weeks ago, Oracle announced its eighth-generation SPARC platform, the SPARC M8, as an engineered system and as a cloud service. The new system promises the world’s most advanced processor, breakthrough performance, and security enhancements with Software in Silicon v2 for Oracle Cloud, Oracle Engineered Systems, and Servers. Furthermore, the new SPARC M8 line of servers and engineered systems extends the existing M7 portfolio and includes: SPARC T8-1 server, SPARC T8-2 server, SPARC T8-4 server, SPARC M8-8 server, and Oracle SuperCluster M8.

Oracle SPARC M7

Pictured above is Oracle SPARC M7, the previous generation SPARC. The new SPARC M8 systems deliver up to 7x better performance, security capabilities, and efficiency than Intel-based systems.  Seems like the remaining active enterprise system vendors, mainly IBM and Oracle, want to present their systems as beating Intel. Both companies, DancingDinosaur suspects, will discover that beating Intel by a few gigahertz or microseconds or nanoseconds won’t generate the desired stream of new customers ready to ditch the slower Intel systems they have used for, by now, decades.  Oracle and IBM will have to deliver something substantially more tangible and distinctive.

For the z14, it should be pervasive encryption, which reduces or eliminates data compliance audit burdens and the corresponding fear of costly data breaches. Don’t we all wish Equifax had encrypted its data (unless yours somehow is not among the 140 million or so compromised records). DancingDinosaur covered the Z launch in July. Not surprisingly, Oracle never mentioned the z14 or IBM in its M8 announcement or data sheet.

What Oracle did say was this: the Oracle SuperCluster M8 engineered systems and SPARC T8 and M8 servers are designed to seamlessly integrate with existing infrastructures and include fully integrated virtualization and management for private cloud. All existing commercial and custom applications will run on SPARC M8 systems unchanged with new levels of performance, security capabilities, and availability. The SPARC M8 processor with Software in Silicon v2 extends the industry’s first Silicon Secured Memory, which provides always-on hardware-based memory protection for advanced intrusion protection and end-to-end encryption, as well as Data Analytics Accelerators (DAX) with open APIs for breakthrough performance and efficiency running database analytics and Java streams processing. Oracle Cloud SPARC Dedicated Compute service will also be updated with the SPARC M8 processor.

It almost sounds like a weak parody of IBM’s July z14 announcement here. The following is part of what IBM wrote: Pervasively encrypts data, all the time at any scale. Addresses the global data breach epidemic; helps automate compliance for the EU General Data Protection Regulation, the Federal Reserve, and other emerging regulations. Encrypts data 18x faster than comparable x86 platforms, at 5 percent of the cost.

Not sure what DancingDinosaur was expecting Oracle to say. Maybe some recognition that there is another enterprise server out there making similar promises and claims. Certainly it could have benchmarked its own database against the z13 if not the z14. DancingDinosaur may be a mainframe bigot but is no true blue fan of IBM.

What Oracle did say seemed somewhat thin and x86-obsessed:

  • Database: Engineered to run Oracle Database faster than any other microprocessor, SPARC M8 delivers 2x faster OLTP performance per core than x86 and 1.4x faster than M7 microprocessors, as well as up to 7x faster database analytics than x86.
  • Java: SPARC M8 delivers 2x better Java performance than x86 and 1.3x better than M7 microprocessors. DAX v2 produces 8x more efficient Java streams processing, improving overall application performance.
  • In-Memory Analytics: The innovative new processor delivers 7x more queries per minute (QPM) per core than x86 for database analytics.

But one thing Oracle did say appears truly noteworthy for a computer vendor: Oracle’s long history of binary compatibility across processor generations continues with M8, providing an upgrade path for customers when they are ready. Oracle has also publicly committed to supporting Solaris until at least 2034. DancingDinosaur expects to retire in a few years. Hope to not be reading Oracle or IBM press releases then.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Moves Quantum Computing Toward Commercial Systems

September 20, 2017

IBM seems determined to advance quantum computing. Just this week IBM announced that its researchers developed a new approach to simulate molecules on a quantum computer that may one day help revolutionize chemistry and materials science. In this case, the researchers implemented a novel algorithm that is efficient with respect to the number of quantum operations required for the simulation. This involved a 7-qubit processor.

7-qubit processor

In the diagram above IBM scientists successfully used six qubits on a purpose-built seven-qubit quantum device to address the molecular structure problem for beryllium hydride (BeH2) – the largest molecule simulated on a quantum computer to date.

Back in May IBM announced an even bigger quantum device: a prototype of the first commercial processor with 17 qubits, which leverages significant materials, device, and architecture improvements to make it the most powerful quantum processor IBM has created to date. This week’s announcement certainly didn’t surpass it in size. IBM engineered the 17-qubit system to be at least twice as powerful as what is available today to the public on the IBM Cloud, and it will be the basis for the first IBM Q early-access commercial systems.

It has become apparent to the scientists and researchers who try to work with complex mathematical problems and simulations that the most powerful conventional commercial computers are not up to the task. Even the z14 with its 10-core CPU and hundreds of additional processors dedicated to I/O cannot do the job.

As IBM puts it: Even today’s most powerful supercomputers cannot exactly simulate the interacting behavior of all the electrons contained in a simple chemical compound such as caffeine. The ability of quantum computers to analyze molecules and chemical reactions could help accelerate research and lead to the creation of novel materials, development of more personalized drugs, or discovery of more efficient and sustainable energy sources.

The interplay of atoms and molecules is responsible for all matter that surrounds us in the world. Now “we have the potential to use quantum computers to boost our knowledge of natural phenomena in the world,” said Dario Gil, vice president of AI research and IBM Q, IBM Research. “Over the next few years, we anticipate IBM Q systems’ capabilities to surpass what today’s conventional computers can do, and start becoming a tool for experts in areas such as chemistry, biology, healthcare and materials science.”

So commercial quantum systems are coming. Are you ready to bring a quantum system into your data center? Actually, you can try one today for free here or through GitHub, which offers a Python software development kit for writing quantum computing experiments, programs, and applications. Although DancingDinosaur will gladly stumble through conventional coding, quantum computing probably exceeds his frustration level even with a Python development kit.
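For a feel of what "a qubit can hold 0 and 1 at once" actually means, here is a minimal pure-Python statevector sketch (a toy, not IBM's Python SDK): a single qubit is a pair of complex amplitudes, a Hadamard gate puts it into an equal superposition, and measurement probabilities come from the squared amplitudes.

```python
import math

# Minimal single-qubit statevector simulator -- an illustrative toy, not
# IBM's quantum SDK. A qubit state is a pair of amplitudes (a0, a1) for
# |0> and |1>; measurement probabilities are |a0|^2 and |a1|^2.

def hadamard(state):
    """Apply the Hadamard gate: maps |0> to (|0> + |1>) / sqrt(2)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1."""
    return tuple(abs(a) ** 2 for a in state)

zero = (1.0, 0.0)               # qubit initialized to |0>
plus = hadamard(zero)           # equal superposition of |0> and |1>
p0, p1 = probabilities(plus)    # each outcome now has probability 1/2

print(round(p0, 3), round(p1, 3))  # prints: 0.5 0.5
```

Applying the Hadamard gate a second time returns the qubit to |0> exactly, which is the kind of delicate interference that decoherence, the fragility discussed below, destroys in real hardware.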

However, if your organization is involved in these industries—materials science, chemistry, and the like or is wrestling with a problem you cannot do on a conventional computer—it probably is worth a try, especially for free. You can try an easy demo card game that compares quantum computing with conventional computing.

But as reassuring as IBM makes quantum computing sound, don’t kid yourself; it is very complicated. Deploying even a small qubit machine is not going to be like buying your first PC. Quantum bits, reportedly, are very fragile and transitory. Labs keep them very cold just to stabilize the system and keep them from switching states before they should. Just think how you’d feel about your PC if the bit states of 0 and 1 suddenly and inexplicably changed.

That’s not the only possible headache. You have only limited time to work on qubits given their current volatility when not supercooled. Also, work still is progressing on advancing the quantum frameworks and mapping out ecosystem enablement.

Even IBM researchers admit that some problems may not run better on quantum computers. Until you pass a certain threshold, such as sufficient quantum volume, your workload might not perform better on a quantum computer. The IBM quantum team suggests it will take until 2021 to consistently solve a problem that has commercial relevance using quantum computing.

Until then, and even after, IBM is talking about a hybrid approach in which parts of a problem are solved with a quantum computer and the rest with a conventional system. So don’t plan on replacing your Z with a few dozen or even hundreds of qubits anytime soon.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Meet the new IBM LinuxONE Emperor II

September 15, 2017

Early this week IBM introduced the newest generation of the LinuxONE, the IBM LinuxONE Emperor II, built on the same technology as the IBM z14, which DancingDinosaur covered on July 19. The key feature of the new LinuxONE Emperor II is IBM Secure Service Container, presented as an exclusive LinuxONE technology representing a significant leap forward in data privacy and security capabilities. With the z14 the key capability was pervasive encryption. This time the Emperor II promises very high levels of security and data privacy assurance while rapidly addressing unpredictable data and transaction growth. Didn’t we just hear a story like this a few weeks ago?

IBM LinuxONE Emperor (not II)

Through the IBM Secure Service Container, for the first time data can be protected against internal threats at the system level from users with elevated credentials or hackers who obtain a user’s credentials, as well as external threats. Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container to be ready for Secure Service Container deployment. The application can be managed using the Docker and Kubernetes tools that are included to make Secure Service Container environments easy to deploy and use.

The Emperor II and the LinuxONE are being positioned as the premier Linux system for highly secured data serving. To that end, it promises:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

With the z14 you got this too, maybe worded slightly differently.

In terms of performance and scalability, IBM promises:

  • Industry-leading performance of Java workloads, up to 50% faster than Intel
  • Vertical scale to 170 cores, equivalent to hundreds of x86 cores
  • Simplification to make the most of your Linux skill base and speed time to value
  • SIMD to accelerate analytics workloads & decimal compute (critical to financial applications)
  • Pause-less garbage collection to enable vertical scaling while maintaining predictable performance

Like the z14, the Emperor II also lays a foundation for data serving and next gen apps, specifically:

  • Adds performance and security to new open source DBaaS deployments
  • Develops new blockchain applications based on the proven IBM Blockchain Platform—in terms of security, blockchain may prove more valuable than even secure containers or pervasive encryption
  • Support for data-in-memory applications and new workloads using 32 TB of memory—that’s enough to run production databases entirely in memory (of course, you’ll have to figure out if the increased performance, which should be significant, is worth the extra memory cost)
  • A build-your-cloud approach for providers wanting a secure, scalable, open source platform

If you haven’t figured it out yet, IBM sees itself in a titanic struggle with Intel’s x86 platform.  With the LinuxONE Emperor II IBM senses it can gain the upper hand with certain workloads. Specifically:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (not included in the core count), giving the platform the best I/O capacity and performance in the industry
  • Its shared memory, vertical scale architecture delivers a measurably better architecture for stateful workloads like databases and systems of record
  • The LinuxONE/z14 hardware designed to still give good response time at up to 100% utilization, which simplifies the solution and reduces the extra costs many data centers assume are necessary because they’re used to 50% utilization
  • The Emperor II can be ordered in a configuration designed and tested for earthquake resistance
  • The z-based LinuxONE infrastructure has survived fire and flood scenarios where all other server infrastructures have failed

That doesn’t mean, however, the Emperor II is a Linux no brainer, even for shops facing pressure around security compliance, never-fail mission critical performance, high capacity, and high performance. Change is hard and there remains a cultural mindset based on the lingering myth of the cheap PC of decades ago.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Syncsort Finds New Corporate Home and Friend

September 8, 2017

Centerbridge Partners, L.P., a private investment firm, completed the $1.26 billion acquisition of enterprise software providers Syncsort Incorporated and Vision Solutions, Inc. from affiliates of Clearlake Capital Group, L.P. Clearlake, which acquired Syncsort in 2015 and Vision in 2016, will retain a minority ownership stake in the combined company.

Syncsort is a provider of enterprise software and a player in Big Iron to Big Data solutions. DancingDinosaur has covered it here and here. According to the company, customers in more than 85 countries rely on Syncsort to move and transform mission-critical data and workloads. Vision Solutions provides business resilience tools addressing high availability, disaster recovery, migration, and data sharing for IBM Power Systems.

The company apparently hasn’t suffered from being passed between owners. Syncsort has been active in tech acquisitions for the past two years as it builds its data transformation footprint. Just a couple of weeks ago, it acquired Metron, a provider of cross-platform capacity management software and services. Metron’s signature athene solution delivers trend-based forecasting, capacity modeling, and planning capabilities that enable enterprises to optimize their data infrastructure to improve performance and control costs on premise or in the cloud.
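Trend-based forecasting of the sort just described can be sketched with a simple least-squares fit: extrapolate utilization growth and estimate when a capacity threshold will be crossed. This is a toy sketch of the general technique, not Metron athene's actual model; the sample utilization figures are invented.

```python
# Toy trend-based capacity forecast -- an illustration of the technique,
# not Metron athene's model. Fit a straight line to daily utilization
# samples and estimate when a capacity threshold will be crossed.

def fit_trend(samples):
    """Ordinary least-squares slope/intercept for (day, utilization) pairs."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def days_until(samples, threshold):
    """Days from day 0 until the fitted trend reaches the threshold."""
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None                    # utilization flat or shrinking
    return (threshold - intercept) / slope

# Five days of CPU utilization (%) growing roughly 2 points per day.
history = [(0, 60.0), (1, 62.1), (2, 63.9), (3, 66.0), (4, 68.1)]
print(round(days_until(history, 90.0)))   # prints: 15
```

Real capacity planning tools layer seasonality, workload mix, and what-if modeling on top of this, but the core idea is the same: project the trend forward and flag the crossing date before it arrives.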

This acquisition is the first since the announcement that Syncsort and Vision Solutions are combining, adding expertise and proven leadership in the IBM i and AIX Power Systems platforms and reinforcing its Big Iron to Big Data focus. Syncsort has also long been an established player in the mainframe business. Its Big Iron to Big Data promises to be a fast-growing market segment comprising solutions that optimize traditional data systems and deliver mission-critical data from these systems to next-generation analytic environments using innovative Big Data technologies. Metron’s solutions and expertise are expected to contribute to the company’s data infrastructure optimization portfolio.

Syncsort has been on a roll since late 2016 when, backed by Clearlake, it acquired Trillium Software, a global provider of data quality solutions. The acquisition of Trillium was the largest in Syncsort’s history at the time and brought together data quality and data integration technology for enterprise environments. The combination of Syncsort and Trillium, according to the company, enables enterprises to harness all their valuable data assets for greater business insights, applying high-performance and scalable data movement, transformation, profiling, and quality across traditional data management technology stacks as well as Hadoop and cloud environments.

Specifically, Syncsort and Trillium both have a substantial number of large enterprise customers seeking to generate new insights by combining traditional corporate data with diverse information sources from mobile, online, social, and the Internet of Things. Syncsort expects these organizations to continue to rely heavily on next-generation analytic capabilities, creating a growing need for its best-in-class data integration and quality solutions to make their Big Data initiatives successful. Together, Syncsort and Trillium will continue to focus on providing customers with these capabilities for traditional environments, while leading the industry in delivering them for Hadoop and Spark too.

Earlier this year Syncsort integrated its own Big Data integration solution, DMX-h, with Cloudera Director, enabling organizations to easily deploy DMX-h along with Cloudera Enterprise on Amazon Web Services, Microsoft Azure, or Google Cloud. By deploying DMX-h with CDH, organizations can quickly pull data into new, ready-to-work clusters in the cloud—accelerating the time to capture cloud benefits, including cost savings and Data-as-a-Service (DaaS) delivery.

“As organizations liberate data from across the enterprise and deliver it into the cloud, they are looking for a self-service, elastic experience that’s easy to deploy and manage. This is a requirement for a variety of use cases – from data archiving to analytics that combine data originating in the cloud with on premise reference data,” said Tendü Yoğurtçu, Chief Technology Officer.

“By integrating DMX-h with Cloudera Director,” Yoğurtçu continued, “DMX-h is instantly available and ready to put enterprise data to work in newly activated cloud clusters.”

Syncsort DMX-h pulls enterprise data into Hadoop in the cloud and prepares that data for business workloads using native Hadoop frameworks, Apache Spark, or MapReduce, effectively enabling IT to achieve time-to-value goals and quickly deliver business insights.

It is always encouraging to see the mainframe eco-system continue to thrive. IBM’s own performance over the past few years has been anything but encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Promises Easy Fast Data Protection

September 1, 2017

Data protection used to be simple. You simply made a couple of copies of your data and stored them someplace safe. That hasn’t worked for years at most enterprises and certainly won’t work going forward. There are too many systems and too much data. Now you have to contend with virtual machines, NoSQL databases, cloud storage, and more. In the face of growing compliance mandates and a bevy of threats like ransomware, data protection has gotten much more complicated.

Last week IBM simplified it again by announcing IBM Spectrum Protect Plus. It promises to make data protection available in as little as one hour.

IBM achieves tape breakthrough

August turned out to be a good month for IBM storage. In addition to introducing Spectrum Protect Plus, IBM and Sony researchers achieved a record 201 Gb/in2 (gigabits per square inch) in areal density. That translates into the potential to record up to about 330 terabytes (TB) of uncompressed data on a single tape cartridge. Don’t expect commercially available products with this density soon. But you will want it sooner than you may think, as organizations face the need to collect, store, and protect massive amounts of data for a wide range of use cases, from surveillance images to analytics to cognitive to, eventually, quantum computing.
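The quoted figures can be sanity-checked with back-of-the-envelope arithmetic. Assuming standard half-inch tape (an assumption for illustration, not IBM's published cartridge geometry), 330 TB at 201 Gb/in² implies roughly 670 meters of recordable tape, which is the right order of magnitude for a tape cartridge:

```python
# Back-of-the-envelope check of the tape figures. The half-inch width is
# an assumed nominal value; IBM's actual cartridge geometry and usable
# recording width differ in detail.

areal_density_bits = 201e9        # 201 Gb per square inch
capacity_bytes = 330e12           # ~330 TB uncompressed per cartridge
tape_width_in = 0.5               # nominal half-inch tape (assumed)

required_area_in2 = capacity_bytes * 8 / areal_density_bits
required_length_m = required_area_in2 / tape_width_in * 0.0254

print(round(required_area_in2))   # prints: 13134  (square inches)
print(round(required_length_m))   # prints: 667    (meters of tape)
```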

IBM Spectrum Protect Plus delivers data availability using snapshot technology for rapid backup, recovery, and data management. Designed to be used by virtual machine (VM) and application administrators, it also provides data clone functionality to support and automate DevOps workflows. Unlike other data availability solutions, IBM Spectrum Protect Plus performs data protection and monitoring based on automated Service Level Agreements to ensure proper backup status and retention compliance, noted IBM.
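SLA-driven retention of the kind described can be sketched as a policy check: given snapshot timestamps and an SLA consisting of a backup frequency plus a retention window, report whether backups are current and which snapshots have aged out. A hypothetical illustration of the idea, not Spectrum Protect Plus internals; the SLA values and timestamps are invented.

```python
from datetime import datetime, timedelta

# Toy SLA compliance check for snapshot backups -- a hypothetical sketch,
# not IBM Spectrum Protect Plus logic. An SLA here is a backup frequency
# plus a retention window.

def sla_status(snapshots, now, frequency, retention):
    """Return (backup_current, expired_snapshots) for a list of datetimes."""
    expired = [s for s in snapshots if now - s > retention]
    live = [s for s in snapshots if now - s <= retention]
    current = bool(live) and (now - max(live)) <= frequency
    return current, expired

now = datetime(2017, 9, 1, 12, 0)
snaps = [now - timedelta(hours=h) for h in (2, 26, 50, 200)]

current, expired = sla_status(
    snaps, now,
    frequency=timedelta(hours=24),   # SLA: back up at least daily
    retention=timedelta(days=7),     # SLA: keep snapshots one week
)
print(current, len(expired))  # prints: True 1  (the 200-hour-old snapshot aged out)
```

Automating exactly this kind of check, at enterprise scale and across VMs and applications, is what lets a product claim retention compliance without an admin auditing backup status by hand.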

The company has taken to referring to Spectrum Protect Plus as the future of data protection, recovery, and data reuse. IBM designed it to be fast, modern, lightweight, low cost, easy to use, and simple to deploy while delivering rapid time to value. As noted at the top, the company claims it can make effective data protection available in an hour without relying on highly trained storage experts. Spectrum Protect Plus delivers data protection, according to IBM, “anyone can manage,” adding that it installs in less than 15 minutes.

You get instant data and virtual machine recovery, restored from a snapshot. It is so slick, IBM managers say, that “when someone sends you a ransomware letter you can just laugh at them.” Only, of course, if you have been diligent about making backups. Don’t blame the Protect Plus tool, which is thoroughly automated behind the scenes. It was announced last week but won’t be available until the fourth quarter of this year.

Protect Plus also brings a handful of new goodies for different stakeholders, as IBM describes it:

  • CIOs get a single view of the backup and recovery status across the data portfolio and the elimination of silos of data backup and recovery.
  • Senior IT Managers (VM and application admins) can rapidly self-serve their data availability without complexity. IBM Spectrum Protect Plus also provides the ability to integrate VM and application backups into the business rules of the enterprise.
  • Senior application LOB owners get data lifecycle management with near-instantaneous recovery, copy management, and global search for fast data access and recovery.

Specifically designed for virtual machine (VM) environments to support daily administration, the product deploys rapidly without agents. It also features a simple, role-based user interface (UI) with intuitive global search for fast recovery.

Data backup and recovery, always a pain in the neck, has gotten far more complex. For an enterprise data center facing stringent data protection and compliance obligations while juggling the backup of virtual and physical systems, probably across multiple clouds and multiple data centers, the challenges and risks have grown by orders of magnitude. You will need tools like Spectrum Protect Plus, especially the Plus part, which IBM insists is a completely new offering.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Blockchain Platform Aims for Immutable Accuracy

August 25, 2017

Earlier this week IBM announced a major blockchain collaboration among a group of leading companies across the global food supply chain. The goal is to reduce the number of people falling ill or even dying from eating contaminated food. IBM believes its blockchain platform is ideally suited to address these challenges because it establishes a trusted environment that maintains an accurate, consistent, immutable record of every transaction.

Blockchain can improve food traceability

The food segment is just one of many industries IBM will target for its blockchain platform. It describes the platform as ideally suited to help address varied industry challenges because it establishes a trusted environment for all transactions. IBM claims it as the only fully integrated enterprise-ready blockchain platform designed to accelerate the development, governance, and operation of a multi-institution business network. Rival vendors, like Accenture, may disagree. In the case of the global food supply chain, all participants – growers, suppliers, processors, distributors, retailers, regulators, and consumers – can gain permissioned access to known and trusted information regarding the origin and state of food. In December 2016 DancingDinosaur reported on IBM and Walmart using blockchain for food safety.

IBM’s blockchain platform is built around Hyperledger Composer, which integrates with popular development environments and uses open developer tools and accepted business terms to generate blockchain code and smart contracts. It also includes sample industry use cases. Using IBM’s platform, developers can express business logic in JavaScript, and the APIs keep development work at the business level rather than being highly technical. This makes it possible for most any programmer to be a blockchain developer. Additionally, a variety of IBM Developer Journeys for blockchain are available featuring free open source code, documentation, APIs, architecture diagrams, and one-click deployment Git repositories to fast-track building, according to IBM.
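The “immutable record” property that makes blockchain attractive for traceability rests on hash chaining: each block commits to the hash of its predecessor, so altering any earlier record invalidates every link after it. A generic sketch of the idea, not IBM’s Hyperledger Fabric code, with hypothetical food-lot records:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash over a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Link a new record to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def verify(chain):
    """Tampering with any earlier block breaks every later link."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, {"lot": "A17", "event": "harvested"})  # hypothetical data
append_block(chain, {"lot": "A17", "event": "shipped"})
print(verify(chain))               # True
chain[0]["record"]["lot"] = "B99"  # rewrite history
print(verify(chain))               # False: the chain detects the change
```

Real permissioned networks like Hyperledger Fabric add consensus, membership, and smart contracts on top, but this chaining is what makes the shared record tamper-evident.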

For governance and operation the platform provides activation tools for new networks, members, smart contracts, and transaction channels. It also includes a multi-party workflow tool with a member activities panel, integrated notifications, and secure signature collection for policy voting. In addition, a new class of democratic governance tools, designed to help improve productivity across organizations, uses a voting process that collects signatures from members to govern member invitations, the distribution of smart contracts, and the creation of transaction channels. By enabling the quick onboarding of participants, assigning roles, and managing access, organizations can begin transacting via the blockchain quickly.

In operating the network, the IBM blockchain platform provides always-on high availability with seamless software and blockchain network updates; a hardened security stack with no privileged access, which blocks malware; and built-in blockchain monitoring for full network visibility. Woven throughout the platform is Hyperledger Fabric. The platform also provides the highest-level commercially available tamper-resistant protection for encryption keys, FIPS 140-2 Level 4.

Along with its blockchain platform, IBM is advancing other blockchain supply chain initiatives by using the platform for an automated billing and invoicing system. Initial work to use blockchain for invoicing is underway, starting with Lenovo. This will provide an audit-ready solution with full traceability of billing and operational data, and help speed onboarding time for new vendors and new contract requirements, according to IBM.

The platform leverages IBM’s work with more than 400 organizations. It includes insights gained as IBM has built blockchain networks across industries including financial services, supply chain and logistics, retail, government, and healthcare.

Extensively tested and piloted, IBM’s new blockchain platform addresses a wide range of enterprise pain points, including both business and technical requirements around security, performance, collaboration, and privacy. It includes innovation developed through open source collaboration in the Hyperledger community, including the newest Hyperledger Fabric v1.0 framework and Hyperledger Composer blockchain tool, both hosted by the Linux Foundation.

DancingDinosaur has previously noted that the z appears ideal for blockchain, based on the z13’s scalability, security, and performance. The new z14, with its automated, pervasive encryption, may be even better. The Hyperledger Composer capabilities along with the sample use cases promise an easy, simple way to try blockchain among some suppliers and partners.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Join Me at Share in Providence

August 3, 2017

Share runs all week, but DancingDinosaur plans to be there on Tues., 8/8. Share is happening at the Rhode Island Convention Center, Providence, RI, August 6–11, 2017. Get details at Share.org. The keynote that day looks interesting. As Share describes it: security and regulatory compliance are concerns that impact every professional within your IT organization.

In the Tuesday Keynote presentation at SHARE Providence, expert panelists will offer their perspectives on how various roles are specifically impacted by security, and what areas you should be most concerned about in your own roles. Listen to your peers share their insights in a series of TED-style Talks, starting with David Hayes of the Government Accountability Office, who will focus on common compliance and risk frameworks. Stu Henderson of Henderson Consulting will discuss organizational values and how those interacting with the systems are part of the overall control environment, followed by Simon Dodge of Wells Fargo providing a look at proactive activities in the organization that are important for staying ahead of threats and reducing the need to play catch-up when the auditors arrive. In the final talk of the morning, emerging cyber security threats will be discussed by Buzz Woeckener of Nationwide Insurance, along with tips on how to be prepared. At the conclusion of their presentations, the panelists will address audience questions on the topics of security and compliance.

You’ll find me wandering around the sessions and the expo. Will be the guy wearing the Boston hat.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

New Software Pricing for IBM Z

July 27, 2017

One of the often-overlooked benefits of the introduction of a new mainframe like the Z is cost savings. Even though the machine may cost more, the performance and capabilities it delivers typically cost less on a per-unit basis. In the case of the new Z, it’s not just a modest drop in price/performance. With the new Z, IBM announced three new Container Pricing models for IBM Z, providing greatly simplified software pricing that promises flexible deployment with competitive economics vs. public clouds and on-premises x86 environments.

Working on the new IBM Z

Here are the three biggest software pricing changes:

  • Predictable and Transparent Container Pricing—providing organizations greatly simplified software pricing that combines flexible deployment with competitive economics vs. public clouds and on-premises x86 environments. To IBM, a container can be any address space, however large or small. You can have any number of containers. “Container Pricing provides collocated workloads with line-of-sight pricing to a solution,” explained Ray Jones, VP, IBM Z Software and Hybrid Cloud. With container pricing, Jones continued, “the client determines where to deploy using WLM, z/OS and SCRT do the rest.”
  • Application dev and test—highly competitive stand-alone pricing for z/OS based development and test workloads. Organizations can increase their DevTest capacity up to 3 times at no additional MLC cost. This will be based on the organization’s existing DevTest workload size. Or a company can choose the multiplier it wants and set the reference point for both MLC and OTC software.
  • Payment systems pricing—based on the business metric of payments volume a bank processes, not the available capacity. This gives organizations much greater flexibility to innovate affordably in a competitive environment, particularly in the fast-growing Instant Payment segment. To use the new per-payment pricing, Jones added, up-front licensing of IBM Financial Transaction Manager (FTM) software is required.

The Container Pricing options are designed to give clients the predictability and transparency they require for their business. The pricing models are scalable both within and across logical partitions (LPARs) and deliver greatly enhanced metering, capping, and billing capabilities. Container Pricing for IBM Z is planned to be available by year-end 2017 and enabled in z/OS V2.2 and z/OS V2.3.

Jones introduced the software discounts by reiterating that the focus was container pricing for IBM Z, and he promised that there will be a technology software benefit with the z14 as there was with the z13. IBM, he added, will offer a way to migrate to the new pricing: “This is a beginning of a new beginning. Clearly as we go forward we want to expand what’s applicable to container pricing.” His clear implication: IBM is intent on expanding the discounting it started several years ago, when it introduced discounts for mobile transactions running on the z, whose skyrocketing volume was driving up monthly software cost averages.

To understand the latest changes you need to appreciate what IBM means by container. This is not just about Docker containers. To IBM, a container simply is an address space. An organization can have multiple containers in a logical partition, have as many containers as it wants, and change the size of containers as needed.

The fundamental advantage of IBM’s container pricing is that it enables co-location of workloads for improved performance and reduced latency, hence IBM’s repeated references to line-of-sight pricing. In short, this is about MLC (four-hour rolling average) pricing. The new pricing removes what goes on inside a container from consideration: the price of the container is just that, the price of the container. It won’t impact the four-hour rolling average, resulting in very predictable pricing.
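For readers new to MLC, the four-hour rolling average (R4HA) that drives the monthly bill can be illustrated with a simplified calculation. The sketch below assumes 5-minute MSU utilization samples (48 per four-hour window), which is how sub-capacity reporting is commonly described; the numbers are invented for illustration:

```python
# Simplified illustration of the peak 4-hour rolling average (R4HA)
# that drives MLC charges. Real SCRT reporting works from SMF data;
# the sample values here are invented.
def peak_r4ha(msu_samples, window=48):
    """Peak 4-hour rolling average over 5-minute MSU samples."""
    averages = (
        sum(msu_samples[i:i + window]) / window
        for i in range(len(msu_samples) - window + 1)
    )
    return max(averages)

# 24 hours at a 100 MSU baseline with a 2-hour spike to 400 MSU:
day = [100] * 132 + [400] * 24 + [100] * 132
print(peak_r4ha(day))  # 250.0: the bill sees 250 MSU, not the 400 MSU peak
```

The rolling average smooths short spikes, which is exactly why carving a workload into a separately priced container, so it no longer inflates the R4HA of everything around it, can make the bill predictable.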

The benefits are straightforward: simplified pricing for qualified solutions and the freedom to deploy in the best way. And IBM can price competitively to the customer’s solution, in effect solution-specific pricing. Combined with the new payments-based price metric, IBM is trying to put together a competitive cost/price story. Of course, it is all predicated on the actual prices IBM finally publishes. Let’s hope they are as competitive as IBM implies.

DancingDinosaur never passes up an opportunity to flog IBM for overpricing its systems and services. From discussions with Jones and other IBM managers during the pre-launch briefings, the company may finally understand the need to make the mainframe, or z, or Z, or whatever IBM calls it, price-competitive on an operational level today. Low TCO, low cost of IOPS, or low cost of QoS is not the same thing.

This is especially important now. Managers everywhere appear to be waking up to the need to transform their mainframe-based businesses, at least in part, by becoming competitive digital businesses. DancingDinosaur never imagined that he would post something referencing the mainframe as a cost-competitive system able to rival x86 systems not just on quality of service but on cost. With the IBM Z the company is talking about competing with an aggressive cost strategy. It’s up to you, paying customers, to force them to deliver.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
