
Illusive Networks’ Mainframe Guard to Deter Cyber Attacks

October 18, 2017

At a time when IBM promised that automatic pervasive encryption on the new Z would spell an end to worries about security, an Israeli company stepped forward this week to insist that the z14, or just Z, can’t do the entire job. Pervasive encryption can be undermined by Advanced Persistent Threats (APT), which co-opt legitimate users as they access protected data. Illusive Networks introduced its security tool, Mainframe Guard, earlier this week at Sibos in Toronto.

Mainframe Guard enables admins to take action against advanced, targeted cyberattacks by detecting and disrupting movement toward critical business assets early in the attack cycle. Illusive deploys sophisticated and confusing honeypots to distract, misguide, and trap an attacker before he or she can touch the data; the security staff can then identify the intrusion and intervene early. With the new Z and pervasive security, of course, that data will already be encrypted and the keys safely stored out of reach.

IBM Breach Cost Estimator

At a time when organizations of all types and in every market segment are under attack from hackers, ransomware, data breaches, and more, all data center managers should welcome any data protection tools that work. Yet 96% don’t even bother to encrypt—too costly, too cumbersome, too complicated. As DancingDinosaur noted at the Z launch, the list of excuses is endless. Of the 9 billion records breached since 2013, only 4% were encrypted! And you already know why: encryption is tedious, impacts staff, slows system performance, costs money, and more.

Such attitudes, especially at a mainframe shop, invite serious breaches. While IBM’s latest mainframe automatically encrypts all transaction data, the vast majority of systems expose significant vulnerabilities.

Making the situation even worse, the need to secure against innovations such as mobile applications, cloud-based services, and smart devices presents new challenges. “Organizations are sometimes reluctant to upgrade legacy applications and databases on these enterprise servers, particularly in today’s always-on economy. But unless you address every link in the end-to-end process, you haven’t secured it,” noted Andrew Howard, CTO at Kudelski Security, which cites experience remediating mainframe systems in the wake of cyber breaches.

Even older mainframe shops—those predating pervasive encryption—can have effective security. Consider adding Mainframe Guard, which requires you to actively follow the threats and initiate defensive actions.

So how might an attacker today get around the Z’s pervasive encryption? The attack typically starts with lurking and watching as legitimate users gain access to the system. The attacker then impersonates a legitimate user. Illusive, however, lures the attacker to locations where the attacker may think he or she has found a trove of intelligence gold. “Remember, the attacker doesn’t know which machine he has landed on,” said Ofer Israeli, CEO of Illusive Networks. Unless the attacker brings inside information, he is blind inside the network. From there Illusive constantly baits the attacker with deceptive information, which the attacker would have to dodge correctly to avoid giving away the attack.
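
To make the deception idea concrete, below is a minimal Python sketch of a decoy service, purely illustrative and not Illusive’s actual code: it listens on a port no legitimate user has any reason to touch, presents a fake mainframe login banner, and treats any connection as a high-confidence alert. The port number, banner text, and log file name are all hypothetical.

```python
# Minimal sketch of the decoy-service idea behind deception tools (not
# Illusive's product): stand up a fake service that no legitimate user has
# any reason to touch, and treat ANY connection to it as a high-confidence
# signal of lateral movement.
import logging
import socketserver

logging.basicConfig(filename="decoy_alerts.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

FAKE_BANNER = b"z/OS TSO/E LOGON\r\nENTER USERID -\r\n"  # bait only, not a real host

class DecoyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Anyone connecting here is, by definition, poking around.
        logging.info("ALERT: decoy touched from %s:%s", *self.client_address)
        self.request.sendall(FAKE_BANNER)
        try:
            attempt = self.request.recv(256)  # capture whatever the attacker types
            logging.info("Credential bait submitted: %r", attempt)
        except OSError:
            pass

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 2323), DecoyHandler) as srv:
        srv.serve_forever()  # in practice these alerts would feed your SIEM
```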

Leveraging Illusive’s deceptive approach, Mainframe Guard works by detecting malicious movement toward the mainframe and providing a non-intrusive method of protecting the systems, the data they host, and the services they support. The solution comprises:

  • A family of deceptions for mainframe environments
  • The ability to display mainframe assets along with other sensitive assets in the Illusive Attacker View portion of the management console, which enables security personnel to see potential attack paths toward the mainframe and track the proximity and progress of attackers toward these assets
  • Purpose-built views of the mainframe environment that monitor unexpected connections to mainframe servers
  • An interactive layer added to the Illusive Trap Server mimics mainframe behavior and login screens, tricking attackers into believing they are interacting with an actual mainframe system.

When everything is encrypted and the keys, APIs, and more are safeguarded with the Z’s pervasive encryption on top of Illusive’s deceptions, maybe you can finally begin to relax, at least until the next wave of attacks starts to emerge.

BTW, DancingDinosaur will be away for 2 weeks. Given IBM’s just-released Q3 results, you can hear IBM’s relief even before I’m gone. Expect some celebrating around the Z; nothing like a new machine to boost revenues. Look for DancingDinosaur the week of Nov. 6.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM combines Power AI and Data Science Experience

October 13, 2017

The AI bandwagon is getting big fast. Gartner reports global IT spending in 2018 will increase 4.3% over last year, topping $3.7 trillion, driven by business strategies tied to varying degrees of digital transformation and more uses of artificial intelligence. AI actually comes out as Gartner’s #1 tech trend for 2018, with the company saying: the ability to use AI to enhance decision making, reinvent business models and ecosystems, and remake the customer experience will drive the payoff for digital initiatives through 2025.

IBM already has made cognitive computing, one of its myriad terms for AI, a strategic imperative. To underscore the point, on Oct. 10 the company announced the integration of two key software tools for AI: PowerAI deep learning with the IBM Data Science Experience. If you were dithering about how to get involved in AI or cognitive computing, here’s a way to start. The Data Science Experience is available through a per-user licensing model while PowerAI is available for free, at least for now.

Rethinking the way work works

With this integration, data scientists will be able to develop AI models with leading open source deep learning frameworks like TensorFlow or Caffe to unlock analytical insights. The Data Science Experience (DSX) is a collaborative workspace that enables users to develop machine learning models and manage their data and trained models. PowerAI adds top-notch deep learning libraries, algorithms, and capabilities from popular open-source frameworks. The deep learning frameworks will be able to sort through all types of data – sound, text, or visuals – to enhance learning models on DSX.

For example, banks today can leverage deep learning to make more informed predictions about clients who might default on credit, to better detect credit card fraud, or to sense clients who are ready to switch banks, which would give the bank a chance to make an offer that might save the account and reduce churn.
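
For a rough sense of what such a model looks like in one of the frameworks PowerAI packages, here is a minimal sketch of a credit-default classifier using TensorFlow’s Keras API. The feature count, layer sizes, and synthetic data are placeholders, not an IBM-supplied recipe.

```python
# Minimal sketch of a deep learning credit-default classifier using the
# Keras API bundled with TensorFlow (one of the frameworks PowerAI ships).
# Feature count, layer sizes, and the synthetic data are placeholders.
import numpy as np
import tensorflow as tf

n_features = 20  # e.g., balances, payment history, utilization
X_train = np.random.rand(5000, n_features).astype("float32")  # stand-in for real data
y_train = np.random.randint(0, 2, size=(5000, 1))             # 1 = defaulted

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of default
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=128, validation_split=0.1)

# Score new applicants; anything above a chosen threshold gets flagged for review.
new_clients = np.random.rand(10, n_features).astype("float32")
print(model.predict(new_clients))
```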

In manufacturing, deep learning models can be trained to identify potential failures before they happen by analyzing historical data from the equipment itself. Through such AI-driven predictive analysis, the manufacturer can reduce downtime and boost productivity. As these learning models continuously evolve, they get smarter at identifying anomalies and can alert the team on site to take remedial action before a production line unexpectedly stops. They can also advise on specific actions to take.
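
A common deep learning pattern for this kind of predictive maintenance is an autoencoder trained only on normal sensor readings; records the model cannot reconstruct well are flagged as anomalies. The sketch below, again with placeholder sensor counts and data, shows the idea.

```python
# Sketch of autoencoder-based anomaly detection for equipment sensor data:
# train on "normal" readings only, then flag records with high reconstruction
# error. Sensor count, threshold, and data are placeholders.
import numpy as np
import tensorflow as tf

n_sensors = 12
normal = np.random.normal(0.0, 1.0, size=(10000, n_sensors)).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(n_sensors,)),
    tf.keras.layers.Dense(4, activation="relu"),  # compressed representation
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(n_sensors),             # reconstruct the original reading
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=5, batch_size=256)

# Reconstruction error on normal data sets the alert threshold.
recon = autoencoder.predict(normal)
errors = np.mean((normal - recon) ** 2, axis=1)
threshold = np.percentile(errors, 99)

def is_anomalous(reading):
    err = np.mean((reading - autoencoder.predict(reading[None, :])[0]) ** 2)
    return err > threshold  # True -> alert the team before the line stops
```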

The Distributed Deep Learning library included with PowerAI from IBM Research reduces deep learning training times from weeks to hours. Integrating such capabilities with DSX brings accelerated deep learning to DSX’s collaborative workspace environment, which further speeds the results.

The growth of deep learning and machine learning is fueled, at least in part, by a rapid rise in computing capability via accelerators like NVIDIA Tesla GPUs. IBM optimized deep learning frameworks like TensorFlow in PowerAI for IBM Power Systems. For example, the company takes advantage of the industry’s only CPU-to-GPU implementation of the NVIDIA NVLink high-speed interconnect, which acts as a communications superhighway of sorts, to speed the results.

Frameworks like TensorFlow and Caffe democratize insights through AI, which is expected to result in better client experiences sooner and in new business models. Now that the PowerAI deep learning enterprise software distribution is integrated into DSX, data scientists get a collaborative workspace to build, manage, and deploy AI models from which both the company and its customers benefit.

The PowerAI libraries and algorithms are optimized for the IBM Power Systems S822LC for High Performance Computing, enabling users ranging from data scientists to business analysts to engage in machine and deep learning through the Data Science Experience collaborative environment. Data scientists are particularly well-positioned to look at deep learning to leverage data as a competitive differentiator and asset.

DSX and PowerAI are packaged as two separate software offerings but integrated and designed to work together.  PowerAI is available only on Power systems while DSX is available on IBM Cloud and on-premises through Power or x86.

As IBM puts it: when it comes to deep learning, faster is better, enabling enterprises of all types to tap into the unlimited potential of AI. If you are a Power shop, grab the free PowerAI deal while it’s available, then sign up at least a few of your users for DSX and see what you can do.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware Brings the Mainframe to AWS

October 6, 2017

IBM talks about the power of the cloud for the mainframe and has turned Bluemix into a cloud development and deployment platform for open systems. Where’s the Z?

Compuware, which for the past several years has delivered quarterly advances in its mainframe tooling, has now made those tools available through AWS. Not only have those advances made mainframe management and operations more intuitive and graphical through a string of Topaz releases, but with AWS the tooling is now accessible from anywhere. DancingDinosaur has been reporting on Compuware’s string of Topaz advances for two years, here, here, and here.

By tapping the power of both the cloud and the mainframe, enterprises can deploy Topaz to their global development workforce in minutes, accelerating the modernization of their mainframe environments. As Compuware noted: mainframe shops now have the choice of deploying Topaz on-premise or on AWS. By leveraging the cloud, they can deploy Topaz more quickly and securely, and scale without capital costs, while benefiting from new Topaz features as soon as the company delivers them.

To make Topaz work on AWS, Compuware turned to Amazon AppStream 2.0 technology, which provides global development, test, and ops teams with immediate and secure cloud access to Compuware’s entire innovative mainframe Agile/DevOps solution stack, mainly Topaz. Amazon AppStream 2.0 is a fully managed, secure application streaming service that allows users to stream desktop applications from AWS to any device running a web browser.
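
For a feel of the AWS side of this, here is a hedged boto3 sketch that checks an AppStream 2.0 fleet and generates a temporary streaming URL a developer could open in a browser. The stack, fleet, and user names are hypothetical; Compuware’s own provisioning handles everything Topaz-specific.

```python
# Hedged sketch of the AWS side: AppStream 2.0 streams a desktop app (here,
# hypothetically, Topaz) to a web browser. Stack, fleet, and user names are
# made up for illustration; Compuware's tooling handles the Topaz specifics.
import boto3

appstream = boto3.client("appstream", region_name="us-east-1")

# Confirm the fleet that hosts the streamed application is running.
fleets = appstream.describe_fleets(Names=["topaz-fleet"])  # hypothetical name
print(fleets["Fleets"][0]["State"])

# Generate a short-lived URL a developer can open in any web browser.
resp = appstream.create_streaming_url(
    StackName="topaz-stack",       # hypothetical stack
    FleetName="topaz-fleet",
    UserId="dev001@example.com",
    Validity=3600,                 # seconds the URL remains valid
)
print(resp["StreamingURL"])
```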

Cloud-based deployment of Topaz, Compuware notes, allows for significantly faster implementation, simple administration, a virtual integrated development environment (IDE), adaptive capacity, and immediate developer access to software updates. The last of these is important, since Compuware has been maintaining a quarterly upgrade release schedule, in effect delivering new capabilities every 90 days.

Compuware is in the process of patenting technology to offer an intuitive, streamlined configuration menu that leverages AWS best practices to make it easy for mainframe admins to quickly configure secure connectivity between Topaz on AWS and their mainframe environment. It also enables the same connectivity to their existing cross-platform enterprise DevOps toolchains running on-premise, in the cloud, or both. The upshot: organizations can deploy Topaz across their global development workforce in minutes, accelerating the modernization of their mainframe environments.

Using Topaz on AWS, notes Compuware, mainframe shops can benefit in a variety of ways, specifically:

  • Modify, test and debug COBOL, PL/I, Assembler and other mainframe code via an Eclipse-based virtual IDE
  • Visualize complex and/or undocumented application logic and data relationships
  • Manage source code and promote artifacts through the DevOps lifecycle
  • Perform common tasks such as job submission, review, print and purge
  • Leverage a single data editor to discover, visualize, edit, compare, and protect mainframe files and data

The move to the Eclipse-based IDE represents a giant step for traditional mainframe shops trying to modernize. Eclipse is a leading open source IDE with IBM as a founding member. In addition to Eclipse, Compuware also integrates with other modern tools, including Jenkins, SonarSource, and Atlassian. Jenkins is an open source automation server written in Java that helps automate the non-human parts of the software development process through continuous integration while facilitating the technical aspects of continuous delivery. SonarSource enables visibility into mainframe application quality. Atlassian develops products for software developers, project managers, and content management and is best known for Jira, its issue tracking application.

Unlike many mainframe ISVs, Compuware has been actively partnering with various innovative vendors to extend the mainframe’s tool footprint and bring the kind of tools to the mainframe that young developers, especially Millennials, want. Yes, it is possible to access the sexy REST-based Web and mobile tools through IBM’s Bluemix, but for mainframe shops it appears kludgy. By giving its mainframe customers access through AWS to advanced tools, Compuware improves on this. And AWS beats Bluemix in terms of cloud penetration and low cost.

All mainframe ISVs should make their mainframe products accessible through the cloud if they want to keep their mainframe products relevant. IBM has its cloud; of course there is AWS, Microsoft has Azure, and Google rounds out the top four. These and others will keep cloud economics competitive for the foreseeable future. Hope to see you in the cloud.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New Oracle SPARC M8 Mimics IBM Z

September 28, 2017

Not even two weeks ago, Oracle announced its eighth-generation SPARC platform, the SPARC M8, as an engineered system and as a cloud service. The new system promises the world’s most advanced processor, breakthrough performance, and security enhancements with Software in Silicon v2 for Oracle Cloud, Oracle Engineered Systems, and Servers. Furthermore, the new SPARC M8 line of servers and engineered systems extends the existing M7 portfolio and includes: SPARC T8-1 server, SPARC T8-2 server, SPARC T8-4 server, SPARC M8-8 server, and Oracle SuperCluster M8.

Oracle SPARC M7

Pictured above is Oracle SPARC M7, the previous generation SPARC. The new SPARC M8 systems deliver up to 7x better performance, security capabilities, and efficiency than Intel-based systems.  Seems like the remaining active enterprise system vendors, mainly IBM and Oracle, want to present their systems as beating Intel. Both companies, DancingDinosaur suspects, will discover that beating Intel by a few gigahertz or microseconds or nanoseconds won’t generate the desired stream of new customers ready to ditch the slower Intel systems they have used for, by now, decades.  Oracle and IBM will have to deliver something substantially more tangible and distinctive.

For the z14, it should be pervasive encryption, which reduces or eliminates data compliance audit burdens and the corresponding fear of costly data breaches. Don’t we all wish Equifax had encrypted its data—unless yours somehow is NOT among the 140 million or so compromised records. DancingDinosaur covered the Z launch in July. Not surprisingly, Oracle never mentioned the z14 or IBM in its M8 announcement or data sheet.

What Oracle did say was this: the Oracle SuperCluster M8 engineered systems and SPARC T8 and M8 servers are designed to integrate seamlessly with existing infrastructures and include fully integrated virtualization and management for private cloud. All existing commercial and custom applications will run on SPARC M8 systems unchanged with new levels of performance, security capabilities, and availability. The SPARC M8 processor with Software in Silicon v2 extends the industry’s first Silicon Secured Memory, which provides always-on hardware-based memory protection for advanced intrusion protection and end-to-end encryption, plus Data Analytics Accelerators (DAX) with open APIs for breakthrough performance and efficiency running database analytics and Java streams processing. The Oracle Cloud SPARC Dedicated Compute service will also be updated with the SPARC M8 processor.

It almost sounds like a weak parody of IBM’s July z14 announcement here. The following is part of what IBM wrote: Pervasively encrypts data, all the time at any scale. Addresses the global data breach epidemic; helps automate compliance for the EU General Data Protection Regulation, the Federal Reserve, and other emerging regulations. Encrypts data 18x faster than comparable x86 platforms, at 5 percent of the cost.

Not sure what DancingDinosaur was expecting Oracle to say. Maybe some recognition that there is another enterprise server out there making similar promises and claims. Certainly it could have benchmarked its own database against the z13 if not the z14. DancingDinosaur may be a mainframe bigot but is no true blue fan of IBM.

What Oracle did say seemed somewhat thin and x86-obsessed:

  • Database: Engineered to run Oracle Database faster than any other microprocessor, SPARC M8 delivers 2x faster OLTP performance per core than x86 and 1.4x faster than M7 microprocessors, as well as up to 7x faster database analytics than x86.
  • Java: SPARC M8 delivers 2x better Java performance than x86 and 1.3x better than M7 microprocessors. DAX v2 produces 8x more efficient Java streams processing, improving overall application performance.
  • In-Memory Analytics: The innovative new processor delivers 7x more queries per minute (QPM) per core than x86 for database analytics.

But one thing Oracle did say appears truly noteworthy for a computer vendor: Oracle’s long history of binary compatibility across processor generations continues with M8, providing an upgrade path for customers when they are ready. Oracle has also publicly committed to supporting Solaris until at least 2034. DancingDinosaur expects to retire in a few years. Hope to not be reading Oracle or IBM press releases then.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Moves Quantum Computing Toward Commercial Systems

September 20, 2017

IBM seems determined to advance quantum computing. Just this week IBM announced its researchers have developed a new approach to simulating molecules on a quantum computer that may one day help revolutionize chemistry and materials science. In this case, the researchers implemented a novel algorithm that is efficient with respect to the number of quantum operations required for the simulation. The work involved a 7-qubit processor.

7-qubit processor

In the diagram above IBM scientists successfully used six qubits on a purpose-built seven-qubit quantum device to address the molecular structure problem for beryllium hydride (BeH2) – the largest molecule simulated on a quantum computer to date.

Back in May IBM announced an even bigger quantum device: it prototyped the first commercial processor with 17 qubits, leveraging significant materials, device, and architecture improvements to make it the most powerful quantum processor IBM has created to date. This week’s announcement certainly didn’t surpass it in size. IBM engineered the 17-qubit system to be at least twice as powerful as what is available today to the public on the IBM Cloud, and it will be the basis for the first IBM Q early-access commercial systems.

It has become apparent to the scientists and researchers who try to work with complex mathematical problems and simulations that the most powerful conventional commercial computers are not up to the task. Even the z14 with its 10-core CPU and hundreds of additional processors dedicated to I/O cannot do the job.

As IBM puts it: Even today’s most powerful supercomputers cannot exactly simulate the interacting behavior of all the electrons contained in a simple chemical compound such as caffeine. The ability of quantum computers to analyze molecules and chemical reactions could help accelerate research and lead to the creation of novel materials, development of more personalized drugs, or discovery of more efficient and sustainable energy sources.

The interplay of atoms and molecules is responsible for all matter that surrounds us in the world. Now “we have the potential to use quantum computers to boost our knowledge of natural phenomena in the world,” said Dario Gil, vice president of AI research and IBM Q, IBM Research. “Over the next few years, we anticipate IBM Q systems’ capabilities to surpass what today’s conventional computers can do, and start becoming a tool for experts in areas such as chemistry, biology, healthcare and materials science.”

So commercial quantum systems are coming. Are you ready to bring a quantum system into your data center? Actually, you can try one today for free here or through GitHub, which offers a Python software development kit for writing quantum computing experiments, programs, and applications. Although DancingDinosaur will gladly stumble through conventional coding, quantum computing probably exceeds his frustration level even with a Python development kit.
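
For the curious, that Python SDK is IBM’s open source Qiskit, and a first experiment is surprisingly short. The sketch below builds and simulates a two-qubit entangled (Bell) state on the local simulator; exact import paths vary by Qiskit version.

```python
# A first quantum experiment with IBM's open source Qiskit SDK: build a
# two-qubit Bell state and sample it on the local simulator. Import paths
# vary somewhat across Qiskit versions; this follows the Aer-based layout.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)  # expect roughly half '00' and half '11'
```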

However, if your organization is involved in these industries—materials science, chemistry, and the like or is wrestling with a problem you cannot do on a conventional computer—it probably is worth a try, especially for free. You can try an easy demo card game that compares quantum computing with conventional computing.

But as reassuring as IBM makes quantum computing sound, don’t kid yourself; it is very complicated. Deploying even a small qubit machine is not going to be like buying your first PC. Quantum bits, reportedly, are very fragile and transitory. Labs keep them very cold just to stabilize the system and keep them from switching their states before they should. Just think how you’d feel about your PC if the bit states of 0 and 1 suddenly and inexplicably changed.

That’s not the only possible headache. You have only limited time to work with qubits, given their current volatility when not super-cooled. Also, work is still progressing on advancing the quantum frameworks and mapping out ecosystem enablement.

Even IBM researchers admit that some problems may not run better on quantum computers. And until you pass a certain threshold, such as quantum volume, your workload might not perform better on a quantum computer at all. The IBM quantum team suggests it will take until 2021 to consistently solve a problem with commercial relevance using quantum computing.

Until then, and even after, IBM is talking about a hybrid approach in which parts of a problem are solved with a quantum computer and the rest with a conventional system. So don’t plan on replacing your Z with a few dozen or even hundreds of qubits anytime soon.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Meet the new IBM LinuxONE Emperor II

September 15, 2017

Early this week IBM introduced the newest generation of the LinuxONE, the IBM LinuxONE Emperor II, built on the same technology as the IBM z14, which DancingDinosaur covered on July 19. The key feature of the new LinuxONE Emperor II is IBM Secure Service Container, presented as an exclusive LinuxONE technology representing a significant leap forward in data privacy and security capabilities. With the z14 the key capability was pervasive encryption. This time the Emperor II promises very high levels of security and data privacy assurance while rapidly addressing unpredictable data and transaction growth. Didn’t we just hear a story like this a few weeks ago?

IBM LinuxONE Emperor (not II)

Through the IBM Secure Service Container, for the first time data can be protected at the system level against internal threats from users with elevated credentials or hackers who obtain a user’s credentials, as well as against external threats. Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container to be ready for Secure Service Container deployment. The application can then be managed using the included Docker and Kubernetes tools, which make Secure Service Container environments easy to deploy and use.
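
To show how little the application itself has to change, here is a minimal sketch using the Python Docker SDK (docker-py): it simply builds and runs an ordinary container image. The image name and app directory are hypothetical, and actually deploying that image into a Secure Service Container partition is done with IBM’s appliance tooling, not by this code.

```python
# Minimal sketch of the "just put it in a Docker container" step, using the
# Python Docker SDK (docker-py). The image name and app directory are
# hypothetical; deploying the image into a Secure Service Container
# partition is handled by IBM's appliance tooling, which is not shown here.
import docker

client = docker.from_env()

# Build an ordinary image from an existing Dockerfile -- no code changes or
# proprietary dependencies are required.
image, _ = client.images.build(path="./my-app", tag="my-app:1.0")

# Run it locally exactly as you would anywhere else Docker runs.
container = client.containers.run("my-app:1.0", detach=True,
                                  ports={"8080/tcp": 8080})
print(container.status)
```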

The Emperor II, like the rest of the LinuxONE line, is being positioned as the premier Linux system for highly secured data serving. To that end, it promises:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers (SSC)
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

With the z14 you got this too, maybe worded slightly differently.

In terms of performance and scalability, IBM promises:

  • Industry-leading performance of Java workloads, up to 50% faster than Intel
  • Vertical scale to 170 cores, equivalent to hundreds of x86 cores
  • Simplification to make the most of your Linux skill base and speed time to value
  • SIMD to accelerate analytics workloads & decimal compute (critical to financial applications)
  • Pause-less garbage collection to enable vertical scaling while maintaining predictable performance

Like the z14, the Emperor II also lays a foundation for data serving and next gen apps, specifically:

  • Adds performance and security to new open source DBaaS deployments
  • Develops new blockchain applications based on the proven IBM Blockchain Platform—in terms of security, blockchain may prove more valuable than even secure containers or pervasive encryption
  • Support for data-in-memory applications and new workloads using 32 TB of memory—that’s enough to run production databases entirely in memory (of course, you’ll have to figure out if the increased performance, which should be significant, is worth the extra memory cost)
  • A build-your-cloud approach for providers wanting a secure, scalable, open source platform

If you haven’t figured it out yet, IBM sees itself in a titanic struggle with Intel’s x86 platform.  With the LinuxONE Emperor II IBM senses it can gain the upper hand with certain workloads. Specifically:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (that aren’t included in the core count) giving the platform the best I/O capacity and performance in the industry
  • Its shared memory, vertical scale architecture delivers a measurably better architecture for stateful workloads like databases and systems of record
  • The LinuxONE/z14 hardware designed to still give good response time at up to 100% utilization, which simplifies the solution and reduces the extra costs many data centers assume are necessary because they’re used to 50% utilization
  • The Emperor II can be ordered in a configuration designed and tested for earthquake resistance
  • The z-based LinuxONE infrastructure has survived fire and flood scenarios where all other server infrastructures have failed

That doesn’t mean, however, the Emperor II is a Linux no brainer, even for shops facing pressure around security compliance, never-fail mission critical performance, high capacity, and high performance. Change is hard and there remains a cultural mindset based on the lingering myth of the cheap PC of decades ago.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Syncsort Finds New Corporate Home and Friend

September 8, 2017

Centerbridge Partners, L.P., a private investment firm, completed the $1.26 billion acquisition of enterprise software providers Syncsort Incorporated and Vision Solutions, Inc. from affiliates of Clearlake Capital Group, L.P. Clearlake, which acquired Syncsort in 2015 and Vision in 2016, will retain a minority ownership stake in the combined company.

Syncsort is a provider of enterprise software and a player in Big Iron to Big Data solutions. DancingDinosaur has covered it here and here. According to the company, customers in more than 85 countries rely on Syncsort to move and transform mission-critical data and workloads. Vision Solutions provides business resilience tools addressing high availability, disaster recovery, migration, and data sharing for IBM Power Systems.

The company apparently hasn’t suffered from being passed between owners. Syncsort has been active in tech acquisitions for the past two years as it builds its data transformation footprint. Just a couple of weeks ago, it acquired Metron, a provider of cross-platform capacity management software and services. Metron’s signature athene solution delivers trend-based forecasting, capacity modeling, and planning capabilities that enable enterprises to optimize their data infrastructure to improve performance and control costs on premise or in the cloud.

This acquisition is the first since the announcement that Syncsort and Vision Solutions are combining, adding expertise and proven leadership in the IBM i and AIX Power Systems platforms and reinforcing the ‘Big Iron to Big Data’ focus. Syncsort has also long been an established player in the mainframe business. Big Iron to Big Data promises to be a fast-growing market segment comprising solutions that optimize traditional data systems and deliver mission-critical data from these systems to next-generation analytic environments using innovative Big Data technologies. Metron’s solutions and expertise are expected to contribute to the company’s data infrastructure optimization portfolio.

Syncsort has been on a roll since late 2016 when, backed by Clearlake, it acquired Trillium Software, a global provider of data quality solutions. The acquisition of Trillium was the largest in Syncsort’s history at the time and brought together data quality and data integration technology for enterprise environments. The combination of Syncsort and Trillium, according to the company, enables enterprises to harness all their valuable data assets for greater business insights, applying high-performance and scalable data movement, transformation, profiling, and quality across traditional data management technology stacks as well as Hadoop and cloud environments.

Specifically, Syncsort and Trillium both have a substantial number of large enterprise customers seeking to generate new insights by combining traditional corporate data with diverse information sources from mobile, online, social, and the Internet of Things. Syncsort expects these organizations to continue to rely heavily on next-generation analytic capabilities, creating a growing need for its best-in-class data integration and quality solutions to make their Big Data initiatives successful. Together, Syncsort and Trillium will continue to focus on providing customers with these capabilities for traditional environments, while leading the industry in delivering them for Hadoop and Spark too.

Earlier this year Syncsort integrated its own Big Data integration solution, DMX-h, with Cloudera Director, enabling organizations to easily deploy DMX-h along with Cloudera Enterprise on Amazon Web Services, Microsoft Azure, or Google Cloud. By deploying DMX-h with CDH, organizations can quickly pull data into new, ready-to-work clusters in the cloud—accelerating the time to capture cloud benefits, including cost savings and Data-as-a-Service (DaaS) delivery.

“As organizations liberate data from across the enterprise and deliver it into the cloud, they are looking for a self-service, elastic experience that’s easy to deploy and manage. This is a requirement for a variety of use cases – from data archiving to analytics that combine data originating in the cloud with on premise reference data,” said Tendü Yoğurtçu, Chief Technology Officer.

“By integrating DMX-h with Cloudera Director,” Yoğurtçu continued, “DMX-h is instantly available and ready to put enterprise data to work in newly activated cloud clusters.”

Syncsort DMX-h pulls enterprise data into Hadoop in the cloud and prepares that data for business workloads using native Hadoop frameworks, Apache Spark, or MapReduce, effectively enabling IT to achieve time-to-value goals and quickly deliver business insights.

It is always encouraging to see the mainframe eco-system continue to thrive. IBM’s own performance over the past few years has been anything but encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Promises Easy Fast Data Protection

September 1, 2017

Data protection used to be simple. You simply made a couple of copies of your data and stored them someplace safe. That hasn’t worked for years at most enterprises and certainly won’t work going forward. There are too many systems and too much data. Now you have to contend with virtual machines, NoSQL databases, cloud storage, and more. In the face of growing compliance mandates and a bevy of threats like ransomware, data protection has gotten much more complicated.

Last week IBM simplified it again by announcing IBM Spectrum Protect Plus. It promises to make data protection available in as little as one hour.

IBM achieves tape breakthrough

As it turned out, August proved to be a good month for IBM storage. In addition to introducing Spectrum Protect Plus, IBM and Sony researchers achieved a record 201 Gb/in2 (gigabits per square inch) in areal density. That translates into the potential to record up to about 330 terabytes (TB) of uncompressed data on a single tape cartridge. Don’t expect commercially available products with this density soon. But you will want it sooner than you may think as organizations face the need to collect, store, and protect massive amounts of data for a wide range of use cases, from surveillance images to analytics to cognitive to, eventually, quantum computing.

IBM Spectrum Protect Plus delivers data availability using snapshot technology for rapid backup, recovery and data management. Designed to be used by virtual machines (VM) and application administrators, it also provides data clone functionality to support and automate DevOps workflows. Unlike other data availability solutions, IBM Spectrum Protect Plus performs data protection and monitoring based on automated Service Level Agreements to ensure proper backup status and retention compliance, noted IBM.

The company has taken to referring to Spectrum Protect Plus as the future of data protection, recovery, and data reuse. IBM designed it to be fast, modern, lightweight, low cost, easy to use, and simple to deploy while delivering rapid time to value. As noted at the top, the company claims it can make effective data protection available in an hour without relying on highly trained storage experts. Spectrum Protect Plus delivers data protection, according to IBM, “anyone can manage,” adding that it installs in less than 15 minutes.

You get instant data and virtual machine recovery, which you grab from a snapshot. It is so slick, IBM managers say, that “when someone sends you a ransomware letter you can just laugh at them.” Only, of course, if you have been diligent in making backups. Don’t blame the Protect Plus tool, which is thoroughly automated behind the scenes. It was announced last week but won’t be available until the fourth quarter of this year.

Protect Plus also brings a handful of new goodies for different stakeholders, as IBM describes it:

  • CIOs get a single view of the backup and recovery status across the data portfolio and the elimination of silos of data backup and recovery.
  • Senior IT managers (VM and application admins) can rapidly self-serve their data availability without complexity. IBM Spectrum Protect Plus also provides an ability to integrate the VM and application backups into the business rules of the enterprise.
  • Senior Application LOB owners can experience data lifecycle management with near instantaneous recovery, copy management, and global search for fast data access and recovery

Specifically designed for virtual machine (VM) environments to support daily administration, the product rapidly deploys without agents. It also features a simple, role-based user interface (UI) with intuitive global search for fast recovery.

Data backup and recovery, always a pain in the neck, has gotten far more complex. For an enterprise data center facing stringent data protection and compliance obligations and juggling the backup of virtual and physical systems, probably across multiple clouds and multiple data centers, the challenges and risks have grown by orders of magnitude. You will need tools like Spectrum Protect Plus, especially the Plus part, which IBM insists is a completely new offering.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Blockchain Platform Aims for Immutable Accuracy

August 25, 2017

Earlier this week IBM announced a major blockchain collaboration among a group of leading companies across the global food supply chain. The goal is to reduce the number of people falling ill or even dying from eating contaminated food. IBM’s solution is its blockchain platform, which it believes is ideally suited to help address these challenges because it establishes a trusted environment that tracks all transactions in an accurate, consistent, immutable record.

Blockchain can improve food traceability

The food segment is just one of many industries IBM will target for its blockchain platform. It describes the platform as ideally suited to help address varied industry challenges because it establishes a trusted environment for all transactions. IBM claims it as the only fully integrated enterprise-ready blockchain platform designed to accelerate the development, governance, and operation of a multi-institution business network. Rival vendors, like Accenture, may disagree. In the case of the global food supply chain, all participants (growers, suppliers, processors, distributors, retailers, regulators, and consumers) can gain permissioned access to known and trusted information regarding the origin and state of food. In December 2016 DancingDinosaur reported on IBM and Walmart using blockchain for food safety.
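
The property everyone is buying here, an immutable shared record, is easiest to see with a toy hash chain. The sketch below is purely conceptual, not Hyperledger Fabric or the IBM platform: each block’s hash covers the previous block’s hash, so rewriting any earlier transaction is immediately detectable.

```python
# Toy illustration of blockchain's tamper evidence (conceptual only -- not
# Hyperledger Fabric or the IBM platform): each block's hash covers the
# previous block's hash, so rewriting an earlier transaction breaks the chain.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, transaction):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "tx": transaction}
    block["hash"] = block_hash({"prev_hash": prev, "tx": transaction})
    chain.append(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or \
           block["hash"] != block_hash({"prev_hash": prev, "tx": block["tx"]}):
            return False
        prev = block["hash"]
    return True

ledger = []
append(ledger, {"lot": "A123", "event": "harvested", "farm": "Green Acres"})
append(ledger, {"lot": "A123", "event": "shipped", "carrier": "FastFreight"})
print(verify(ledger))                       # True
ledger[0]["tx"]["farm"] = "Somewhere Else"  # attempt to rewrite history
print(verify(ledger))                       # False -- tampering is detected
```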

IBM’s blockchain platform is built around Hyperledger Composer, which is integrated with popular development environments and uses open developer tools and accepted business terms to generate blockchain code and smart contracts. It also includes sample industry use cases. Using IBM’s platform, developers can express standard business language in JavaScript, and the APIs help keep development work at the business level rather than being highly technical. This makes it possible for almost any programmer to be a blockchain developer. Additionally, a variety of IBM Developer Journeys for blockchain are available featuring free open source code, documentation, APIs, architecture diagrams, and one-click deployment Git repositories to fast-track building, according to IBM.

For governance and operation it provides activation tools for new networks, members, smart contracts, and transaction channels. It also includes a multi-party workflow tool with a member activities panel, integrated notifications, and secure signature collection for policy voting. In addition, a new class of democratic governance tools, designed to help improve productivity across the organizations, uses a voting process that collects signatures from members to govern member invitation, distribution of smart contracts, and the creation of transaction channels. By enabling the quick onboarding of participants, assigning roles, and managing access, organizations can begin transacting via the blockchain fast.

In operating the network, the IBM blockchain platform provides always-on high availability with seamless software and blockchain network updates, a hardened security stack with no privileged access, which blocks malware, and built-in blockchain monitoring for full network visibility. Woven throughout the platform is the Hyperledger Fabric. It also provides the highest-level commercially available tamper-resistant protection for encryption keys, FIPS 140-2 Level 4.

Along with its blockchain platform, IBM is advancing other blockchain supply chain initiatives by using the platform for an automated billing and invoicing system. Initial work to use blockchain for invoicing also is underway starting with Lenovo. This will provide an audit-ready solution with full traceability of billing and operational data, and help speed on-boarding time for new vendors and new contract requirements, according to IBM.

The platform leverages IBM’s work with more than 400 organizations. It includes insights gained as IBM has built blockchain networks across industries ranging from financial services, supply chain and logistics, retail, and government to healthcare.

Extensively tested and piloted, IBM’s new blockchain platform addresses a wide range of enterprise pain points, including both business and technical requirements around security, performance, collaboration, and privacy. It includes innovation developed through open source collaboration in the Hyperledger community, including the newest Hyperledger Fabric v1.0 framework and the Hyperledger Composer blockchain tool, both hosted by the Linux Foundation.

DancingDinosaur has previously noted that the z appears ideal for blockchain. DancingDinosaur based this on the z13’s scalability, security, and performance. The new z14, with its automated, pervasive encryption may be even better.  The Hyperledger Composer capabilities along with the sample use cases promise an easy, simple way to try blockchain among some suppliers and partners.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Cognitive Computing Results Here

August 18, 2017

IBM released the latest study on cognitive computing from its Institute of Business Value (IBV) and it brings a classic good news/bad news story line. The good news: professionals using cognitive computing are able to create and deliver the personalized, intuitive experiences customers demand. The bad news: cognitive computing could also be one of the most disruptive forces their organizations face. However you see the cognitive glass—half full or half empty—IBV calls cognitive computing “game-changing technology.”

IBM IBV infographic

Cognitive could be the answer to marketers’ and sellers’ prayers, or their nightmares. Still, it appears to the IBV researchers that Chief Marketing Officers (CMOs) and heads of sales are ready to take the gamble and make the cognitive leap. Nearly two-thirds of those surveyed believe their industries will be ready to adopt cognitive solutions by 2020.

The technology can quickly make sense of vast amounts of structured and unstructured data, including sounds and images, in ways similar to humans—by reasoning, learning, and interacting to improve accuracy over time. Companies identified as outperformers report cognitive is already operational at their organizations, with 73 percent already collecting and analyzing external market data.

Many marketing and sales executives, IBV researchers report, expect their organizations’ cognitive spend to increase within the next three years. Today, 63 percent estimate that cognitive accounts for 5 percent or less of their organizations’ IT budgets, including 18 percent who say it constitutes zero. Of those, 5 percent say their cognitive budget will still be zero in three years. By then, 21 percent expect it to grow to 5 – 10 percent. Almost a quarter of outperformers say it could account for more than 20 percent of their IT spend.

Respondents have high expectations that this technology will pay off. Nearly a third say their organizations would need a 10 – 15 percent return to justify their investment. More than half expect their organizations to recover their cognitive investment within 2-4 years. Clearly they expect the payoff from cognitive insights to come fast.

Today’s big driver, IBV researchers found, is customer satisfaction. Meeting or exceeding customer expectations is a common refrain among managers. But practically speaking, the researchers continued, many of those surveyed say they aren’t sure their organizations are currently set up to make a successful transition. The study, conducted in cooperation with Oxford Economics, is based on a global survey of 525 CMOs.

Traditional analytics long provided data for businesses to draw insights. Cognitive analytics goes further, providing predictive outcomes that turn insights into forward-directed recommendations, which, hopefully, impact real business decisions.

But the results are not guaranteed. The IBV researchers found that it’s important that organizations not simply focus on their expectations of better marketing and sales results. Rather, they also need to take into account the efficiencies and cost savings they could potentially gain with cognitive solutions, particularly for sales prospecting and management, as well as the media and marketing spend. Additionally, cognitive computing’s ability to help companies improve their customer experience should be included in any company’s ROI calculations.

Many eager adopters, however, report being hampered by a number of challenges, according to the IBV report. CMOs, for instance, complain they lack the technology needed to implement cognitive solutions. Additionally, they don’t believe they have enough of the required skills and expertise. Others suggest data governance and data sharing policies present a barrier while CMOs worry about security and privacy implications.

Still others cite a more elemental concern:  a lack of executive support for cognitive computing and worry their organizational culture may not be a good fit for a cognitive solution. At first glance, this seems surprising, given their apparent faith in cognitive benefits and the anticipation their industries will be adopting cognitive as the new normal.

The IBV findings, however, reveal executives’ mixed emotions about what the change cognitive represents for their companies. Many say they are feeling overwhelmed by the challenge of adopting yet another new technology and processes. Some marketing teams are in the midst of their own digital transformations. The adoption of cognitive computing, however, can be integrated into their current digital strategy and the tools they are using today, if they have some key capabilities in place. To some, that might be a big if.

To ensure the best results, IBV suggests:

  • Make room for cognitive solutions in your businesses’ digital reinvention strategy from the start
  • Enhance employees’ business skills, not just their data analytics skills
  • Make cognitive a golden opportunity for collaboration, innovation, and closer alignment within the C-suite
  • Start small if necessary but do start, preferably now

Cognitive can be applied in many ways in different areas of the business, potentially benefiting any of them. But until you try, you can’t tell. The real risk, notes IBV, is waiting too long on the sidelines while the competition forges ahead. That’s a story you’ve heard before.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

