Posts Tagged ‘mainframe’

Compuware Expedites DevOps on Z

July 13, 2018

Compuware continues its quarterly introduction of new capabilities for the mainframe, a process that has been going on for several years now. The latest advance, Topaz for Enterprise Data, promises to expedite the way DevOps teams access the data they need while reducing complexity, labor, and risk through extraction, masking, and visualization of mainframe data. The result: the ability to leverage all available data sources to deliver high-value apps and analytics fast.

Topaz for Enterprise Data expedites data access for DevOps

The days when mainframe shops could take a methodical and deliberate approach—painstakingly slow—to accessing enterprise data have long passed. Your DevOps teams need to dig the value out of that data and put it into the hands of managers and LOB teams fast, in hours or maybe just minutes, so they can jump on even the most fleeting opportunities.

Fast, streamlined access to high-value data has become an urgent concern as businesses seek competitive advantages in a digital economy while fulfilling increasingly stringent compliance requirements. Topaz for Enterprise Data enables developers, QA staff, operations teams, and data scientists at all skill and experience levels to ensure they have immediate, secure access to the data they need, when they need it, in any format required.

It starts with data masking, which in just the last few months has become a critical concern with the rollout of GDPR across the EU. GDPR grants considerable protections and options to the people whose data your systems have been collecting. Now you need to protect personally identifiable information (PII) and comply with regulatory mandates like GDPR and whatever similar regulations follow here.

Regs like these don’t apply just to your primary transaction data. You need data masking with all your data, especially when large, diverse datasets of high business value residing on the mainframe contain sensitive business or personal information.

This isn’t going to go away anytime soon, so large enterprises must start transferring responsibility for the stewardship of this data to the next generation of DevOps folks who will be stuck with it. You can bet somebody will surely step forward and say, “you have to change every instance of my data that contains this or that.” Even the most expensive lawyers will not be able to blunt such requests. Better to have the tools in place to respond to this quickly and easily.

The newest tool, according to Compuware, is Topaz for Enterprise Data. It will enable even a mainframe-inexperienced DevOps team to:

  • Readily understand relationships between data even when they lack direct familiarity with specific data types or applications, to ensure data integrity and resulting code quality.
  • Quickly generate data for testing, training, or business analytics purposes that properly and accurately represents actual production data.
  • Ensure that any sensitive business or personal data extracted from production is properly masked for privacy and compliance purposes, while preserving essential data relationships and characteristics (a simple illustration of this kind of masking follows this list).
  • Convert file types as required.
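To make the masking idea concrete, here is a minimal Python sketch of the sort of deterministic, format-preserving masking the third bullet describes. It is purely illustrative; Compuware has not published its masking algorithm, and the function below is not it.

    import hashlib

    def mask_digits(value: str, salt: str = "demo-salt") -> str:
        """Deterministically replace each digit with another digit.

        Keeps length and separators so downstream tests and cross-file
        relationships still line up, while hiding the real value.
        """
        digest = hashlib.sha256((salt + value).encode()).hexdigest()
        digits = [c for c in digest if c.isdigit()] or ["0"]
        out, i = [], 0
        for ch in value:
            if ch.isdigit():
                out.append(digits[i % len(digits)])
                i += 1
            else:
                out.append(ch)  # keep dashes, spaces, and letters as-is
        return "".join(out)

    # The same input always masks to the same output, so keys still join across files.
    print(mask_digits("123-45-6789"))
    print(mask_digits("123-45-6789"))  # identical masked value, same xxx-xx-xxxx shape

Because the masking is deterministic, records masked in one extract still match the same customer in another extract, which is what keeps test data and analytics usable.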

Topaz users can access all these capabilities from within Topaz’s familiar Eclipse development environment, eliminating the need to learn yet another new and complicated tool.

Those who experience it apparently like what they find. Noted Lynn Farley, Manager of Data Management at TCF Bank: “Testing with production-like obfuscated data helps us develop and deliver better quality applications, as well as remain compliant with data privacy requirements, and Topaz provides our developers with a way to implement data privacy rules to mask multiple data types across platforms and with consistent results.”

Rich Ptak, principal of IT analyst firm Ptak Associates, similarly observed: “Leveraging a modern interface for fast, simple access to data for testing and other purposes is critical to digital agility,” adding it “resolves the long-standing challenge of rapidly getting value from the reams of data in disparate sources and formats that are critical to DevOps and continuous improvement.”

“The wealth of data that should give large enterprises a major competitive advantage in the digital economy often instead becomes a hindrance due to the complexity of sourcing across platforms, databases, and formats,” said Chris O’Malley, CEO of Compuware. As DancingDinosaur sees it, by removing such obstacles Compuware reduces the friction between enterprise data and business advantage.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Hybrid Cloud to Streamline IBM Z

June 27, 2018

2020 is the year, according to IDC,  when combined IT infrastructure spending on private and public clouds will eclipse spending on traditional data centers. The researcher predicts the public cloud will account for 31.68 percent of IT infrastructure spending in 2020, while private clouds will take a 19.82 percent slice of the spending pie, totaling more than half (51.5 percent) of all infrastructure spending for the first time, with the rest going to traditional data centers.

Source: courtesy of IBM

There is no going back. By 2021 IDC expects the balance to continue tilting further toward the cloud, with combined public and private cloud dollars making up 53.15 percent of infrastructure spending. Enterprise spending on cloud, according to IDC, will grow to over $530 billion as over 90 percent of enterprises use a mix of multiple cloud services and platforms, both on and off premises.

Technology customers want choices. They want to choose their access device, interface, deployment options, cost and even their speed of change. Luckily, today’s hybrid age enables choices. Hybrid clouds and multi-cloud IT offer the most efficient way of delivering the widest range of customer choices.

For Z shops, this shouldn’t come as a complete surprise. IBM has been preaching the hybrid gospel for years, at least since x86 machines began making significant inroads into its platform business. The basic message has always been the same: Center the core of your business on the mainframe and then build around it— using x86 if you must but now try LinuxONE and hybrid clouds, both public and on-premises.

For many organizations a multi-cloud strategy using two or more different clouds, public or on-premises, offers the fastest and most efficient way of delivering the maximum in choice, regardless of your particular strategy. For example, one cloud might serve as a compute cloud while another serves as a storage cloud. Or, an organization might use different clouds—a cloud for finance, another for R&D, and yet another for DevOps.

The reasoning behind a multi-cloud strategy can also vary. Reasons can range from risk mitigation, to the need for specialized functionality, to cost management, analytics, security, flexible access, and more.

Another reason for a hybrid cloud strategy, which should resonate with DancingDinosaur readers, is modernizing legacy systems. According to Gartner, by 2020, every dollar invested in digital business innovation will require enterprises to spend at least three times that to continuously modernize the legacy application portfolio. In the past, such legacy application portfolios have often been viewed as a problem subjected to large-scale rip-and-replace efforts in desperate, often unsuccessful attempts to salvage them.

With the growth of hybrid clouds, data center managers instead can manage their legacy portfolio as an asset by mixing and matching capabilities from various cloud offerings to execute business-driven modernization. This will typically include microservices, containers, and APIs to leverage maximum value from the legacy apps, which will no longer be an albatross but a valuable asset.

While the advent of multi-clouds or hybrid clouds may appear to complicate an already muddled situation, they actually provide more options and choices as organizations seek the best solution for their needs at their price and terms.

With the Z this may be easier done than it initially sounds. “Companies have lots of records on Z, and the way to get to these records is through APIs, particularly REST APIs,” explains Juliet Candee, IBM Systems Business Continuity Architecture. Start with the IBM Z Hybrid Cloud Architecture. Then, begin assembling catalogs of APIs and leverage z/OS Connect to access popular IBM middleware like CICS. By using z/OS Connect and APIs through microservices, you can break monolithic systems into smaller, more composable and flexible pieces that contain business functions.
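For a sense of what consuming such an API looks like from the distributed side, here is a rough sketch. The host, path, credentials, and JSON shape are hypothetical placeholders, not a documented z/OS Connect interface; the point is simply that a CICS transaction ends up looking like any other REST call.

    import requests

    # Hypothetical z/OS Connect endpoint fronting a CICS inquiry program.
    # Host, path, credentials, and response fields are placeholders.
    BASE_URL = "https://zosconnect.example.com:9443"

    def get_account(account_id: str) -> dict:
        resp = requests.get(
            f"{BASE_URL}/accounts/{account_id}",
            auth=("api_user", "api_password"),  # placeholder credentials
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # JSON mapped from the CICS program's output

    if __name__ == "__main__":
        print(get_account("0000123456"))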

Don’t forget LinuxONE, another Z but optimized for Linux and available at a lower cost. With the LinuxONE Rockhopper II, the latest slimmed down model, you can run 240 concurrent MongoDB databases executing a total of 58 billion database transactions per day on a single server. Accelerate delivery of your new applications through containers and cloud-native development tools, with up to 330,000 Docker containers on a single Rockhopper II server. Similarly, lower TCO and achieve a faster ROI with up to 65 percent cost savings over x86. And the new Rockhopper II’s industry-standard 19-inch rack uses 40 percent less space than the previous Rockhopper while delivering up to 60 percent more Linux capacity.

This results in what Candee describes as a new style of building IT that involves much smaller components, which are easier to monitor and debug. Then, connect it all to IBM Cloud on Z using secure Linux containers. This could be a hybrid cloud combining IBM Cloud Private and an assortment of public clouds along with secure zLinux containers as desired.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Please note: DancingDinosaur will be away for the first 2 weeks of July. The next piece should appear the week of July 16 unless the weather is unusually bad.

IBM Introduces a Reference Architecture for On-Premise AI

June 22, 2018

This week IBM announced an AI infrastructure Reference Architecture for on-premises AI deployments. The architecture promises to address the challenges organizations face experimenting with AI PoCs, growing into multi-tenant production systems, and then expanding to enterprise scale while integrating into an organization’s existing IT infrastructure.

The reference architecture includes, according to IBM, a set of integrated software tools built on optimized, accelerated hardware to enable organizations to jump-start AI and deep learning projects, speed time to model accuracy, and provide enterprise-grade security, interoperability, and support. IBM’s graphic above should give you the general picture.

Specifically, IBM’s AI reference architecture should support iterative, multi-stage, data-driven processes or workflows that entail specialized knowledge, skills, and, usually, a new compute and storage infrastructure. Still, these projects have many attributes that are familiar to traditional CIOs and IT departments.

The first of these is that the results are only as good as the data going into them, and model development depends on having a lot of data in the format expected by the deep learning framework. Surprised? You have been hearing this for decades as GIGO (garbage in, garbage out). The AI process also is iterative: repeatedly looping through data sets and tunings to develop more accurate models, then comparing new data in the model to the original business or technical requirements to refine the approach. In this sense, the AI reference model is no different from IT 101, an intro course for wannabe IT folks.
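For the uninitiated, that iterate-and-refine loop looks something like the toy sketch below: try a set of tunings, score each against held-out data, keep the best. This is a generic scikit-learn illustration on a sample dataset, not anything specific to IBM’s reference architecture.

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Illustrative only: loop over candidate settings, score each against the
    # data, and keep the most accurate model -- the basic iterate-and-refine
    # cycle described above, on a toy dataset.
    X, y = load_breast_cancer(return_X_y=True)

    best_score, best_c = 0.0, None
    for c in (0.01, 0.1, 1.0, 10.0):  # candidate "tunings"
        model = LogisticRegression(C=c, max_iter=5000)
        score = cross_val_score(model, X, y, cv=5).mean()  # score on held-out folds
        if score > best_score:
            best_score, best_c = score, c

    print(f"best C={best_c}, cross-validated accuracy={best_score:.3f}")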

But AI doesn’t stay simplistic for long. As the reference architecture puts it, AI is a sophisticated, complex process that requires specialized software and infrastructure. That’s where IBM’s PowerAI Platform comes in. Most organizations start with small pilot projects bound to a few systems and data sets but grow from there.

As projects grow beyond the first test systems, however, it is time to bulk up an appropriate storage and networking infrastructure. This will allow it to sustain growth and eventually support a larger organization.

The trickiest part of AI and the part that takes inspired genius to conceive, test, and train is the model. The accuracy and quality of a trained AI model are directly affected by the quality and quantity of data used for training. The data scientist needs to understand the problem they are trying to solve and then find the data needed to build a model that solves the problem.

Data for AI falls into a few broad sets: the data used to train and test the models, the data analyzed by the models, and archived data that may be reused. This data can come from many different sources such as traditional organizational data from ERP systems, databases, data lakes, sensors, collaborators and partners, public data, mobile apps, social media, and legacy data. It may be structured or unstructured in many formats such as file, block, object, Hadoop Distributed File System (HDFS), or something else.

Many AI projects begin as a big data problem. Regardless of how it starts, a large volume of data is needed, and it inevitably needs preparation, transformation, and manipulation. But it doesn’t stop there.

AI models require the training data to be in a specific format; each model has its own and usually different format. Invariably the initial data is nowhere near those formats. Preparing the data is often one of the largest organizational challenges, not only in complexity but also in the amount of time it takes to transform the data into a format that can be analyzed. Many data scientists, notes IBM, claim that over 80% of their time is spent in this phase and only 20% on the actual process of data science. Data transformation and preparation is typically a highly manual, serial set of steps: identifying and connecting to data sources, extracting to a staging server, tagging the data, using tools and scripts to manipulate the data. Hadoop is often a significant source of this raw data, and Spark typically provides the analytics and transformation engines used along with advanced AI data matching and traditional SQL scripts.
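A sketch of what that Spark-based preparation step can look like appears below. The paths and column names are invented for illustration; the pattern of read, clean, reshape, and write out a model-ready format is the part that matters.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative sketch of the preparation step: pull raw records (here from
    # HDFS), clean and reshape them, and write out a format a training framework
    # can consume. Paths and column names are hypothetical.
    spark = SparkSession.builder.appName("ai-data-prep").getOrCreate()

    raw = spark.read.json("hdfs:///raw/transactions/")  # raw, messy source data

    prepared = (
        raw.dropna(subset=["customer_id", "amount"])          # drop unusable rows
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("event_date", F.to_date("timestamp"))
           .groupBy("customer_id", "event_date")
           .agg(F.sum("amount").alias("daily_spend"))         # feature for the model
    )

    prepared.write.mode("overwrite").parquet("hdfs:///prepared/daily_spend/")
    spark.stop()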

There are two other considerations in this phase: 1) data storage and access, and 2) the speed of execution. For this—don’t be shocked—IBM recommends Spectrum Scale to provide multi-protocol support with a native HDFS connector, which can centralize and analyze data in place rather than wasting time copying and moving data. But you may have your preferred platform.

IBM’s reference architecture provides a place to start. A skilled IT group will eventually tweak IBM’s reference architecture, making it their own.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Expands and Enhances its Cloud Offerings

June 15, 2018

IBM announced 18 new availability zones in North America, Europe, and Asia Pacific to bolster its IBM Cloud business and try to keep pace with AWS, the public cloud leader, and Microsoft. The new availability zones are located in Europe (Germany and UK), Asia-Pacific (Tokyo and Sydney), and North America (Washington, DC and Dallas).

IBM cloud availability zone, Dallas

In addition, organizations will be able to deploy multi-zone Kubernetes clusters across the availability zones via the IBM Cloud Kubernetes Service. This will simplify how they deploy and manage containerized applications and add further consistency to their cloud experience. Furthermore, deploying multi-zone clusters will have minimal impact on performance, about 2 ms latency between availability zones.
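For the curious, here is a minimal sketch, using the kubernetes Python client, of checking which availability zone each worker node in a multi-zone cluster landed in. The zone labels shown are the common well-known labels; actual label names vary by provider and Kubernetes version.

    from kubernetes import client, config

    # Minimal sketch: list the worker nodes in a cluster and report which
    # availability zone each one landed in. Label names vary by provider.
    ZONE_LABELS = ("topology.kubernetes.io/zone",
                   "failure-domain.beta.kubernetes.io/zone")

    config.load_kube_config()  # uses your current kubeconfig context
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        zone = next((labels[l] for l in ZONE_LABELS if l in labels), "unknown")
        print(f"{node.metadata.name}: zone={zone}")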

An availability zone, according to IBM, is an isolated instance of a cloud inside a data center region. Each zone brings independent power, cooling, and networking to strengthen fault tolerance. While IBM Cloud already operates in nearly 60 locations, the new zones add even more capacity and capability in these key centers. This global cloud footprint becomes especially critical as clients look to gain greater control of their data in the face of tightening data regulations, such as the European Union’s new General Data Protection Regulation (GDPR). See DancingDinosaur June 1, IBM preps z world for GDPR.

In its Q1 earnings IBM reported cloud revenue of $17.7bn over the past year, up 22 percent over the previous year, but that includes two quarters of outstanding Z revenue that is unlikely to be sustained,  at least until the next Z comes out, which is at least a few quarters away.  AWS meanwhile reported quarterly revenues up 49 percent to $5.4 billion, while Microsoft recently reported 93 percent growth for Azure revenues.

That leaves IBM trying to catch up the old-fashioned way: by adding new cloud capabilities, enhancing existing ones, and attracting more clients to its cloud capabilities however they may be delivered. For example, IBM announced it is the first cloud provider to let developers run managed Kubernetes containers directly on bare metal servers with direct access to GPUs to improve the performance of machine-learning applications, which is critical to any AI effort. Along the same lines, IBM will extend its IBM Cloud Private and IBM Cloud Private for Data and middleware to Red Hat’s OpenShift Container Platform and Certified Containers. Red Hat already is a leading provider of enterprise Linux to Z shops.

IBM has also expanded its cloud offerings to support the widest range of platforms. Not just Z, LinuxONE, and Power9 for Watson, but also x86 and a variety of non-IBM architectures and platforms. Similarly, notes IBM, users have gotten accustomed to accessing corporate databases wherever they reside, but proximity to cloud data centers still remains important. Distance to data centers can have an impact on network performance, resulting in slow uploads or downloads.

Contrary to simplifying things, the propagation of more and different types of clouds and cloud strategies complicates an organization’s cloud approach. Already today, companies are managing complex, hybrid public-private cloud environments. At the same time, eighty percent of the world’s data is sitting on private servers. It just is not practical, or even permissible in some cases, to move all the data to the public cloud. Other organizations run very traditional workloads that they’re looking to modernize over time as they acquire new cloud-native skills. The new IBM cloud centers can host data in multiple formats and databases including DB2, SQLBase, PostgreSQL, or NoSQL, all exposed as cloud services, if desired.

The IBM cloud centers, the company continues, also promise common logging and services between the on-prem environment and IBM’s public cloud environment. In fact, IBM will make all its cloud services, including the Watson AI service, consistent across all its availability zones, and offer multi-cluster support, in effect enabling the ability to run workloads and do backups across availability zones.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Continues Quantum Push

June 8, 2018

IBM continued building out its Q Network ecosystem in May with the announcement of North Carolina State University as the first university-based IBM Q Hub in North America. As a hub, NC State will focus on accelerating industry collaborations, learning, skills development, and the implementation of quantum computing.

Scientists inside an open dilution fridge

NC State will work directly with IBM to advance quantum computing and industry collaborations, as part of the IBM Q Network’s growing quantum computing ecosystem. The school is the latest Q Network member. The network consists of individuals and organizations, including scientists, engineers, and business leaders, along with forward thinking companies, academic institutions, and national research labs enabled by IBM Q. Its mission: advancing quantum computing and launching the first commercial applications.

This past November IBM announced a 50-qubit system. Shortly after, Google announced Bristlecone, a 72-qubit processor, topping IBM for now. However, qubit count may not be the most important metric to focus on.

Stability rather than the number of qubits should be the most important metric. The big challenge today revolves around the instability of qubits. To keep qubit machines stable enough, the systems need to keep their processors extremely cold (Kelvin levels of cold) and protect them from external shocks. This is not something you want to build into a laptop or even a desktop. Instability leads to inaccuracy, which defeats the whole purpose. Even accidental sounds can cause the computer to make mistakes. For minimally acceptable error rates, quantum systems need an error rate of less than 0.5 percent for every two qubits. To drop the error rate for any qubit processor, engineers must figure out how software, control electronics, and the processor itself can work alongside one another without causing errors.

50 qubits currently is considered the minimum number for serious business work. IBM’s November announcement, however, was quick to point out that “does not mean quantum computing is ready for common use.” The system IBM developed remains extremely finicky and challenging to use, as are those being built by others. In its 50-qubit system, the quantum state is preserved for 90 microseconds—a record length for the industry but still an extremely short period of time.

Nonetheless, 50 qubits have emerged as the minimum number for a (relatively) stable system to perform practical quantum computing. According to IBM, a 50-qubit machine can do things that are extremely difficult to even simulate with the fastest conventional system.

Today, IBM offers the public IBM Q Experience, which provides access to 5- and 16-qubit systems, and the open quantum software development kit, QISKit, perhaps the first quantum SDK. To date, more than 80,000 users of the IBM Q Experience have run more than 4 million experiments and generated more than 65 third-party research articles.
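For a flavor of what those experiments look like, the short QISKit sketch below builds a two-qubit Bell-state circuit and samples it on the local simulator. API details vary between QISKit releases; this follows the classic QuantumCircuit/execute style.

    from qiskit import QuantumCircuit, execute, Aer

    # Build a two-qubit Bell-state circuit and sample it on the local simulator.
    circuit = QuantumCircuit(2, 2)
    circuit.h(0)               # put qubit 0 into superposition
    circuit.cx(0, 1)           # entangle qubit 1 with qubit 0
    circuit.measure([0, 1], [0, 1])

    backend = Aer.get_backend("qasm_simulator")
    counts = execute(circuit, backend, shots=1024).result().get_counts()
    print(counts)              # expect roughly half '00' and half '11'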

Still, don’t expect to pop a couple of quantum systems into your data center. For the immediate future, the way to access and run qubit systems is through the cloud. IBM has put qubit systems in the cloud, where they are available to participants in its Q Network and Q Experience.

IBM has also put some of its conventional systems, like the Z, in the cloud. This raises some interesting possibilities. If IBM has both quantum and conventional systems in the cloud, can the results of one be accessed or somehow shared with the other? Hmm. DancingDinosaur posed that question to IBM managers earlier this week at a meeting in North Carolina (NC State, are you listening?).

The IBMers acknowledged the possibility although in what form and what timeframe wasn’t even at the point of being discussed. Quantum is a topic DancingDinosaur expects to revisit regularly in the coming months or even years. Stay tuned.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Preps Z World for GDPR

June 1, 2018

Remember Y2K? That was when calendars rolled over from 1999 to 2000. It was hyped as an event that would screw up computers worldwide. Sorry, planes did not fall out of the sky overnight (or at all), elevators didn’t plummet to the basement, and hospitals and banks did not cease functioning. DancingDinosaur did OK writing white papers on preparing for Y2K. Maybe nothing bad happened because companies read papers like those and worked on changing their date fields.

Starting May 25, 2018, GDPR became the new Y2K. GDPR, the EC’s (or EU’s) General Data Protection Regulation, an overhaul of existing EC data protection rules, promises to strengthen and unify those laws for EC citizens and for organizations anywhere collecting and exchanging data involving those citizens. That is probably most of the readers of DancingDinosaur. GDPR went into effect at the end of May and generated a firestorm of trade and business press coverage, but nothing near what Y2K did. The primary GDPR objectives are to give citizens control over their personal data and simplify the regulatory environment for international business.

According to Bob Yelland, author of How it Works: GDPR (a Little Bee Book, pictured above), 50% of global companies say they will struggle to meet the rules set out by Europe unless they make significant changes to how they operate, and this may lead many companies to appoint a Data Protection Officer, which the rules recommend. Doesn’t it feel a little like Y2K again?

The Economist in April wrote: “After years of deliberation on how best to protect personal data, the EC is imposing a set of tough rules. These are designed to improve how data are stored and used by giving more control to individuals over their information and by obliging companies to handle what data they have more carefully. “

As you would expect, IBM created a GDPR framework with five phases to help organizations achieve readiness: Assess, Design, Transform, Operate, and Conform. The goal of the framework is to help organizations manage security and privacy effectively in order to reduce risks and therefore avoid incidents.

DancingDinosaur is not an expert on GDPR in any sense, but from reading GDPR documents, the Z with its pervasive encryption and automated secure key management should eliminate many concerns. The rest probably can be handled by following good Z data center policy and practices.

There is only one area of GDPR, however, that may be foreign to North American organizations—the part about respecting and protecting the private data of individuals.

As The Economist wrote: GDPR obliges organizations to create an inventory of the personal data they hold. With digital storage becoming ever cheaper, companies often keep hundreds of databases, many of which are long forgotten. To comply with the new regulation, firms have to think harder about data hygiene. This is something North American companies probably have not thought enough about.

IBM recommends you start by assessing your current data privacy situation under all of the GDPR provisions. In particular, discover where protected information is located in your enterprise. Under GDPR, individuals have rights to consent to access, correct, delete, and transfer personal data. This will be new to most North American data centers, even the best managed Z data centers.
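As a crude illustration of that discovery step, the sketch below scans a directory of text extracts for patterns that look like emails or card/ID numbers. Real data-discovery tooling goes far beyond regexes, and the patterns and path here are invented for illustration.

    import re
    from pathlib import Path

    # Crude illustration of "discover where protected information lives":
    # scan text files for strings that look like emails or card/ID numbers.
    # Real discovery tooling goes far beyond simple patterns like these.
    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "card_or_id_number": re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),
    }

    def scan(root: str) -> None:
        for path in Path(root).rglob("*.txt"):
            text = path.read_text(errors="ignore")
            for label, pattern in PATTERNS.items():
                hits = pattern.findall(text)
                if hits:
                    print(f"{path}: {len(hits)} possible {label} value(s)")

    if __name__ == "__main__":
        scan("/data/exports")  # hypothetical directory of extracts to inventory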

Then, IBM advises, assess the current state of your security practices, identify gaps, and design security controls to plug those gaps. In the process find and prioritize security vulnerabilities, as well as any personal data assets and affected systems. Again, you will want to design appropriate controls. If this starts sounding a little too complicated just turn it over to IBM or any of the handful of other vendors who are racing GDPR readiness services into the market. IBM offers Data Privacy Consulting Services along with a GDPR readiness assessment.

Of course, you can just outsource it to IBM or others. IBM also offers its GDPR framework with five phases. The goal of the framework is to help organizations subject to GDPR manage security and privacy, reduce risks, and avoid problems.

GDPR is not going to be fun, especially the obligation to comply with each individual’s rights regarding their data. DancingDinosaur suspects it could even get downright ugly.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Eager to Move Blockchain to Next Level

May 24, 2018

Blockchain, as IBM sees it, has moved past the days of proof of concept and is now focused on platform industrialization and best practices. As for industries, financial systems remain its primary target although areas like digital identity, provenance, and asset management are gaining interest fast. To that end the company is offering the IBM Blockchain Platform on a free trial basis. The Platform promises to make any developer a blockchain developer fast. According to IBM, hundreds of companies on dozens of blockchain networks are solving business challenges across varied industries, not just financial services.

Although IBM has been a blockchain pioneer, especially in the enterprise segment, that segment is getting more crowded. Specifically, the Enterprise Ethereum Alliance (EEA), which lists over 500 members, recently made the release of its Enterprise Ethereum Architecture Stack available to the public. According to EEA, the stack defines the building blocks needed to drive the Web 3.0 era of decentralized, connective intelligence that will work anywhere and is capable of facilitating smart contracts without intermediaries. The stack, available as a free public document to download here, incorporates components developed by the EEA.

IBM is not saying much about EEA, preferring instead to focus on Hyperledger. This much, however, an IBMer was willing to say: Our main POV is that Hyperledger is the only blockchain platform designed for enterprises from the ground up. Ethereum is better suited for cryptocurrencies and ICOs. But we know they are working to move into the enterprise space. Interestingly, the 500 companies listed as EEA members include many of IBM’s consulting and vendor peers but not IBM.

The IBMer continued: we have had many clients start with Ethereum and discover that it does not meet their business needs … and then they move to Hyperledger, which as a permissioned blockchain network does meet enterprise business needs. Permissioned means you have to be approved to join the network, presumably making it more secure.

IBM clearly has locked onto the idea of a permissioned blockchain network as important, at least among enterprise customers. The IBMer continued: we state that you can have a public permissioned blockchain network—and you’ll start to see more of these around payments.

The IBMer noted: it’s unclear what Ethereum’s permissioning method is; what their governance model is.  There is lack of agreement around new types of consensus.

In case you have missed the blockchain buzz of the past few years, blockchain is a shared, immutable ledger for recording the history of transactions. Immutable means that once a transaction has been accepted by the other nodes on the blockchain it cannot be removed or deleted. It sits there as a permanent record for the life of the blockchain. Of course, corrections can be made, but they are added as another immutable record. As such, depending on the number of nodes in the blockchain, you can end up with a long chain of immutable records, sort of an auditor’s dream or nightmare.
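For readers who want to see why tampering is so easy to detect, here is a toy hash-chained ledger in Python. It illustrates the immutability idea only; it is not Hyperledger’s implementation.

    import hashlib
    import json
    import time

    # Toy illustration of why a blockchain record is effectively immutable: each
    # block's hash covers the previous block's hash, so altering any past record
    # breaks every link after it. This is not Hyperledger's implementation.
    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(chain: list, transaction: dict) -> None:
        prev = chain[-1]["hash"] if chain else "0" * 64
        block = {"ts": time.time(), "tx": transaction, "prev_hash": prev}
        block["hash"] = block_hash(block)
        chain.append(block)

    def verify(chain: list) -> bool:
        for i, block in enumerate(chain):
            expected_prev = chain[i - 1]["hash"] if i else "0" * 64
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["prev_hash"] != expected_prev or block["hash"] != block_hash(body):
                return False
        return True

    ledger = []
    append(ledger, {"order": "1001", "status": "shipped"})
    append(ledger, {"order": "1001", "status": "delivered"})
    print(verify(ledger))                   # True
    ledger[0]["tx"]["status"] = "returned"  # tamper with history...
    print(verify(ledger))                   # False: the chain exposes the change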

A business blockchain, such as IBM Blockchain and the Linux Foundation’s Hyperledger Project, provides a permissioned network with known identities. And unlike Bitcoin, there is no need for cryptocurrency exchange. Where any payments are required, you make them as you would in any transaction.  Except with blockchain the transaction exists forever as an immutable record. From initial order to delivery acceptance to payment it all exists as a permanent record. Warning: sleazy business partners should avoid blockchain transactions since you won’t be able to weasel out so easily.

To make blockchain easy, IBM is developing what it describes as enterprise-ready solutions. For example, to address the provenance of diamonds and avoid buying what are called blood diamonds, IBM has backed TrustChain, which tracks six styles of diamond and gold engagement rings on a blockchain network, enabling dealers to know who handled the diamond from the mine to processor to the buyer’s finger.

Similarly, IBM’s Food Trust blockchain includes Dole, Driscoll’s, Golden State Foods, Kroger, McCormick and Company, McLane Company, Nestle, Tyson Foods, Unilever, and Walmart to address food safety. So the next time you eat something and get a stomach ache, just turn to Food Trust to track down the culprit.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

High Cost of Ignoring Z’s Pervasive Encryption

May 17, 2018

That cost was spelled out at IBM’s Think conference this past spring. David Bruce, who leads IBM’s strategies for security on IBM Z and LinuxONE, writes that data breaches are expensive, costing $3.6 million on average, and that hoping to avoid one by doing business as usual is a bad bet. Bruce reports breaches are increasingly likely: an organization has a 28 percent chance of being breached in the next 24 months. You can find Bruce’s comments on security and pervasive encryption here.

9 million data records were compromised in 2015

Were any of those 9 million records from your organization? Did you end up on the front page of the newspaper? To stay out of the data breach headlines, organizations require security solutions that protect enterprise and customer data at minimal cost and effort, Bruce observes.

Encryption is the preferred solution, but it is costly, cumbersome, labor-intensive, and hit-or-miss. It is hit-or-miss because the overhead involved forces organizations to choose what to encrypt and what to skip. You have to painstakingly classify the data in terms of risk, which takes time and only adds to the costs. Outside of critical revenue transactions or key intellectual property—no brainers—you will invariably choose wrong and miss something you will regret when it shows up on the front page of the New York Times.

Adding to the cost is the compliance runaround. Auditors are scheduled to visit or maybe they aren’t even scheduled and just drop in; you now have to drop whatever your staff was hoping to do and gather the necessary documentation to prove your data is safe and secure.  Do you really need this? Life is too short as it is.

You really want to put an end to the entire security compliance runaround and all the headaches it entails. But more than that, you want protected, secure data; all data, all the time.  When someone from a ransomware operation calls asking for hundreds or thousands of dollars to get your data back you can laugh and hang up the phone. That’s what Bruce means when he talks about pervasive encryption. All your data is safely encrypted with its keys protected from the moment it is created until the moment it is destroyed by you. And you don’t have to lift a finger; the Z does it all.

That embarrassing news item about a data breach; it won’t happen to you either. Most importantly of all, customers will never see it and get upset.

In fact, at Think, Forrester discussed the customer-obsessed approach that leading organizations are adopting to spur growth. To obsess over customers, explained Bruce, means to take great care in protecting the customer’s sensitive data, which provides the cornerstone of Forrester’s customer-obsessed zero trust security framework. The framework includes, among other security elements, encryption of all data across the enterprise. By enabling the Z’s built-in pervasive encryption and automatic key protection, you can ignore the rest of Forrester’s framework.

Pervasive encryption, unique to Z, addresses the security challenges while helping you thrive in this age of the customer. At Think, Michael Jordan, IBM Distinguished Engineer for IBM Z Security, detailed how pervasive encryption represents a paradigm shift in security, reported Bruce. Previously, selective field-level encryption was the only feasible way to secure data, but it was time-, cost-, and resource-intensive – and it left large portions of data unsecured.

Pervasive encryption, however, offers a solution capable of encrypting data in bulk, making it possible and practical to encrypt all data associated with an application, database, and cloud service – whether on premises or in the cloud, at-rest or in-flight. This approach also simplifies compliance by eliminating the need to demonstrate compliance at the field level. Multiple layers of encryption – from disk and tape up through applications – provide the strongest possible defense against security breaches. The high levels of security enabled by pervasive encryption help you promote customer confidence by protecting their data and privacy.
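A small illustration of the difference, using the Python cryptography package rather than anything Z-specific: field-level encryption forces you to guess which fields matter, while encrypting the whole record leaves nothing exposed by a wrong guess. This is a conceptual sketch, not the Z’s hardware-based implementation.

    import json
    from cryptography.fernet import Fernet

    # Illustrative contrast only -- not the Z's hardware-based implementation.
    key = Fernet.generate_key()  # in practice keys live in protected hardware
    f = Fernet(key)

    record = {"account": "0000123456", "name": "Jane Doe", "balance": 1523.88}

    # Selective field-level approach: only the fields someone classified as sensitive.
    field_level = dict(record)
    field_level["account"] = f.encrypt(record["account"].encode()).decode()

    # Bulk approach: encrypt the record as a whole.
    bulk = f.encrypt(json.dumps(record).encode())

    print(field_level)                  # 'name' and 'balance' still readable
    print(json.loads(f.decrypt(bulk)))  # full record recovered only with the key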

If you have a Z and have not enabled pervasive encryption, you are putting your customers and your organization at risk. Am curious, please drop me a note why.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware Tackles Mainframe Workforce Attrition and Batch Processing

May 4, 2018

While IBM works furiously to deliver quantum computing and expand AI and blockchain into just about everything, many DancingDinosaur readers are still wrestling with traditional headaches: boosting the quality and efficiency of mainframe operations and optimizing the most traditional mainframe activity there is, batch processing. It would be nice if quantum computing could handle multiple batch operations simultaneously, but that’s not high on IBM’s list of quantum priorities.

So Compuware is stepping up, as it has been doing quarterly, by delivering new systems to expedite and facilitate conventional mainframe processes. Its zAdviser promises actionable analytic insight to continuously improve quality, velocity, and efficiency on the mainframe, while its ThruPut Manager enables next-gen IT staff to optimize mainframe batch execution through new, visually intuitive workload scheduling.

zAdviser captures data about developers’ behaviors

zAdviser uses machine learning to continuously measure and improve an organization’s mainframe DevOps processes and development outcomes. Based on key performance indicators (KPIs), zAdviser measures application quality, as well as development speed and the efficiency of a development team. The result: managers can now make evidence-based decisions in support of their continuous improvement efforts.

The new tool leverages a set of analytic models that uncover correlations between mainframe developer behaviors and mainframe DevOps KPIs. These correlations represent the best available empirical evidence regarding the impact of process, training and tooling decisions on digital business outcomes. Compuware is offering zAdviser free to customers on current maintenance.
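To illustrate the kind of analysis involved (with invented numbers, not Compuware’s data or models), the pandas sketch below lines up per-team behavior measurements against DevOps KPIs and computes their correlations.

    import pandas as pd

    # Illustrative only -- invented sample data, not Compuware's models or metrics.
    # The idea: line up per-team behavior measurements against DevOps KPIs and
    # look for correlations worth investigating.
    data = pd.DataFrame({
        "debug_sessions_per_dev":   [12, 8, 15, 6, 10, 4],
        "unit_tests_run_per_day":   [30, 55, 22, 70, 41, 90],
        "lead_time_days":           [9.5, 6.0, 11.0, 4.5, 7.5, 3.0],   # KPI: speed
        "escaped_defects_per_kloc": [1.8, 1.1, 2.3, 0.7, 1.4, 0.4],    # KPI: quality
    })

    # Pearson correlation between every behavior measure and every KPI.
    print(data.corr().round(2))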

Long mainframe software backlogs are no longer acceptable. Improving mainframe DevOps has become an urgent imperative for large enterprises that find themselves even more dependent on mainframe applications—not less. According to a recent Forrester Consulting study commissioned by Compuware, 57 percent of enterprises with a mainframe run more than half of their business-critical workloads on the mainframe. That percentage is expected to increase to 64 percent by 2019, while at the same time enterprises are failing to replace the expert mainframe workforce they have lost to attrition. Hence the need for modern, automated, intelligent tools to speed the learning curve for workers groomed on Python or Node.js.

Meanwhile, IBM hasn’t exactly been twiddling its thumbs in regard to DevOps analytics for the Z. Its zAware delivers a self-contained firmware IT analytics offering that helps systems and operations professionals rapidly identify problematic messages and unusual system behavior in near real time, which systems administrators can use to take corrective actions.

ThruPut Manager brings a new web interface that offers  visually intuitive insight for the mainframe staff, especially new staff, into how batch jobs are being initiated and executed—as well as the impact of those jobs on mainframe software licensing costs.

By implementing ThruPut Manager, Compuware explains, enterprises can better safeguard the performance of both batch and non-batch applications while avoiding the significant adverse economic impact of preventable spikes in utilization as measured by Rolling 4-Hour Averages (R4HA). Reducing the R4HA is a key way data centers can contain mainframe costs.
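As a simplified illustration of the R4HA concept (not IBM’s SCRT billing calculation), the sketch below averages the most recent four hours of five-minute MSU utilization samples as each new sample arrives.

    from collections import deque

    # Simplified sketch of the rolling four-hour average (R4HA) idea: average the
    # last 48 five-minute MSU utilization samples as each new sample arrives.
    # This illustrates the concept, not IBM's SCRT billing calculation.
    WINDOW = 48  # 48 samples x 5 minutes = 4 hours

    def rolling_r4ha(msu_samples):
        window = deque(maxlen=WINDOW)
        for sample in msu_samples:
            window.append(sample)
            yield sum(window) / len(window)

    samples = [120, 125, 140, 300, 310, 305, 150, 130]  # hypothetical MSU readings
    for i, avg in enumerate(rolling_r4ha(samples)):
        print(f"interval {i}: R4HA ~= {avg:.1f} MSU")

Spikes in utilization pull the rolling average up for the next four hours, which is exactly why smoothing batch initiation matters for software licensing costs.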

More importantly,  with the new ThruPut Manager, enterprises can successfully transfer batch management responsibilities to the next generation of IT staff with far less hands-on platform experience—without exposing themselves to related risks such as missed batch execution deadlines, missed SLAs, and excess costs.

With these new releases, Compuware is providing a way to reduce the mainframe software backlog—the long growing complaint that mainframe shops cannot deliver new requested functionality fast enough—while it offers a way to replace the attrition among aging mainframe staff with young staff who don’t have years of mainframe experience to fall back on. And if the new tools lower some mainframe costs however modestly in the process, no one but IBM will complain.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Grows Quantum Ecosystem

April 27, 2018

It is good that you aren’t dying to deploy quantum computing soon because IBM readily admits that it is not ready for enterprise production now or in several weeks or maybe several months. IBM, however, continues to assemble the building blocks you will eventually need when you finally feel the urge to deploy a quantum application that can address a real problem that you need to resolve.

cryostat with prototype of quantum processor

IBM is surprisingly frank about the state of quantum today. There is nothing you can do at this point that you can’t simulate on a conventional or classical computer system. This situation is unlikely to change anytime soon either. For years to come, we can expect hybrid quantum and conventional compute environments that will somehow work together to solve very demanding problems, although most aren’t sure exactly what those problems will be when the time comes. Still at Think earlier this year IBM predicted quantum computing will be mainstream in 5 years.

Of course, IBM has some ideas of where the likely problems to solve will be found:

  • Chemistry—material design, oil and gas, drug discovery
  • Artificial Intelligence—classification, machine learning, linear algebra
  • Financial Services—portfolio optimization, scenario analysis, pricing

It has been some time since the computer systems industry had to build a radically different kind of compute discipline from scratch. Following the model of the current IT discipline, IBM began by launching the IBM Q Network, a collaboration with leading Fortune 500 companies and research institutions with a shared mission. This will form the foundation of a quantum ecosystem. The Q Network comprises hubs, which are regional centers of quantum computing R&D and ecosystem building; partners, who are pioneers of quantum computing in a specific industry or academic field; and most recently, startups, which are expected to rapidly advance early applications.

The most important of these to drive growth of quantum are the startups. To date, IBM reports eight startups and it is on the make for more. Early startups include QC Ware, Q-Ctrl, Cambridge Quantum Computing (UK), which is working on a compiler for quantum computing, 1Qbit based in Canada, Zapata Computing located at Harvard, Strangeworks, an Austin-based tool developer, QxBranch, which is trying to apply classical computing techniques to quantum, and Quantum Benchmark.

Startups get membership in the Q Network; can run experiments and algorithms on IBM quantum computers via cloud-based access; gain deeper access to APIs and advanced quantum software tools, libraries, and applications; and have the opportunity to collaborate with IBM researchers and technical SMEs on potential applications, as well as with other IBM Q Network organizations. If it hasn’t become obvious yet, the payoff will come from developing applications that solve recognizable problems. Also check out QISKit, a software development kit for quantum applications available through GitHub.

The last problem to solve is the question around acquiring quantum talent. How many quantum scientists, engineers, or programmers do you have? Do you even know where to find them? The young people excited about computing today are primarily interested in technologies to build sexy apps using Node.js, Python, Jupyter, and such.

To find the people you need to build quantum computing systems you will need to scour the proverbial halls of MIT, Caltech, and other top schools that produce physicists and quantum scientists. A scan of salaries for these people reveals $135,000- $160,000, if they are available at all.

The best guidance from IBM on starting is to start small. The industry is still at the building block stage; it is not ready to throw specific applications at real problems. In that case, sign up for IBM’s Q Network and get some of your people engaged in the opportunities to get educated in quantum.

When DancingDinosaur first heard about quantum physics he was in a high school science class decades ago. It was intriguing but he never expected to even be alive to see quantum physics becoming real, but now it is. And he’s still here. Not quite ready to sign up for QISKit and take a small qubit machine for a spin in the cloud, but who knows…

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

