IBM z13 Helps Avoid Costly Data Breaches

June 24, 2016

A global study sponsored by IBM and conducted by the Ponemon Institute found that the average cost of a data breach for companies surveyed has grown to $4 million, a 29 percent increase since 2013. With cybersecurity incidents continuing to increase (64 percent more security incidents in 2015 than in 2014), the costs are poised to grow.

World’s Most Secure System


The z13, at least, is one way to keep security costs down. It comes with a cryptographic processing unit available on every core, enabled as a no-charge feature. It also carries EAL5+ certification for its logical partitions (LPARs), a Common Criteria assurance level that verifies the separation of partitions, along with a dozen or so other built-in security features. For a full list of z13 security features click here. There also is a Redbook, Ultimate Security with the IBM z13, here. A midsize z, the z13s, brings the benefits of mainframe security and mainframe computing to smaller organizations. You can read about the z13s here on DancingDinosaur this past February.

As security threats become more complex, the researchers noted, the cost to companies continues to rise. For example, the study found that companies lose $158 per compromised record. Breaches in highly regulated industries were even more costly, with healthcare reaching $355 per record – a full $100 more than in 2013. And the number of records involved can run from the thousands to the millions.

Wow, why so costly? The researchers try to answer that too: response activities like incident forensics, communications, legal expenditures, and regulatory mandates account for 59 percent of the cost of a data breach. Conversely, leveraging an incident response team was the single biggest factor associated with reducing the cost of a breach, saving companies nearly $400,000 on average (or $16 per record). Part of these high costs may be linked to the fact that 70 percent of U.S. security executives report they don’t even have incident response plans in place.

Responding to a breach is extremely complex and time-consuming if not properly planned for. As described by the researchers, the response process consists of a minimum of four steps. Among those steps, a company must:

  • Work with IT or outside security experts to quickly identify the source of the breach and stop any more data leakage
  • Disclose the breach to the appropriate government/regulatory officials, meeting specific deadlines to avoid potential fines
  • Communicate the breach with customers, partners, and stakeholders
  • Set up any necessary hotline support and credit monitoring services for affected customers

And not even included in the researchers’ list are tasks like inventorying and identifying the data records that have been corrupted or destroyed, remediating the damaged data, and validating it against the last known clean backup copy. I’m surprised the costs aren’t even higher. Let’s not even talk about the PR damage or loss of customer goodwill. Now, aren’t you glad you have a z13?

That’s not even the worst of it. The study also found the longer it takes to detect and contain a data breach, the more costly it becomes to resolve. While breaches that were identified in less than 100 days cost companies an average of $3.23 million, breaches that were found after the 100-day mark cost over $1 million more on average ($4.38 million). The average time to identify a breach in the study was estimated at 201 days and the average time to contain a breach was estimated at 70 days. The cost of a z13 or even the lower cost z13s could justify itself by averting just one data breach.
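To put those per-record figures in perspective, here is a minimal back-of-the-envelope calculation. The per-record rates and the detection-time averages come from the study figures cited above; the 100,000-record breach size is purely a hypothetical example.

    public class BreachCostEstimate {
        public static void main(String[] args) {
            double costPerRecord = 158.0;        // study average, USD per compromised record
            double healthcarePerRecord = 355.0;  // healthcare (highly regulated) rate
            long recordsExposed = 100_000;       // hypothetical breach size

            System.out.printf("Average-industry estimate: $%,.0f%n",
                    costPerRecord * recordsExposed);
            System.out.printf("Healthcare estimate:       $%,.0f%n",
                    healthcarePerRecord * recordsExposed);

            // Detection time matters: breaches found within 100 days averaged $3.23M,
            // versus $4.38M when found later.
            double fastDetection = 3.23e6, slowDetection = 4.38e6;
            System.out.printf("Premium for slow detection: $%,.0f%n",
                    slowDetection - fastDetection);
        }
    }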

The researchers also found that companies with predefined Business Continuity Management (BCM) processes in place found and contained breaches more quickly, discovering breaches 52 days earlier and containing them 36 days faster than companies without BCM. Still, the cheapest solution is to avert breaches in the first place.

Not surprisingly, IBM is targeting incident response as an up-and-coming profit center. The company increased its investment in the incident response market with the recent acquisition of Resilient Systems, which just came out with an updated version that graphically displays the relationships between Indicators of Compromise (IOCs) and incidents in an organization’s environment. But the z13 is probably a better investment if you want to avoid data breaches in the first place.

Surprisingly, sometimes your blogger is presented as a mainframe guru. Find the latest here.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Fires a Shot at Intel with its Latest POWER Roadmap

June 17, 2016

In case you worry that IBM will abandon hardware in the pursuit of its strategic initiatives focusing on cloud, mobile, analytics, and more, stop worrying. With the announcement of its POWER roadmap at the OpenPOWER Summit earlier this spring, it appears POWER will be around for years to come. But IBM is not abandoning the strategic initiatives either; the new roadmap promises to support new types of workloads, such as real-time analytics, Linux, and hyperscale data centers, along with support for current POWER workloads.


Pictured above: POWER9 Architecture, courtesy of IBM

Specifically, IBM is offering a denser roadmap, one not tied to a single technology and not even tied solely to IBM. It draws on innovations from a handful of members of the OpenPOWER Foundation as well as support from Google. The new roadmap also signals IBM’s intention to make a serious run at Intel’s near monopoly on enterprise server processors by offering comparable or better price, performance, and features.

Google, for example, reports porting many of its popular web services to run on Power systems; its toolchain has been updated to output code for x86, ARM, or Power architectures with the flip of a configuration flag. Google, which strives to be everything to everybody, now has a highly viable alternative to Intel in terms of performance and price with POWER. At the OpenPOWER Summit early in the spring, Google made it clear it plans to build scale-out server solutions based on OpenPower.

Don’t even think, however, that Google is abandoning Intel. The majority of its systems are Intel-based. Still, POWER and the OpenPOWER community will provide a directly competitive processing alternative. To underscore the situation, Google and Rackspace announced they were working together on POWER9 server blueprints for the Open Compute Project, designs that reportedly are compatible with the 48V Open Compute racks that Google and Facebook, another hyperscale data center operator, already are working on.

Google represents another proof point that OpenPOWER is ready for hyperscale data centers. DancingDinosaur, however, is most interested in what is coming from OpenPOWER that is new and sexy for enterprise data centers, since most DancingDinosaur readers are focused on the enterprise data center. Of course, they still need ever better performance and scalability too. In that regard OpenPOWER has much for them in the works.

For starters, POWER8 is currently delivered as a 12-core, 22nm processor. POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVLink accelerators. That is sure to deliver more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even 7nm, processor based on the existing micro-architecture.

The real POWER future, arriving around 2020, will feature a new micro-architecture, sport new features and functions, and bring new technology. Expect much, if not most, of the new functionality to come from various OpenPOWER Foundation partners.

POWER9, only a year or so out, promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power server market, it also expands analytics capabilities and brings new deployment models for hyperscale, cloud, and technical computing through scale-out deployment. This will include deployment in both clustered and multiple formats. It will feature a shorter pipeline, improved branch execution, low-latency on-die cache, and PCIe Gen4.

Expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33 percent speed increase. POWER9 also will continue to advance hardware acceleration, supporting next-gen NVLink, improved coherency, enhanced CAPI, and a new 25 Gbps high-speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

As a data center manager, will a POWER9 machine change your data center dynamics? Maybe; you decide: a dual-socket POWER9 server with 32 DDR4 memory slots, two NVLink slots, three PCIe Gen4 x16 slots, and a total of 44 cores. That’s a lot of computing power in one rack.

Now IBM just has to crank out similar advances for the next z System (a z14 maybe?) through the Open Mainframe Project.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do—real-time analytics, IoT, Blockchain, and more. This is part of the digital transformation that is blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.


EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformation by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it enables an understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of change overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Managers of z data centers often shy away from modifying aging business-critical applications for fear of breaking something—if it ain’t broke, don’t fix it—often is the mantra. They also are rationing the use of their few remaining experienced z veterans, the ones with the domain expertise and deep knowledge of what turns out to be quite esoteric code. This is further aggravated by poorly documented mainframe code. The way to mitigate the risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying your mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices, as sketched below. This may require rewriting parts of them in platform-agnostic languages and Java components to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.
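As a rough illustration of what exposing one of those business-critical services through an API can look like, here is a minimal sketch using standard JAX-RS annotations. Everything in it is hypothetical: the resource path, the AccountResource class, and the AccountLookupService stand-in for whatever connector (CICS Transaction Gateway, z/OS Connect, or similar) actually invokes the CICS program.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Hypothetical REST facade over an existing mainframe routine.
    @Path("/accounts")
    public class AccountResource {

        // Placeholder for whatever connector actually calls the CICS program.
        private final AccountLookupService backend = new AccountLookupService();

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public String getAccount(@PathParam("id") String id) {
            // Delegate to the legacy routine and return its result as JSON.
            return backend.lookupAsJson(id);
        }
    }

    // Stand-in for the legacy call; included only to keep the sketch self-contained.
    class AccountLookupService {
        String lookupAsJson(String id) {
            return "{\"accountId\":\"" + id + "\",\"status\":\"ACTIVE\"}";
        }
    }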

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Medical Mutual Gains Fast Access to z/OS Log Data via Splunk and Ironstream

June 3, 2016

Running Syncsort’s Ironstream and leveraging Splunk Enterprise, Medical Mutual of Ohio has now implemented mainframe security monitoring in real time through the Splunk Enterprise platform. One goal is to help protect customer information stored in DB2 from unauthorized access. Syncsort’s Ironstream, a utility, collects and forwards z/OS log data, including security data, to Splunk Enterprise and Splunk Enterprise Security.


z/OS security data, courtesy of Syncsort

“We’ve always had visibility. Now we can get it faster, in real time directly from the mainframe,” said the insurer’s enterprise security supervisor. Previously, the company would do a conventional data transfer, which could take several hours. The new approach, sometimes referred to as a big iron-to-big data strategy, now delivers security log data in near real time. This enables the security team to correlate all the security data from across the enterprise to effectively and quickly gain visibility into user-authentication data and access attempts tracked on the mainframe. And they can do it without needing specialized expertise or different monitoring systems for z/OS.

Real-time analytics, including real-time predictive analytics, are increasingly attractive as solutions for the growing security challenges organizations are facing. These challenges are due, in large part, to the explosion of transaction activity driven by mobile computing and, soon, IoT and Blockchain, most of which eventually finds its way to the mainframe. All of these present immediate security concerns and require fast, nearly instant security decisions. Even cloud usage, which one would expect to be mainstream in enterprises by now, often is curtailed due to security fears.

With the Ironstream and Splunk combination, Medical Mutual can see previously slow-to-access mainframe data alongside other security information it was already analyzing in Splunk Enterprise. Splunk Enterprise enables a consolidated enterprise-wide view of machine data collected across the business, which makes it possible to correlate events that might not raise suspicion alone but could be indicative of a threat when seen together.

The deployment proved to be straightforward. Medical Mutual’s in-house IT team set it up in a week with Syncsort answering deployment questions to assist. Although there are numerous tools to capture log data from the mainframe, the insurer chose to go with the Splunk-Ironstream combination because it already was using Splunk in house for centralized logging. Adding mainframe security logs was an easy step. “This was affordable and it saved us from having to learn another product,” the security supervisor added. Medical Mutual runs a z13, model 409 with Ironstream.

According to the announcement, having Ironstream deliver z/OS log data to Splunk Enterprise enables the organization to:

  • Track security events and data from multiple platforms including IBM z/OS mainframes, Windows and distributed servers and correlate the information in Splunk Enterprise for better security.
  • Diagnose and respond to high severity security issues more quickly since data from across the entire enterprise is being monitored in real time.
  • Provide monthly and daily reporting with an up-to-the-minute account of unusual user activity.
  • Detect security anomalies and analyze their trends – the cornerstone of Security Information and Event Management (SIEM) strategies.

Real-time monitoring with analytics has proven crucial for security. You can actually detect fraud while it is taking place and before serious damage is done. It is much harder to recoup losses hours, days, or, what is often the case, months later.

The Splunk platform can handle massive amounts of data from different formats and indexes and decipher and correlate security events through analytics. Ironstream brings the ability to stream mainframe security data for even greater insights, and Ironstream’s low overhead keeps mainframe processing costs low.
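Conceptually, streaming an event off one system into a central collector is just a matter of pushing newline-delimited records over the network. The sketch below illustrates only that idea, not Ironstream’s actual mechanism; the host name, port, and sample event text are all made-up placeholders standing in for whatever TCP data input a Splunk administrator has configured.

    import java.io.OutputStreamWriter;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class LogForwarderSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical collector endpoint; host and port are assumptions.
            try (Socket socket = new Socket("splunk.example.com", 1514);
                 PrintWriter out = new PrintWriter(
                         new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8), true)) {

                // Each event goes out as one newline-delimited line.
                out.println("2016-06-03T12:00:01 SYS1 sample security event: access denied");
                out.println("2016-06-03T12:00:02 SYS1 sample security event: logon success");
            }
        }
    }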

To try the big iron-to-big data strategy organizations can download a free Ironstream Starter Edition and begin streaming z/OS Syslog data into Splunk solutions. Unlike typical technology trials, the Starter Edition is not time-limited and may be used in production at no charge. This includes access to the Ironstream applications available for download on Splunkbase.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC Annual Mainframe Survey Starts Now

May 24, 2016

Each year, BMC conducts a survey of mainframe users to identify trends in mainframe usage. The 2016 survey cycle kicks off May 24, and BMC will be collecting data for its eleventh survey, which will be published this fall. The company is reaching out to all those involved in mainframe management or operations to participate.

The BMC mainframe survey is one of the largest in the industry, with more than 1,200 mainframe professionals and executives participating. DancingDinosaur almost always covers the new survey; you can see recent coverage here and here. The results of the survey are used by vendors, industry analysts, media, and technical and executive users to make significant decisions and draw conclusions on the current and future state of mainframe usage.

To participate in the 20-minute survey, technical IT staff involved in mainframe management or operations, mainframe executives, managers, or technical architects should visit http://bmc.co1.qualtrics.com/jfe/form/SV_81caWGB4gUqRncF?survey_source=external. The survey will be open from May 24 to June 6, 2016.

IBM Advances SSD with Phase-Change Memory Breakthrough

May 20, 2016

Facing an incessant demand to speed data through computers, the latest IBM storage memory advance, announced earlier this week, will ratchet up the speed another notch or two. Scientists at IBM Research have demonstrated storing 3 bits of data per cell using phase-change memory (PCM). Until now, PCM had been tried but had never caught on for a variety of reasons. By storing 3 bits per cell, IBM can boost PCM capacity and speed and lower the cost.


IBM multi-bit PCM chip connected to a standard integrated circuit board.

Pictured above, the chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture, IBM explained. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on a doped-chalcogenide alloy and were integrated into the prototype chip, which serves as a characterization vehicle in 90 nm CMOS baseline technology.

Although PCM has been around for some years, only with this latest advance is it attracting the industry’s attention as a potential universal memory technology, based on its combination of read/write speed, endurance, non-volatility, and density. Specifically, PCM doesn’t lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles. Primary use cases will be capturing the massive volumes of data expected from mobile devices and the Internet of Things.

PCM, in effect, adds another tier to the storage/memory hierarchy, coming in between DRAM and Flash at the upper levels of the storage performance pyramid. The IBM researchers envision both standalone PCM and hybrid applications, which combine PCM and flash storage together. For example, PCM can act as an extremely fast cache by storing a mobile phone’s operating system and enabling it to launch in seconds. For enterprise data centers, IBM envisions entire databases could be stored in PCM for blazing fast query processing of time-critical online applications, such as financial transactions.

As reported by CNET, PCM fits neatly between DRAM and flash. DRAM is 5-10x faster at retrieving data than PCM, while PCM is about 70x faster than flash. IBM reportedly expects PCM to be cheaper than DRAM, eventually becoming as cheap as flash (of course flash keeps getting cheaper too). PCM’s ability to hold 3 bits of data rather than 2 bits, PCM’s previous best, enables packing more data into a chip, which lowers the cost of PCM storage and boosts its competitive position against technologies like flash and DRAM.

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” wrote Haris Pozidis, key researcher and manager of non-volatile memory research at IBM Research, in the published announcement. “Reaching 3 bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

IBM explains how PCM works: PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively. In digital systems, data is stored as a 0 or a 1. To store a 0 or a 1 on a PCM cell, a high or medium electrical current is applied to the material. A 0 can be programmed to be written in the amorphous phase or a 1 in the crystalline phase, or vice versa. Then to read the bit back, a low voltage is applied.

To achieve multi-bit storage, IBM scientists have developed two innovative enabling technologies: 1) a set of drift-immune cell-state metrics and 2) drift-tolerant coding and detection schemes. These new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell’s electrical conductivity over time. The coding and detection schemes provide additional robustness for the stored data. As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility.
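To see why drift-tolerant detection matters, consider a toy model of a multi-bit cell: 3 bits are written as one of eight nominal levels, and readback simply picks the nearest level. This sketch is purely illustrative; the level values and the drift number are invented, and IBM’s actual drift-immune metrics and coding schemes are far more sophisticated than nearest-level thresholding.

    public class TriLevelCellSketch {
        // Eight nominal levels, one per 3-bit symbol (arbitrary units, illustrative only).
        private static final double[] LEVELS = {0, 1, 2, 3, 4, 5, 6, 7};

        // "Read" a cell by mapping a measured value back to the nearest nominal level.
        static int decode(double measured) {
            int best = 0;
            for (int i = 1; i < LEVELS.length; i++) {
                if (Math.abs(measured - LEVELS[i]) < Math.abs(measured - LEVELS[best])) {
                    best = i;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            int symbol = 0b101;               // 3-bit value to store
            double written = LEVELS[symbol];  // programmed cell state
            double drifted = written + 0.4;   // invented drift/noise accumulated over time

            System.out.println("stored " + symbol + ", read back " + decode(drifted));
            // Drift below half the level spacing still decodes correctly; larger drift
            // would flip bits, which is why drift-immune metrics and coding matter.
        }
    }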

Combined, these advancements address the key challenges of multi-bit PCM (drift, variability, temperature sensitivity, and endurance cycling), according to IBM. The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board.

Expect to see PCM first in Power Systems. At the 2016 OpenPOWER Summit in San Jose, CA, last month, IBM scientists demonstrated PCM attached to POWER8-based servers (made by IBM and TYAN® Computer Corp.) via the CAPI (Coherent Accelerator Processor Interface) protocol, which speeds the data to storage or memory. This technology leverages the low latency and small access granularity of PCM, the efficiency of the OpenPOWER architecture, and the efficiency of the CAPI protocol, an example of the OpenPOWER Foundation in action. Pozidis suggested PCM could be ready by 2017; maybe, but don’t bet on it. IBM still needs to line up chip makers to produce it in commercial quantities, among other things.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

BMC MainView for Java Manages Critical Digital Business

May 16, 2016

A large European financial services firm increasingly handles a lot of critical transaction functions with Java running through CICS and WebSphere.  As the firm looks forward, its managers see Java playing a bigger and more critical role in its core business as it shifts more of its business functionality to agile programming in Java. This firm is not even thinking about abandoning its workhorse COBOL code ever, but all new work is being directed to Java.


With that in mind, BMC last week announced MainView for Java Environments, part of BMC’s MainView integrated systems management suite of tools that provides insight into how Java is consuming resources and affecting application performance on the z System.  It is no surprise, therefore, that the firm became an early beta user for MainView for Java Environments.

According to a recent BMC survey, 93 percent of mainframe organizations said Java usage is growing or steady, and Java is the language of choice for writing new or rewriting existing mainframe applications. BMC MainView for Java Environments provides insight into Java resource usage and how it impacts other workloads and applications. For example, it automatically discovers all the Java Virtual Machines (JVMs) across z/OS. That alone can help with identifying performance problems in an effort to find and fix problems fast.

Java is the key to both performance and cost savings because it can run on zIIP assist processors. Java workloads, however, can affect performance and availability on the mainframe, as they consume system resources without regard for the needs of other applications or services, which is another reason why the zIIP is essential. Also, an integrated management approach gives IT operations a holistic view of the environment to quickly and easily discover Java Virtual Machines (JVMs) and to manage the effect of their resource consumption on application performance.
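To get a feel for the kind of per-JVM resource data such a monitor works from, the standard java.lang.management MXBeans already expose heap usage, uptime, and thread counts from inside any JVM. The sketch below simply prints those numbers for the JVM it runs in; a product like MainView obviously discovers and correlates this kind of data across many JVMs on z/OS, which is not shown here.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;
    import java.lang.management.RuntimeMXBean;
    import java.lang.management.ThreadMXBean;

    public class JvmSnapshot {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("JVM %s, up %d ms%n", runtime.getName(), runtime.getUptime());
            System.out.printf("Heap used: %,d of %,d bytes%n", heap.getUsed(), heap.getMax());
            System.out.printf("Live threads: %d%n", threads.getThreadCount());
        }
    }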

Java was the first object-oriented programming language DancingDinosaur tried. Never got good enough to try it on real production work, but here’s what made it appealing: it is fully object oriented, produces truly portable write-once, run-anywhere code (mainly because it compiles to Java virtual machine bytecode), and has automatic garbage collection. For a run-of-the-mill programmer, Java was a joy compared to C or, ugh, COBOL. Some of the new languages becoming popular today, the ones driving mobile, cloud, and analytics apps, look even easier, but DancingDinosaur would feel too embarrassed to sit in a programming class with twenty-somethings the age of his daughters.

Java usage today, according to the BMC survey, is growing or steady, and Java has become the language of choice for writing new or rewriting existing mainframe applications. The only drawback may be that Java workloads can affect performance and resource availability on the mainframe, as JVMs consume system resources oblivious to the needs of other applications or services or to the cost of uncontrolled resource consumption, which is what unrestrained Java produces. An integrated management approach that allows for a holistic view of the environment can quickly and easily discover JVMs and constrain the effect of their resource consumption on application performance, offsetting that drawback.

Explained Tim Grieser, program vice president for IDC’s Enterprise System Management Software: “Since Java manages its own resources it can consume excessive amounts of processor time and memory resources leading to performance or availability problems if not proactively managed.” The key being proactively managed. BMC’s MainView for Java Environments promises exactly that kind of proactive management by monitoring z/OS Java runtime environments and providing a consolidated view of all resources being consumed. This will enable system admins and operators to identify and manage performance issues before they impact end users.

“Java on the mainframe is being used to develop and deploy new applications faster and more economically to meet dynamically changing digital business needs and to take advantage of widely available programming skills,” IDC’s Grieser continued. Something like BMC’s MainView for Java Environments can be used to constrain Java. IBM’s Omegamon can fulfill a similar function.

According to the financial firm’s beta test manager, BMC’s MainView for Java Environments allows Java to be used effectively, unlocking its potential on the mainframe, which is vital in a changing application and systems environment, as part of an integrated performance management solution that discovers and monitors JVMs. As such, it provides a single graphical console that enables you to quickly understand a Java application’s impact on resources and its effect on the performance of other applications and transactions. The solution promises to improve application performance and ensure availability while reducing Mean Time to Repair (MTTR) and lowering Monthly License Charges (MLC) by monitoring zIIP offloading, which is the key to both performance and cost management.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and occasional wanna-be programmer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM’s Strategic Initiatives Gain New All-Flash Storage

May 6, 2016

Flash storage must be the latest rage among enterprise storage vendors.  Last week IBM introduced three new all-flash storage arrays, driving down latency and price/gigabyte to unheard of levels (minimum latency of 250μs, all-flash storage as low as $1.50 per gigabyte). Earlier this week EMC announced new all-flash arrays for its Unity series at prices under $18,000 (under $10,000 for hybrid arrays.) Flash storage has long beaten hard disk in terms of cost per IOPS, but now it is rivaling hard disk in terms of cost/gigabyte.


IBM A9000 All-Flash Array

OK, it looks a little—uh—boxy to say the least. But the new FlashSystem A9000 is packed with storage goodies. It comes fully configured, which helps drive down the cost of implementing an all-flash environment. Its sister, the FlashSystem A9000R, brings a grid architecture that provides for easy scaling up to the petabyte range. Both FlashSystems incorporate data reduction features, including pattern removal, deduplication and real-time compression, as well as IBM FlashCore technology to deliver consistent low latency performance. As noted above, they are priced as low as $1.50 per gigabyte.

Driving IBM’s latest interest in flash storage are its strategic initiatives, starting with cloud computing. Consumers today, notes IBM, are demanding cloud-based applications that are fast, easy, and intelligent. That means minimal latency. Cloud users are demanding sub-second response times, especially when accessing critical data. They also are demanding that cloud providers deliver a unique, personalized, and positive customer experience.

To deliver this, IBM is turning to hardware innovation, specifically its MicroLatency technology, to transfer data within the flash array instead of adding another layer of software. MicroLatency technology inserts FPGAs (hardware) that connect and communicate directly with the flash and RAID controllers, eliminating the latency of software and even firmware. Instead, the FlashSystems let hardware talk directly with hardware.

In addition, IBM is packing the new FlashSystem arrays with features designed to solve cloud requirements such as quality-of-service (QoS) to prevent the noisy neighbor problems with application performance. The new arrays also feature secure multi-tenancy, thresholding, and easy-to-deploy grid scale-out capabilities.

The z System platform is not being ignored in all of this. IBM is including a new DS model, the all-flash IBM DS8888, optimized for enterprise-class servers. With the all-flash DS8888, customer databases and data-intensive applications are accelerated, resulting in improved business performance and customer satisfaction.

Specifically, the DS8888 brings faster decision making and improved customer serviceability, with 4x the performance of previous generations and accelerated response time for mission-critical applications. The flash storage delivers up to 2.5 million IOPS, the result of having been built on the POWER8 processor. It also enables organizations to streamline operations through the performance of an all-flash architected solution aligned to provide the deepest integration with System z environments. For instance, IBM promises the most robust FICON connectivity through an architecture optimized for the mainframe’s 4K cache segments.

In addition, the DS8888 promises 24×7 access to data and applications through superior business continuity on high-demand transaction processing workloads while delivering top operational performance through its all-flash architecture. It goes beyond the usual high-end five-nines availability to deliver six-nines availability, which translates into a mere 2.59 seconds of downtime per month. Other availability features include flexible replication (IBM FlashCopy, Metro Mirror, Global Mirror, Metro/Global Mirror, Global Copy, and Multiple Target Peer-to-Peer Remote Copy). In the early years of flash, reliability and availability were a concern. With the DS8888 and six-nines availability they aren’t any more.
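The six-nines figure is easy to check: 99.9999 percent availability leaves a 0.0001 percent downtime budget, which over a 30-day month works out to roughly 2.6 seconds, as the quick calculation below shows.

    public class AvailabilityMath {
        public static void main(String[] args) {
            double availability = 0.999999;           // "six nines"
            double secondsPerMonth = 30 * 24 * 3600;  // 30-day month = 2,592,000 seconds

            double downtime = secondsPerMonth * (1 - availability);
            System.out.printf("Allowed downtime per month: %.2f seconds%n", downtime);
            // Prints roughly 2.59 seconds, matching the figure quoted above.
        }
    }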

Finally, it comes with a smorgasbord of security and efficiency goodies, including self-encrypting flash drives, the Key Management Interoperability Protocol (KMIP), the syslog protocol, an intuitive GUI (IBM has learned a few tricks from Apple), innovative storage software licensing, RESTful and OpenStack APIs to connect workloads between private and public clouds, and thin provisioning for maximum utilization and reclamation of capacity from deleted data.

The all-flash solutions announced last week complement IBM’s existing all-flash portfolio, including the FlashSystem 900 and V9000, which also leverage IBM’s FlashCore technology. IBM’s midrange all-flash solutions consist of all-flash versions of IBM’s Storwize family, which offers the performance needed for real-time insights from business data combined with advanced management functions. IBM’s big data all-flash solution delivers high-density, multi-petabyte scale and a low-cost flash option ideal for industries such as media, genomics, and life sciences.

DancingDinosaur used to be hired to write papers about the enterprise cost-performance tradeoffs between hard disk and SSD/flash. No matter how expensive flash was at any given point, the cost per IOPS always favored flash and the cost per gigabyte always favored hard disk. That’s no longer an analysis worth even making today.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM Drives Platforms to the Cloud

April 29, 2016

IBM hasn’t been shy about its shift of focus from platforms and systems to cloud, mobile, analytics, and cognitive computing. But it didn’t hit home until last week’s release of 1Q2016 financials, which mentioned the z System just once. For the quarter, IBM systems hardware and operating systems software revenues (lumped into one category, almost an afterthought) rang up $1.7 billion, down 21.8 percent.

This is ugly, and DancingDinosaur isn’t even a financial analyst. After the z System showed attractive revenue growth through all of 2015, suddenly it’s part of a loss. You can’t even find the actual numbers for z or Power in the new report format. As IBM notes, the company has revised its financial reporting structure to reflect the transformation of the business and provide investors with increased visibility into the company’s operating model by disclosing additional information on its strategic imperatives revenue by segment. BTW, IBM did introduce new advanced storage this week, which was part of the systems hardware loss too. DancingDinosaur will take up the storage story here next week.


But the 1Q2016 report was last week. To further emphasize its shift, IBM this week announced that it was boosting support of OpenStack’s RefStack project, which is intended to advance a common language between clouds and facilitate interoperability across clouds. DancingDinosaur applauds that, but if you are a z data center manager you had better take note that the z, along with all the IBM platforms, mainly Power and storage, is being pushed to the back of the bus behind IBM’s strategic imperatives.

DancingDinosaur supports the strategic initiatives, and you can throw blockchain and IoT in with them too. These initiatives will ultimately save the mainframe data center. All the transactions and data swirling around and through these initiatives eventually need to land in a safe, secure, utterly reliable place where they can be processed in massive volume, kept accessible, highly available, and protected for subsequent use, for compliance, and for a variety of other purposes. That place most likely will be the z data center. It might be on premise or in the cloud, but if organizations need rock-solid transaction performance, security, availability, scalability, and such, they will want the z, which will do it better and be highly price competitive. In short, the z data center provides the ideal back end for all the various activities going on through IBM’s strategic initiatives.

The z also has a clear connection to OpenStack. Two years ago IBM announced it was expanding its support of open technologies by providing advanced OpenStack integration and cloud virtualization and management capabilities across IBM’s entire server portfolio through IBM Cloud Manager with OpenStack. According to IBM, Cloud Manager with OpenStack would provide support for the latest OpenStack release, dubbed Icehouse at that time, and full access to the complete core OpenStack API set to help organizations ensure application portability and avoid vendor lock-in. It also extends cloud management support to the z, in addition to Power Systems, PureFlex/Flex Systems, System x (which was still around then), or any other x86 environment. It also would provide support for IBM z/VM on the z and PowerVC for PowerVM on Power Systems to add more scalability and security to Linux environments.

At the same time IBM also announced it was beta testing a dynamic, hybrid cloud solution on the IBM Cloud Manager with OpenStack platform. That would allow workloads requiring additional infrastructure resources to expand from an on premise cloud to remote cloud infrastructure.  Since that announcement, IBM has only gotten more deeply enamored with hybrid clouds.  Again, the z data center should have a big role as the on premise anchor for hybrid clouds.

With the more recent announcement, RefStack, officially launched last year and to which IBM is the lead contributor, becomes a critical pillar of IBM’s commitment to ensuring an open cloud, helping to advance the company’s long-term vision of mitigating vendor lock-in and enabling developers to use the best combination of cloud services and APIs for their needs. The new functionality includes improved usability, stability, and other upgrades, ensuring better cohesion and integration of any cloud workloads running on OpenStack.

RefStack testing ensures core interoperability across the OpenStack ecosystem, and passing RefStack is a prerequisite for all OpenStack certified cloud platforms. By working on cloud platforms that are OpenStack certified, developers will know their workloads are portable across IBM Cloud and the OpenStack community. For now, RefStack acts as the primary resource for cloud providers to test OpenStack compatibility; it also maintains a central repository and API for test data, allowing community members visibility into interoperability across OpenStack platforms.

One way or another, your z data center will have to coexist with hybrid clouds and the rest of IBM’s strategic imperatives or face being displaced. With RefStack and the other OpenStack tools this should not be too hard. In the meantime, prepare your z data center for new incoming traffic from the strategic imperatives, Blockchain, IoT, Cognitive Computing, and whatever else IBM deems strategic next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Puts Blockchain on the z System for a Disruptive Edge

April 22, 2016

Get ready for Blockchain to alter your z-based transaction environment. Blockchain brings a new class of distributed ledger applications. Bitcoin, the first blockchain system to grab mainstream data center attention, is rudimentary compared to what the Linux Foundation’s open Hyperledger Project will deliver.


As reported in CIO Magazine, Blockchain enables a distributed ledger technology with the ability to settle transactions in seconds or minutes automatically via computers. This is a faster, potentially more secure settlement process than is used today among financial institutions, where clearing houses and other third-party intermediaries validate accounts and identities over a few days. Financial services, as well as other industries, are exploring blockchain for conducting transactions as diverse as trading stock, buying diamonds, and streaming music.

IBM, in conjunction with the Linux Foundation’s Hyperledger Project, expects the creation and management of Blockchain network services to power a new class of distributed ledger applications. With Hyperledger and Blockchain, developers can create digital assets and accompanying business logic to more securely and privately transfer assets among members of a permissioned Blockchain network running on IBM LinuxONE or Linux on z.

In addition, IBM will introduce fully integrated DevOps tools for creating, deploying, running, and monitoring Blockchain applications on the IBM Cloud and enable applications to be deployed on IBM z Systems. Furthermore, by using Watson as part of an IoT platform, IBM intends to make it possible for information from devices such as RFID-based locations, barcode-scan events, or device-recorded data to be used with IBM Blockchain apps. Clearly, IBM is looking at Blockchain for more than just electronic currency. In fact, Blockchain will enable a wide range of secure transactions between parties without the use of intermediaries, which should speed transaction flow. For starters, the company brought 44,000 lines of code to the effort as a founding member of the Linux Foundation’s Hyperledger Project.

The z, with its rock-solid reputation for no-fail, extremely high-volume, high-performance, and secure processing, is a natural for Blockchain applications and systems. In the process it brings the advanced cryptography, security, and reliability of the z platform. No longer content just to handle traditional backend systems-of-record processing, IBM is pushing to bring the z into new areas that leverage the strength and flexibility of today’s mainframe. As IoT ramps up, expect the z to handle escalating volumes of IoT traffic, mobile traffic, and now blockchain distributed ledger traffic. Says IBM: “We intend to support clients looking to deploy this disruptive technology at scale, with performance, availability and security.” That statement has z written all over it.

Further advancing the z into new areas, IBM reemphasized its advantages through built-in hardware accelerators for hashing and digital signatures, tamper-proof security cards, unlimited random keys to encode transactions, and integration to existing business data with Smart Contract APIs. IBM believes the z could take blockchain performance to new levels with the world’s fastest commercial processor, which is further optimized through the use of hundreds of internal processors. The highly scalable I/O system can handle massive amounts of transactions and the optimized network between virtual systems in a z Systems cloud can speed up blockchain peer communications.
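The core data structure behind all of these systems is simple enough to show in a few lines: each block records the cryptographic hash of its predecessor, so tampering with any earlier entry breaks every hash that follows. The sketch below is a bare-bones illustration of that chaining using SHA-256; it is not Hyperledger code and leaves out everything that makes a real ledger interesting (consensus, permissioning, smart contracts, digital signatures).

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.List;

    public class LedgerSketch {

        static class Block {
            final String previousHash;
            final String payload;
            final String hash;
            Block(String previousHash, String payload, String hash) {
                this.previousHash = previousHash;
                this.payload = payload;
                this.hash = hash;
            }
        }

        static String sha256(String text) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(text.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            List<Block> chain = new ArrayList<>();
            String previous = "0";  // genesis marker
            for (String payload : new String[]{"pay A 10", "pay B 5", "pay C 7"}) {
                String hash = sha256(previous + payload);  // each block commits to its predecessor
                chain.add(new Block(previous, payload, hash));
                previous = hash;
            }
            for (Block b : chain) {
                System.out.println(b.payload + " -> " + b.hash);
            }
            // Altering any earlier payload changes its hash, and therefore every hash after it.
        }
    }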

An IBM Blockchain DevOps service will also enable blockchain applications to be deployed on the z, ensuring an additional level of security, availability and performance for handling sensitive and regulated data. Blockchain applications can access existing transactions on distributed servers and z through APIs to support new payment, settlement, supply chain, and business processes.

Use Blockchain on the z to create and manage Blockchain networks to power the emerging new classes of distributed ledger applications.  According to IBM, developers can create digital assets and the accompanying business logic to more securely and privately transfer assets among members of a permissioned Blockchain network. Using fully integrated DevOps tools for creating, deploying, running, and monitoring Blockchain applications on IBM Cloud, data centers can enable applications to be deployed on the z. Through the Watson IoT Platform, IBM will make it possible for information from devices such as RFID-based locations, barcode scans, or device-recorded data to be used with IBM Blockchain.

However, Blockchain remains a nascent technology. Although the main use cases already are being developed and deployed, many more ideas for blockchain systems and applications are only just being articulated. Nobody, not even the Linux Foundation, knows what ultimately will shake out. Blockchain enables developers to easily build secure distributed ledgers that can be used to exchange most anything of value fast and securely. Now is the time for data center managers at z shops to think about what they might want to do with such extremely secure transactions on their z.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

