Posts Tagged ‘zEC12’
June 24, 2016
A global study sponsored by IBM and conducted by the Ponemon Institute found that the average cost of a data breach for companies surveyed has grown to $4 million, a 29 percent increase since 2013. With cybersecurity incidents continuing to increase (64 percent more security incidents in 2015 than in 2014), the costs are poised to grow.

z13: the world’s most secure system
The z13, at least, is one way to keep security costs down. It comes with a cryptographic processor unit available on every core, enabled as a no-charge feature. It also provides EAL5+ support, a regulatory certification for LPARs that verifies the separation of partitions to further improve security, along with a dozen or so other built-in security features. For a full list of z13 security features click here. There also is a Redbook, Ultimate Security with the IBM z13, here. A midsize z, the z13s, brings the benefits of mainframe security and mainframe computing to smaller organizations. You can read about the z13s here on DancingDinosaur this past February.
As security threats become more complex, the researchers noted, the cost to companies continues to rise. For example, the study found that companies lose $158 per compromised record. Breaches in highly regulated industries were even more costly, with healthcare reaching $355 per record – a full $100 more than in 2013. And the number of records involved can run from the thousands to the millions.
Wow, why so costly? The researchers try to answer that too: leveraging an incident response team was the single biggest factor associated with reducing the cost of a data breach – saving companies nearly $400,000 on average (or $16 per record). In fact, response activities like incident forensics, communications, legal expenditures and regulatory mandates account for 59 percent of the cost of a data breach. Part of these high costs may be linked to the fact that 70 percent of U.S. security executives report they don’t even have incident response plans in place.
The process of responding to a breach is extremely complex and time-consuming if not properly planned for. As described by the researchers, responding to a breach involves a minimum of four steps. Among them, a company must:
- Work with IT or outside security experts to quickly identify the source of the breach and stop any more data leakage
- Disclose the breach to the appropriate government/regulatory officials, meeting specific deadlines to avoid potential fines
- Communicate the breach with customers, partners, and stakeholders
- Set up any necessary hotline support and credit monitoring services for affected customers
And not even included in the researchers’ list are tasks like inventorying and identifying the data records that have been corrupted or destroyed, remediating the damaged data, and validating it against the last known clean backup copy. I’m surprised the costs aren’t even higher. Let’s not even talk about the PR damage or loss of customer goodwill. Now, aren’t you glad you have a z13?
That’s not even the worst of it. The study also found the longer it takes to detect and contain a data breach, the more costly it becomes to resolve. While breaches that were identified in less than 100 days cost companies an average of $3.23 million, breaches that were found after the 100-day mark cost over $1 million more on average ($4.38 million). The average time to identify a breach in the study was estimated at 201 days and the average time to contain a breach was estimated at 70 days. The cost of a z13 or even the lower cost z13s could justify itself by averting just one data breach.
The researchers also found that companies with predefined Business Continuity Management (BCM) processes in place found and contained breaches more quickly, discovering breaches 52 days earlier and containing them 36 days faster than companies without BCM. Still, the cheapest solution is to avert breaches in the first place.
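The study’s per-record figures lend themselves to a quick back-of-the-envelope model. The sketch below uses the article’s numbers ($158 per record on average, $16 per record saved by an incident response team, $355 per healthcare record); the 25,000-record breach size is hypothetical, chosen because it is roughly what the study’s ~$400,000 average response-team savings implies at $16 per record.

```python
# Breach cost model built from the 2016 Ponemon/IBM study figures
# cited above. The breach size used below is a hypothetical example.

COST_PER_RECORD = 158               # average, all industries (USD)
COST_PER_RECORD_HEALTHCARE = 355    # highly regulated industry
RESPONSE_TEAM_SAVINGS_PER_RECORD = 16

def breach_cost(records, per_record=COST_PER_RECORD,
                incident_response_team=False):
    """Estimate total breach cost for a given number of compromised records."""
    per_record_cost = per_record
    if incident_response_team:
        per_record_cost -= RESPONSE_TEAM_SAVINGS_PER_RECORD
    return records * per_record_cost

# A hypothetical 25,000-record breach:
print(breach_cost(25_000))                               # 3950000
print(breach_cost(25_000, incident_response_team=True))  # 3550000
print(breach_cost(25_000, per_record=COST_PER_RECORD_HEALTHCARE))  # 8875000
```

Even at the all-industries average, a modest breach lands in z13 territory price-wise, which is the post’s point about the machine paying for itself by averting a single incident.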
Not surprisingly, IBM is targeting the incident response business as an up-and-coming profit center. The company increased its investment in the incident response market with the recent acquisition of Resilient Systems, which just came out with an updated version that graphically displays the relationships between Indicators of Compromise (IOCs) and incidents in an organization’s environment. But the z13 is probably a better investment if you want to avoid data breaches in the first place.
Surprisingly, sometimes your blogger is presented as a mainframe guru. Find the latest here.
DancingDinosaur is Alan Radding, a veteran information technology analyst writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:(BCM), business continuity management, Cryptographic Processor Unit, EAL5+, incident forensics, incident response plan, Linux, mainframe, mobile, Resilient Systems, System z, technology, z13, z13s, zEC12, zEnterprise
June 3, 2016
Running Syncsort’s Ironstream and leveraging Splunk Enterprise, Medical Mutual of Ohio has now implemented mainframe security monitoring in real time through the Splunk® Enterprise platform. One goal is to help protect customer information stored in DB2 from unauthorized access. Syncsort’s Ironstream, a utility, collects and forwards z/OS log data, including security data, to Splunk Enterprise and Splunk Enterprise Security.

z/OS security data, courtesy of Syncsort
“We’ve always had visibility. Now we can get it faster, in real time directly from the mainframe,” said the insurer’s enterprise security supervisor. Previously, the company would do a conventional data transfer, which could take several hours. The new approach, sometimes referred to as a big iron-to-big data strategy, now delivers security log data in near real time. This enables the security team to correlate all the security data from across the enterprise to effectively and quickly gain visibility into user-authentication data and access attempts tracked on the mainframe. And they can do it without needing specialized expertise or different monitoring systems for z/OS.
Real-time analytics, including real-time predictive analytics, are increasingly attractive as solutions for the growing security challenges organizations are facing. These challenges are due, in large part, to the explosion of transaction activity driven by mobile computing and, soon, IoT and blockchain, most of which eventually finds its way to the mainframe. All of these present immediate security concerns and require fast, nearly instant security decisions. Even cloud usage, which one would expect to be mainstream in enterprises by now, often is curtailed due to security fears.
With the Ironstream and Splunk combination, Medical Mutual can see previously slow-to-access mainframe data alongside other security information it was already analyzing in Splunk Enterprise. Splunk Enterprise enables a consolidated enterprise-wide view of machine data collected across the business, which makes it possible to correlate events that might not raise suspicion alone but could be indicative of a threat when seen together.
The deployment proved to be straightforward. Medical Mutual’s in-house IT team set it up in a week with Syncsort answering deployment questions to assist. Although there are numerous tools to capture log data from the mainframe, the insurer chose to go with the Splunk-Ironstream combination because it already was using Splunk in house for centralized logging. Adding mainframe security logs was an easy step. “This was affordable and it saved us from having to learn another product,” the security supervisor added. Medical Mutual runs a z13, model 409 with Ironstream.
According to the announcement, having Ironstream deliver z/OS log data to Splunk Enterprise enables Medical Mutual to:
- Track security events and data from multiple platforms including IBM z/OS mainframes, Windows and distributed servers and correlate the information in Splunk Enterprise for better security.
- Diagnose and respond to high severity security issues more quickly since data from across the entire enterprise is being monitored in real time.
- Provide monthly and daily reporting with an up-to-the-minute account of unusual user activity.
- Detect security anomalies and analyze their trends – the cornerstone of Security Information and Event Management (SIEM) strategies.
Real-time monitoring with analytics has proven crucial for security. You can actually detect fraud while it is taking place and before serious damage is done. It is much harder to recoup losses hours, days, or, as is often the case, months later.
The Splunk platform can handle massive amounts of data from different formats and indexes and decipher and correlate security events through analytics. Ironstream brings the ability to stream mainframe security data for even greater insights, and Ironstream’s low overhead keeps mainframe processing costs low.
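Ironstream’s actual forwarding mechanism and Splunk’s search language are beyond the scope of this post, but the underlying idea, correlating streamed authentication events and flagging anomalies, can be sketched in a few lines of plain Python. The log format, field names, and threshold below are hypothetical stand-ins, not actual RACF/SMF record layouts:

```python
from collections import Counter

# Hypothetical syslog-style records; a real deployment would receive
# z/OS security events forwarded by a collector such as Ironstream
# and correlate them in Splunk Enterprise Security.
FAILED_AUTH_MARKER = "AUTH FAILED"
THRESHOLD = 3  # flag a user after this many failed attempts

def flag_suspicious_users(log_lines, threshold=THRESHOLD):
    """Count failed-auth events per user and flag repeat offenders."""
    failures = Counter()
    for line in log_lines:
        if FAILED_AUTH_MARKER in line:
            # assume a 'user=<id>' token appears in each auth record
            user = next(tok.split("=", 1)[1]
                        for tok in line.split() if tok.startswith("user="))
            failures[user] += 1
    return {u: n for u, n in failures.items() if n >= threshold}

sample = [
    "08:01:02 AUTH FAILED user=alice src=10.0.0.5",
    "08:01:07 AUTH FAILED user=alice src=10.0.0.5",
    "08:01:11 AUTH OK     user=bob   src=10.0.0.9",
    "08:01:15 AUTH FAILED user=alice src=10.0.0.5",
]
print(flag_suspicious_users(sample))  # {'alice': 3}
```

The point of the real-time approach is that this kind of check runs as events arrive, rather than hours after a batch data transfer completes.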
To try the big iron-to-big data strategy organizations can download a free Ironstream Starter Edition and begin streaming z/OS Syslog data into Splunk solutions. Unlike typical technology trials, the Starter Edition is not time-limited and may be used in production at no charge. This includes access to the Ironstream applications available for download on Splunkbase.
DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:analytics, Big Data, blockchain, IBM, IoT, Ironstream, mainframe, Medical Mutual of Ohio, real-time analytics, Splunk Enterprise, Splunk Enterprise Security, Syncsort, System z, technology, z/OS, z13, z196, zEC12, zEnterprise
April 1, 2016
IBM keeps rolling out new versions of the z System. The latest is the z/OS Platform for Apache Spark, announced earlier this week. The new platform is optimized for marketers, data analysts, and developers eager to apply advanced analytics to the z’s rich, resident data sets for real-time insights.

z/OS Platform for Apache Spark
Data is everything in the new economy; the more and better data you can grab, and the faster you can analyze it, the more likely you are to win. The z, already the center of a large, expansive data environment, is well positioned to drive winning data-fueled strategies.
IBM z/OS Platform for Apache Spark enables Spark, an open-source analytics framework, to run natively on z/OS. According to IBM, the new system is available now. Its key advantage: to enable data scientists to analyze data in place on the system of origin. This eliminates the need to perform extract, transform and load (ETL), a cumbersome, slow, and costly process. Instead, with Spark the z breaks the bind between the analytics library and underlying file system.
Apache Spark provides an open-source cluster computing framework with in-memory processing to speed analytic applications up to 100 times faster compared to other technologies on the market today, according to IBM. Apache Spark can help reduce data interaction complexity, increase processing speed, and enhance mission-critical applications by enabling analytics that deliver deep intelligence. Considered highly versatile in many environments, Apache Spark is best regarded for its ease of use in creating algorithms that extract insight from complex data.
IBM’s goal lies not only in eliminating the overhead of ETL but also in fueling interest in cognitive computing. With cognitive computing, data becomes a fresh natural resource—an almost infinite and forever renewable asset—that can be used by computer systems to understand, reason and learn. To succeed in this cognitive era businesses must be able to develop and capitalize on insights before the insights are no longer relevant. That’s where the z comes in.
With this offering, according to IBM, accelerators from z Systems business partners can help organizations more easily take advantage of z Systems data and capabilities to understand market changes alongside individual client needs. With this kind of insight managers should be able to make the necessary business adjustments in real-time, which will speed time to value and advance cognitive business transformations among IBM customers.
At this point IBM has identified 3 business partners:
- Rocket Software, long a mainframe ISV, is bringing its new Rocket Launchpad solution, which allows z shops to try the platform using data on z/OS.
- DataFactZ is a new partner working with IBM to develop Spark analytics based on Spark SQL and MLlib for data and transactions processed on the mainframe.
- Zementis brings its in-transaction predictive analytics offering for z/OS with a standards-based execution engine for Apache Spark. The product promises to allow users to deploy and execute advanced predictive models that can help them anticipate end users’ needs, compute risk, or detect fraud in real-time at the point of greatest impact, while processing a transaction.
This last point—detecting problems in real time at the point of greatest impact—is really the whole reason for Spark on z/OS. You have to leverage your insight before the prospect makes the buying decision or the criminal gets away with a fraudulent transaction. After that your chances are slim to none of getting a prospect to reverse the decision or to recover stolen goods. Having the data and logic processing online and in-memory on the z gives you the best chance of getting the right answer fast while you can still do something about it.
As IBM also notes, the z/OS Platform for Apache Spark includes Spark open source capabilities consisting of the Apache Spark core, Spark SQL, Spark Streaming, Machine Learning Library (MLlib), and GraphX, combined with the industry’s only mainframe-resident Spark data abstraction solution. The new platform helps enterprises derive insights more efficiently and securely. In the process, the platform can streamline development to speed time to insight and decision, and simplify data access through familiar data access formats and Apache Spark APIs.
Best of all, however, are the in-memory capabilities noted above. Apache Spark uses an in-memory approach for processing data to deliver results quickly. The platform includes data abstraction and integration services that enable z/OS analytics applications to leverage standard Spark APIs. It also allows analysts to collect unstructured data and use their preferred formats and tools to sift through it.
At the same time developers and analysts can take advantage of the familiar tools and programming languages, including Scala, Python, R, and SQL to reduce time to value for actionable insights. Of course all the familiar z/OS data formats are available too: IMS, VSAM, DB2 z/OS, PDSE or SMF along with whatever you get through the Apache Spark APIs.
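Spark itself needs a cluster to demonstrate, but the core "analyze in place" idea, streaming records from the system of origin and aggregating in one pass instead of extracting and loading them elsewhere first, can be sketched in plain Python. The record layout here is invented for illustration:

```python
# Illustrative sketch of in-place, single-pass aggregation versus
# extract-transform-load. Spark on z/OS does this at cluster scale,
# in memory, via its DataFrame/RDD APIs; a plain-Python generator
# stands in for the Spark API here. Record layout is hypothetical.

def transactions():
    """Yield (account, amount) records as if read from the system of origin."""
    data = [("A1", 120.0), ("A2", 75.5), ("A1", 30.0), ("A3", 210.0)]
    yield from data

def total_by_account(records):
    """Aggregate in a single pass -- no intermediate extract/load step."""
    totals = {}
    for account, amount in records:
        totals[account] = totals.get(account, 0.0) + amount
    return totals

print(total_by_account(transactions()))
# {'A1': 150.0, 'A2': 75.5, 'A3': 210.0}
```

The design point mirrors the post’s argument: the data never leaves the system of origin, so there is no ETL window during which insights go stale.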
This year we already have seen the z13s and now the z/OS Platform for Apache Spark. Add to that the z System LinuxONE last year. z-based data centers suddenly have a handful of radically different new mainframes to consider. Can Watson, a POWER-based system, be far behind? Your guess is as good as anyone’s.
DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:analytics, and SQL, Apache, Apache Spark API, Big Data, Cloud, cognitive computing, DataFact Z, DB2 z/OS, ETL, IBM, IBM z/OS, IMS, in-memory processing, Linux, mainframe, mobile, open source, PDSE, Python, R, real-time insight, Rocket Software, Scala, SMF, Spark, system of origin, System z, technology, VSAM, z/OS Platform for Apache Spark, zEC12, Zementis, zEnterprise
March 18, 2016
IBM retained the number 3 spot with 14.1% share for the quarter as revenue increased 8.9% year-over-year to $2.2 billion in 4Q15. More impressively, IBM experienced strong growth for POWER Systems and double-digit growth for its z System mainframes in the quarter, according to IDC. You can check out the IDC announcement here. IDC credits z and POWER for IBM’s strong platform finish in 2015.
zSystem-based LinuxONE
DancingDinosaur expected these results and has been reporting IBM’s z System and POWER Systems successes for the past year. You can check them out here (z13s), here (LinuxONE), and here (Power Systems LC).
Along with deservedly crowing about its latest IDC ranking, IBM added: z Systems saw double-digit growth due to a number of new portfolio enhancements. The next-generation z13 mainframe, optimized for digital businesses and hybrid cloud environments, is designed to handle mobile transactions securely and at scale, while enabling clients to run analytics on the system and in real time. IBM expanded its commitment to offering open source on the mainframe by launching a line of Linux-only systems in August of 2015. LinuxONE is based on the latest generation of z Systems technology and enables popular open-source tools and software on the mainframe. IBM also added what amounts to a Business Class z with the z13s to go along with a Business Class dedicated Linux z, the LinuxONE Rockhopper.
Meanwhile, IBM has started to get some uptake for its Open Mainframe Project. In addition to announcing support from the usual mainframe suspects (IBM, CA, Compuware, SUSE, BMC, and others), it also announced its first projects. These include an effort to find ways to leverage new software and tools in the Linux environment that can better take advantage of the mainframe’s speed, security, scalability, and availability. DancingDinosaur is hoping that in time the Open Mainframe Project will produce the kind of results the Open POWER Foundation has recently generated for the POWER platform.
IBM attributes the growing traction of Linux running on POWER Systems in large part to optimized solutions such as DB2 BLU, SAP HANA, and other industry big data software, built on POWER Systems running Linux. In October 2015, IBM expanded its Linux on Power Systems portfolio with the LC line of servers. These servers are infused with OpenPOWER Foundation technology and bring the higher performance of the POWER CPU to the broad Linux community. The POWER-based LC line along with the z-based LinuxONE Rockhopper should give any data center manager looking to run a large, efficient Linux server farm a highly cost-competitive option that can rival or even beat the x86 option. And given that both platforms will handle Docker containers and microservices and support all of today’s popular development tools there is no reason to stick with x86.
From a platform standpoint, IBM appears to be in sync with what IDC is reporting: Datacenter buildout continues, and the main beneficiary this quarter is the density-optimized segment of the market, where growth easily outpaced the overall server market. Density-optimized servers achieved a 30.2% revenue growth rate this quarter, contributing a full 2 percentage points to the overall 5.2% revenue growth in the market.
“The fourth quarter (of 2015) was a solid close to a strong year of growth in the server market, driven by on premise refresh deployments as well as continued hyperscale cloud deployments,” said Kuba Stolarski, Research Director, Servers and Emerging Technologies at IDC. “As the cyclical refresh of 2015 comes to an end, the market focus has begun to shift towards software-defined infrastructure and hybrid environment management, as organizations begin to transform their IT infrastructure as well as prepare for the compute demands expected over the next few years from next-gen IT domains such as IoT and cognitive analytics. In the short term, 2016 looks to be a year of accelerated cloud infrastructure expansion with existing footprints filling out and new cloud datacenter buildouts across the globe.”
After a seemingly endless string of dismal quarters DancingDinosaur is encouraged by what IBM is doing now with the z, POWER Systems, and its strategic initiatives. With its strategic focus on cloud, mobile, big data analytics, cognitive computing, and IoT as well as its support for the latest approaches to software development, tools, and languages, IBM should be well positioned to continue its platform success in 2016.
DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:analytics, Big Data, Cloud, DB2 BLU, IBM, IBM Power Systems LC line, IDC, Linux, LinuxONE Rockhopper, mainframe, mobile, Open Mainframe Project, Open Power Foundation, OpenStack, Power Systems, POWER8, SAP HANA, System z, technology, x86, zEC12, zEnterprise
February 19, 2016
Earlier this week IBM introduced the z13s, what it calls the world’s most secure server, built for hybrid cloud and sized for mid-sized organizations. The z13s promises better business outcomes, faster decision making, less regulatory exposure, greater scale, and better fraud protection. And at the low end it is accessible to smaller enterprises, maybe those who have never tried a z before.

z13s features embedded cryptography that brings the benefits of the mainframe to mid-sized organizations. Courtesy of IBM
A machine like the low end z13s used to be referred to as a business class (BC) mainframe. IBM declined to quote a price, except to say z13s will go “for about the same price as previous generations for the equivalent capacity.” OK, back in July 2013 IBM published the base price of the zEC12 BC machine at $75,000. IBM made a big deal of that pricing at the time.
The key weasel phrase in IBM’s statement is: “for the equivalent capacity.” Two and a half years ago the $75k zEC12 BC offered significantly more power than its predecessor. Figuring out equivalent capacity today given all the goodies IBM is packing into the new machine, like built-in chip-based cryptography and more, is anybody’s guess. However, given the plummeting costs of IT components over the past two years, you should get it at a base price of $100k or less. If not, call Intel. Adds IBM: The infrastructure costs of z13s are comparable to the Public Cloud infrastructure costs with enterprise support; significant software savings result from core consolidation on the z13s.
But the z13s is not just about price. As digital business becomes a standard practice and transaction volumes increase, especially mobile transaction volumes, the need for increased security becomes paramount. Cybercrime today has shifted: rather than stealing data, criminals are compromising data accuracy and reliability. This is where the z13s’ bolstered built-in security and access to APIs and microservices in a hybrid cloud setting can pay off by keeping data integrity intact.
IBM’s z13s, described as the new entry point to the z Systems portfolio for enterprises of all sizes, is packed with a number of security innovations. (DancingDinosaur considered the IBM LinuxONE Rockhopper the current z entry point, but it is a Linux-only machine.) For z/OS, the z13s will be the entry point. The security innovations include:
- Ability to encrypt sensitive data without compromising transactional throughput and response time through its updated cryptographic and tamper-resistant hardware-accelerated cryptographic coprocessor cards with faster processors and more memory. In short: encryption at twice the speed equates to processing twice as many online or mobile device purchases in the same time, effectively helping to lower the cost per transaction.
- Leverage the z Systems Cyber Security Analytics offering, which delivers an advanced level of threat monitoring based on behavior analytics. Also part of the package, IBM® Security QRadar® security software correlates data from more than 500 sources to help organizations determine if security-related events are simply anomalies or potential threats. The z Systems Cyber Security Analytics service will be available at no charge as a beta offering for z13 and z13s customers.
- IBM Multi-factor Authentication for z/OS (MFA) is now available on z/OS. The solution adds another layer of security by requiring privileged users to enter a second form of identification, such as a PIN or randomly generated token, to gain access to the system. This is the first time MFA has been tightly integrated in the operating system, rather than through an add-on software solution. This level of integration is expected to deliver more streamlined configuration and better stability and performance.
Hybrid computing and hybrid cloud also play a big part in IBM’s latest thinking around z Systems. As IBM explains, hybrid cloud infrastructure offers advantages in flexibility but can also present new vulnerabilities. When paired with z Systems, IBM’s new security solutions can allow clients to establish end-to-end security in their hybrid cloud environment.
Specifically, IBM Security Identity Governance and Intelligence can help prevent inadvertent or malicious internal data loss by governing and auditing access based on known policies while granting access to those who have been cleared as need-to-know users. IBM Security Guardium uses analytics to help ensure data integrity by providing intelligent data monitoring, which tracks users as they access specific data and helps to identify threat sources quickly in the event of a breach. IBM Security zSecure and QRadar use real-time alerts to focus on the identified critical security threats that matter most.
Conventional z System data centers should have no difficulty migrating to the z13 or even the z13s. IBM told DancingDinosaur it will continue to protect a client’s investment in technology with serial number preservation on the IBM z13s. The company also is offering upgrades from the zEnterprise BC12 (zBC12) and from the zEnterprise 114 (z114) to the z13s. Of course, it supports upgradeability within the IBM z13 family; a z13s N20 model can be upgraded to the z13 N30 model. And once the z13s is installed it allows on demand offerings to access temporary or permanent capacity as needed.
DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:analytics, APIs, Big Data, business outcomes, Cloud, cybercrime, decision-making, fraud protection, hybrid computing, IBM, IBM Security Guardium, Linux, LinuxONE Rockhopper, mainframe, microservices, mobile, QRadar, System z, technology, z/OS Multi-factor Authentication, zEC12, zEC12 BC, zEnterprise, zEnterprise 114, zSecure, zSystems Cyber Security Analytics
February 4, 2016
The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.

IBM System z13
This really becomes a software configuration challenge. By tapping approximately 600 internal processors, IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISV.
The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.
Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.
Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.
Some organizations report an average 15% reduction in CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers, with applications optimized for them, is the best way to ensure your workloads are achieving top performance in the most cost-effective way.
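That savings rule of thumb is simple enough to check with a couple of lines, assuming CPU charges scale linearly with CPU time used:

```python
# Back-of-the-envelope COBOL recompile savings: a ~15% CPU-time
# reduction translates directly into lower monthly CPU charges
# if billing scales with CPU time. Figures match the example above.

def monthly_savings(monthly_cpu_bill, cpu_reduction):
    """Estimated monthly savings from a fractional CPU-time reduction."""
    return monthly_cpu_bill * cpu_reduction

print(monthly_savings(1_000_000, 0.15))  # 150000.0
```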
For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on the z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on the zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.
Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Support for JSON, for instance, gives mobile applications easy access to the data and processing they need from business-critical production applications written in COBOL.
The z13 and its z sibling, the latest LinuxONE dedicated Linux models, were designed and optimized from the start for cloud, mobile, and analytics. They are intended to run alongside traditional mainframe workloads, with z/OS or Linux running on the appropriate models.
Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP. ISVs like Compuware should be able to help with this. In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity. Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.
The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.
An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.
DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:analytics, Big Data, CICS, Cloud, Cloudant, COBOL v5.x, compiler, Compuware, Decimal Floating Point Facility, Google Go, hybrid computing, IBM, Java, JSON, Linux, LinuxONE, mainframe, mobile, Open Mainframe Project, OpenStack, price/performance, software, System z, technology, Vector Facility, z13, zAAP, zEC12, zIIP
Posted in Uncategorized | 1 Comment »
October 2, 2015
The last z System that conformed to the expectations of Moore’s Law was the zEC12. IBM could boast that it had the fastest commercial processor available. The subsequent z13 didn’t match it in processor speed. The z13 chip runs a 22 nm core at 5 GHz, a half GHz slower than the zEC12, which ran its 32 nm core at 5.5 GHz. Did you even notice?

In 2007 an IBM scientist holds a 3-D integrated stacked chip
In 2015, the z13 delivers about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even a half GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025.
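A quick back-of-the-envelope calculation puts that daily figure in per-second terms (assuming a perfectly even load, which real traffic never is):

```python
transactions_per_day = 2_500_000_000
seconds_per_day = 24 * 60 * 60           # 86,400
rate = transactions_per_day / seconds_per_day
print(round(rate))                       # roughly 28,935 transactions per second, sustained
```

Nearly 29,000 transactions every second, around the clock, which is why the end-to-end auditability matters as much as the raw rate.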
IBM clearly isn’t bemoaning the decline of Moore’s Law. In fact, it has been looking beyond silicon for the processing of the future. This week it announced a major engineering breakthrough that could accelerate the replacement of silicon transistors with carbon nanotubes to power future computing. The breakthrough allows a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices, essentially opening a path to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional semiconductors. Guess we can stop worrying about Moore’s Law.
Without Moore’s Law, IBM optimized just about everything on the z13 that could be optimized. It provides 320 separate channels dedicated to driving I/O throughput as well as such performance goodies as simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall, about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine. Moore’s Law, in effect, has been bypassed. As much as the industry enjoyed the annual doubling of capacity and the corresponding lower price/performance, it doesn’t need Moore’s Law to meet today’s insatiable demand for processing power.
The company will be doing similar things with the POWER processor. Today we have the POWER8. Coming is the POWER9 followed by the POWER10. The POWER9 reportedly will arrive in 2017 at 14nm, feature a new micro-architecture, and be optimized with CAPI and NVLINK. POWER10, reportedly, arrives around 2020 optimized for extreme analytics.
As IBM explains its latest breakthrough, carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.
The new processor technology, IBM reports, overcomes a major hurdle that silicon and any other semiconductor transistor technologies face when scaling down. In the transistor, two things scale: the channel and its two contacts. As devices become smaller, the increased contact resistance of carbon nanotubes hindered performance gains. The latest development could overcome contact resistance all the way to the 1.8 nanometer node – four technology generations away.
Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling, for example, big data to be analyzed faster, increasing the power and battery life of mobile devices, and allowing cloud data centers to deliver services more efficiently and economically. Even cognitive computing and Internet of Things can benefit.
Until now, vendors have been able to shrink the silicon transistors, but they are approaching a point of physical limitation, which is why Moore’s Law is running out of steam. Previously, IBM demonstrated that carbon nanotube transistors can operate as effective switches at channel dimensions of less than ten nanometers. IBM’s new contact approach overcomes the contact resistance by incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.
As transistors shrink in size, electrical resistance within the contacts increases, which limits performance. To overcome this resistance, IBM researchers gave up traditional contact schemes and created a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunken below 10 nanometers without impacting performance. This brings the industry a step closer to the goal of a carbon nanotube technology within the decade, says IBM.
Let’s hope this works as expected. If not, IBM has other possibilities already in its research labs. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:32nm, 22nm, analytics, Big Data, branch prediction, CAPI, carbon nanotubes, Cloud, hadoop, IBM, mainframe, Moore's Law, nanotube, pipelining, Power Systems, POWER10, POWER8, POWER9, System z, technology, z13, zEC12, zEnterprise
Posted in Uncategorized | 1 Comment »
July 24, 2015
DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.

Strong IBM cloud performance, Q2 2015 (click to enlarge)
As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency). Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent. Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).
It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the Open Power Foundation. A good move in theory, which DancingDinosaur applauds. Still, much depends on the Foundation gaining increased momentum and individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising but how fast products will arrive is unclear. There also is potential for the commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue also is not clear.
Cloud revenue was up more than 70 percent, adjusting for currency and divested businesses, and up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM, this is an encouraging development. The company’s cloud strategy is starting to bear fruit.
The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud in terms of cost and pricing for IT-using organizations is what amounts to a race to the bottom. With pricing immediately apparent and lower pricing just a click away, it has become a feast for the bottom grazers to whom the lowest price is all that matters. For companies like IBM and Oracle, which also has declared cloud a strategic initiative, and other large legacy enterprise platform providers, the challenge is to be competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings in Watson, Bluemix, and others, but can they deliver enough revenue fast enough to offset the reduction in legacy platform revenue? Remember, x86 is off IBM’s menu.
Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin, he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive—which they can be—and will do a better job for many of those workloads, correct again.
Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which seems to be running a slow-motion platform migration off the mainframe that may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”
As noted above, how much revenue Power can generate for IBM depends on how fast the Open Power Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint. There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take and whether the Rackspace initiative will catch on elsewhere.
In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur is expecting a business class z13 in the coming months and more.
DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:BlueMix, Cloud, cloud computing, commoditization, IBM, IBM 2Q 2015 quarterly, Linux, mainframe, mobile, Moore's Law, Open Power Foundation, Oracle, Power platform, Power Systems, POWER8, POWER9, Rackspace, SoftLayer, System z, technology, Watson, zEC12, zEnterprise
Posted in Uncategorized | Leave a Comment »
July 17, 2015
In an announcement last week, IBM effectively extended Moore’s Law for at least another generation of chips, maybe two. This contradicts what leading vendors, including IBM, have been saying for years about the imminent diminishing returns of Moore’s Law, which postulated that chips would double in capacity every 18-24 months. Moore’s Law drove the price/performance curve the industry has been experiencing for the past several decades.
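The compounding implied by Moore’s Law is easy to sketch. Assuming the slower 24-month doubling period and a made-up baseline, a chip’s transistor budget grows 32-fold in a decade:

```python
base_transistors = 1_000_000   # hypothetical starting chip
months_per_doubling = 24       # the slow end of Moore's 18-24 month range
years = 10

doublings = (years * 12) // months_per_doubling   # 5 doublings in a decade
projected = base_transistors * 2 ** doublings
print(projected)   # 32,000,000
```

At the faster 18-month pace the same decade yields roughly 6 to 7 doublings; either way, it is that relentless compounding, not any single generation, that drove the price/performance curve.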

Click to enlarge, courtesy of IBM
The announcement, ironically, coincides with IBM’s completion of the sale of its semiconductor fabrication business to GLOBALFOUNDRIES, which IBM paid to take the costly facilities off its hands. To pull off the 7nm achievement IBM partnered with a handful of players, including a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany.
To achieve the higher performance, lower power, and scaling benefits promised by 7nm technology, the IBM researchers turned to two main innovations: the use of Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels, in effect bypassing conventional semiconductor manufacturing approaches.
Don’t expect to see new systems featuring these 7nm chips very soon. The announcement made no mention of any timetable for producing commercial products based on this technology. As Timothy Prickett Morgan, who writes extensively on IBM POWER Systems technology, observed: the use of silicon germanium for portions of the transistors cuts back on power consumption for the very fast switching necessary for improving circuit performance, and the circuits are etched using extreme ultraviolet (EUV) lithography. These technologies may be difficult and expensive to put into production.
In the meantime, IBM notes that microprocessors utilizing 22nm and 14nm technology run today’s servers, cloud data centers, and mobile devices, and 10nm technology is well on its way to maturity. The 7nm chips promise even more: at least a 50% power/performance improvement for the next generation of mainframe and POWER systems that will fuel the Big Data, cloud, and mobile era, and soon the Internet of Things too.
The z13 delivers unbeatable performance today. With the zEC12, IBM boasted of the fastest commercial chip in the industry, 5.5 GHz on a 32 nm wafer. It did not make that boast with the z13. Instead, the z13 runs a 22 nm core at 5 GHz but still delivers a 40% total capacity improvement over the zEC12.
It does this by optimizing the stack top to bottom, with 600 processors and 320 separate channels dedicated just to driving I/O throughput. The reason for not cranking up the clock speed on the z13, according to IBM, was the plateauing of Moore’s Law. The company couldn’t get enough boost for the tradeoffs it would have had to make. Nobody seems to be complaining about giving up that half GHz. Today the machine can process 2.5 billion transactions a day.
The ride up the Moore’s Law curve has been very enjoyable for all. Companies took the additional processing power to build onto the chip more capabilities that otherwise would have required additional processors. The result: more performance and more capabilities at lower cost. But all good things come to an end.
This 7nm breakthrough doesn’t necessarily restore Moore’s Law. At this point, the best we can guess is that it temporarily moves the price/performance curve to a new plane. Until we know the economics of mass fabrication in the 7nm silicon germanium world, we can’t tell whether we’ll see a doubling as before, or maybe just a half or a quarter, or maybe it could triple. We just don’t know.
For the past decade, Morgan reports, depending on the architecture, the thermal limits of systems imposed a clock speed limit on processors, and aside from some nominal instruction per clock (IPC) improvements with each recent microarchitecture change, clock speeds and performance for a processor stayed more or less flat. This is why vendors went parallel with their CPU architectures, in effect adding cores to expand throughput rather than increasing clock speed to boost performance on a lower number of cores. Some, like IBM, also learned to optimize at every level of the stack. As the z13 demonstrates, lots of little improvements do add up.
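A crude throughput model shows how that go-parallel strategy can beat a faster clock. The core counts below are the machines’ maximum configurable cores; the IPC figures are illustrative assumptions, chosen to reflect the roughly 10 percent per-core improvement IBM cites for the z13:

```python
def throughput(cores, ipc, clock_ghz):
    # crude model: relative capacity = cores x instructions-per-clock x clock
    return cores * ipc * clock_ghz

zec12 = throughput(cores=101, ipc=1.0, clock_ghz=5.5)   # zEC12: 101 cores at 5.5 GHz
z13 = throughput(cores=141, ipc=1.1, clock_ghz=5.0)     # z13: 141 cores at 5.0 GHz

print(round(z13 / zec12, 2))   # about 1.4: roughly 40% more total capacity
```

Under those assumptions, more cores plus a modest IPC bump more than make up for the half-GHz clock deficit, which is consistent with the 40% total capacity improvement claimed for the z13.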
Things won’t stop here. As Morgan observes, IBM Research and the Microelectronics Division were working with GLOBALFOUNDRIES, Samsung, and the chip-making equipment suppliers who collaborate through SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering in nearby Albany to chart a path to 10 nm and then 7 nm processes even as the sale to GLOBALFOUNDRIES was being finalized.
The next step, he suggests, could possibly be at 4 nm but no one is sure if this can be done in a way that is economically feasible. If it can’t, IBM already has previewed the possibility of other materials that show promise.
Moore’s Law has been a wonderful ride for the entire industry. Let’s wish them the best as they aim for ever more powerful processors.
DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
Tags:14 nm, 22nm, 7nm, analytics, Big Data, Cloud, CPU, Extreme Ultraviolet (EUV) lithography, GLOBALFOUNDRIES, IBM, instruction per clock (IPC), Linux, mainframe, mobile, Moore's Law, Power Systems, POWER8, Samsung, Silicon Germanium (SiGe), System z, technology, zEC12, zEnterprise
Posted in Uncategorized | 1 Comment »
June 4, 2015
For years organizations have been putting their analytics on distributed platforms, thinking that was the only way to get fast, real-time, and predictive analytics. Maybe once, but not anymore. It turns out the IBM z System, especially the z13, is not only ideal for real-time, predictive analytics but preferable.
IBM today is so bullish on analytics, especially predictive analytics, that last month it introduced 20 pre-built, industry-specific predictive analytics solutions. To build these solutions IBM tapped its own experience from some 50,000 engagements as well as an array of outside organizations with success in predictive analytics, including Urban Outfitters, National Grid, Deloitte, Bolsa de Santiago, Interactive Data Managed Solutions, and Bendigo and Adelaide Bank, among others.

Courtesy of IBM (click to enlarge)
The truth of the matter is that without efficient real-time, predictive analytics, managers get it wrong most of the time when it comes to making operational decisions, said Paul DiMarzio, IBM z Systems Big Data and Analytics Worldwide Portfolio Marketing Manager. He spoke at IBM Edge2015 in a session titled When Milliseconds Matter: Architecting Real-Time Analytics into Operational Systems. His key point: you can do this completely within the IBM z System.
The old notion of sending data to distributed systems someplace else for analytics now appears ridiculous, especially with the introduction of systems like the z13, which can handle operations and perform real-time analytics concurrently. It performs analytics fast enough that you can make decisions while the action is still going on. Now the only question is whether you have the right business rules and scoring models. The data already are there, and the tools are ready and waiting on the z13.
You start with the IBM SPSS Modeler with Scoring Adapter for zEnterprise. Its real-time predictive analytics capability delivers better, more profitable decisions at the point of customer impact. For business rules, turn to the IBM Operational Decision Manager for z/OS, which codifies business policies, practices, and regulations.
IBM SPSS improves accuracy by scoring directly within the transactional application against the latest committed data. As such, it delivers the performance needed to meet operational SLAs and avoids data governance and security issues, effectively saving network bandwidth, data-copying latency, and disk storage.
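As an illustration of scoring in place, here is a toy logistic model in Python. It is a stand-in for what a scoring adapter does inside the transaction path, not the actual SPSS API, and the features and weights are invented:

```python
import math

def score(features, weights, bias):
    # logistic scoring: weighted sum pushed through a sigmoid
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Score against the latest committed row, inside the transaction path,
# rather than copying the data out to a separate analytics system.
committed_row = [0.8, 1.2, -0.5]
risk = score(committed_row, weights=[0.9, 0.4, 1.1], bias=-0.2)
print(round(risk, 3))   # a probability-like score between 0 and 1
```

The design point is that the model comes to the committed data; no copy of the row ever leaves the system of record.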
In addition to SPSS and the Operational Decision Manager, the z13 brings many capabilities, some new with the z13. For starters, the z13 excels as custodian of the data model, providing an accurate, secure, single copy of information that, according to IBM, ensures the veracity of the data necessary for reliable analytics and provides centralized control over decision information.
Specifically, the machine brings SIMD (single instruction multiple data) and the MASS (mathematical acceleration subsystem) and ATLAS (automatically tuned linear algebra software) libraries for z/OS and Linux on z. SIMD enables the same operation to be performed on several data elements at the same time rather than sequentially. MASS and ATLAS help programmers create better and more complex analytic models.
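The SIMD idea is easy to see in miniature with NumPy, whose vectorized operations map onto exactly this kind of data-parallel instruction on hardware that supports it (the data here is made up):

```python
import numpy as np

prices = [10.0, 12.5, 9.75, 20.0]

# Sequential version: the multiply is applied one element at a time.
taxed_loop = [p * 1.05 for p in prices]

# SIMD-style version: one multiply applied across the whole vector at once.
taxed_simd = np.array(prices) * 1.05

print(np.allclose(taxed_loop, taxed_simd))   # True: same result, fewer instructions
```

Libraries like MASS and ATLAS play an analogous role at a higher level, giving analytic models tuned math routines rather than hand-rolled loops.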
In addition, increases in memory to as much as 10 TB, faster I/O, and simultaneous multithreading (SMT) generally boost overall throughput of the z13, which will surely benefit any analytics run on the machine, especially real-time, predictive analytics. Analytics on the z13 also gains from deep integration with core systems, the integrated architecture, and its single-pane management view.
The latest IBM Redbook on analytics on the z13 sums it up: z Systems analytics enables organizations to improve performance and lower cost by bringing the analytic processing to where the data resides. Organizations can therefore maximize their current IT investments while adding functionality and improved price/performance with the z13. And with the new z13 features, applications can gain increased throughput for operational business intelligence (operational BI) and DB2 query workloads, which saves money (hardware, software, labor).
The Redbook suggests the following example: a user with a mobile application signs on and initiates a transaction flow through an IBM MobileFirst Platform Server running on Linux on z. The event goes to an LDAP server on z/OS to validate the user’s sign-on credentials. After successful validation, the transaction proceeds through the z/OS transaction environment, where all of the data resides in DB2 for z/OS. IBM CICS transactions are processed in the same z environment, and all of the analysis is performed without moving any data, resulting in extremely fast performance. Sweet.
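That flow can be sketched end to end in a few lines of Python. Every function here is a hypothetical stand-in for the real tier (LDAP, DB2, CICS); the sketch only shows the shape of the flow and that the data never leaves the platform:

```python
def ldap_validate(user, password):
    # stand-in for the z/OS LDAP credential check
    return (user, password) == ("mobile_user", "secret")

def db2_lookup(user):
    # stand-in for DB2 for z/OS: the data stays put, no copy shipped elsewhere
    return {"user": user, "balance": 250.00}

def cics_debit(record, amount):
    # stand-in for the CICS transaction running in the same z environment
    record["balance"] -= amount
    return record

def mobile_flow(user, password, amount):
    if not ldap_validate(user, password):
        return None                      # sign-on rejected
    return cics_debit(db2_lookup(user), amount)

result = mobile_flow("mobile_user", "secret", 25.00)
print(result)   # {'user': 'mobile_user', 'balance': 225.0}
```

Each hop stays on-platform, so the analytics and the transaction see the same committed data with no copy latency in between.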
DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.
Tags:analytics, ATLAS, Big Data, CICS, DB2, hadoop, IBM, IBM MobileFirst, IBM Operation Decision Manager for z/OS, IBM SPSS Modeler, Linux, mainframe, MASS, mobile, predictive analytics, SIMD, System z, technology, z13, zEC12, zEnterprise
Posted in Uncategorized | Leave a Comment »