Posts Tagged ‘zEC12’

Latest New Mainframe Offering Puts Apache Spark Native on the z System

April 1, 2016

IBM keeps rolling out new versions of the z System.  The latest is the z/OS Platform for Apache Spark, announced earlier this week. The new offering is optimized for marketers, data analysts, and developers eager to apply advanced analytics to the z’s rich, resident data sets for real-time insights.


z/OS Platform for Apache Spark

Data is everything in the new economy; the more and better data you can grab, and the faster you can analyze it, the more likely you are to win. The z, already the center of a large, expansive data environment, is well positioned to drive winning data-fueled strategies.

IBM z/OS Platform for Apache Spark enables Spark, an open-source analytics framework, to run natively on z/OS. According to IBM, the new system is available now. Its key advantage: it enables data scientists to analyze data in place on the system of origin. This eliminates the need to perform extract, transform, and load (ETL), a cumbersome, slow, and costly process. Instead, with Spark the z breaks the binding between the analytics library and the underlying file system.

Apache Spark provides an open-source cluster computing framework with in-memory processing to speed analytic applications up to 100 times faster than other technologies on the market today, according to IBM. Apache Spark can help reduce data interaction complexity, increase processing speed, and enhance mission-critical applications by enabling analytics that deliver deep intelligence. Considered highly versatile across many environments, Apache Spark is best regarded for its ease of use in creating algorithms that extract insight from complex data.
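
None of the announcement’s examples are public, but the in-memory pattern Spark popularized is easy to sketch. Below is a plain-Python stand-in (no Spark required; the transaction records are invented) for the filter/map/reduce pipeline a Spark job expresses, with every step held in memory rather than staged through a separate ETL system:

```python
from functools import reduce

# In-memory records standing in for data already resident on z/OS.
# (Hypothetical amounts; a Spark RDD/DataFrame would hold these
# partitioned across cluster memory instead of a Python list.)
transactions = [
    {"account": "A1", "amount": 120.0},
    {"account": "A2", "amount": 75.5},
    {"account": "A1", "amount": 300.0},
]

# The Spark-style pipeline: filter, map, reduce -- all in memory,
# with no intermediate extract/transform/load step to another system.
large = filter(lambda t: t["amount"] >= 100.0, transactions)
amounts = map(lambda t: t["amount"], large)
total = reduce(lambda a, b: a + b, amounts, 0.0)

print(total)  # → 420.0
```

Spark’s Scala or Python APIs express exactly this chain of transformations; the point is that the data never leaves memory between steps.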

IBM’s goal lies not merely in eliminating the overhead of ETL but in fueling interest in cognitive computing. With cognitive computing, data becomes a fresh natural resource—an almost infinite and forever renewable asset—that can be used by computer systems to understand, reason, and learn. To succeed in this cognitive era businesses must be able to develop and capitalize on insights before those insights are no longer relevant. That’s where the z comes in.

With this offering, according to IBM, accelerators from z Systems business partners can help organizations more easily take advantage of z Systems data and capabilities to understand market changes alongside individual client needs. With this kind of insight managers should be able to make the necessary business adjustments in real time, which will speed time to value and advance cognitive business transformations among IBM customers.

At this point IBM has identified three business partners:

  1. Rocket Software, long a mainframe ISV, is bringing its new Rocket Launchpad solution, which allows z shops to try the platform using data on z/OS.
  2. DataFactZ is a new partner working with IBM to develop Spark analytics based on Spark SQL and MLlib for data and transactions processed on the mainframe.
  3. Zementis brings its in-transaction predictive analytics offering for z/OS with a standards-based execution engine for Apache Spark. The product promises to allow users to deploy and execute advanced predictive models that can help them anticipate end users’ needs, compute risk, or detect fraud in real time at the point of greatest impact, while processing a transaction.

This last point—detecting problems in real time at the point of greatest impact—is really the whole reason for Spark on z/OS.  You have to leverage your insight before the prospect makes the buying decision or the criminal gets away with a fraudulent transaction. After that, your chances are slim to none of getting a prospect to reverse the decision or of recovering stolen goods. Having the data and logic processing online and in-memory on the z gives you the best chance of getting the right answer fast, while you can still do something about it.

As IBM also notes, the z/OS Platform for Apache Spark includes Spark open-source capabilities consisting of the Apache Spark core, Spark SQL, Spark Streaming, Machine Learning Library (MLlib), and GraphX, combined with the industry’s only mainframe-resident Spark data abstraction solution. The new platform helps enterprises derive insights more efficiently and securely. In the process, the platform can streamline development to speed time to insights and decisions and simplify data access through familiar data access formats and Apache Spark APIs.

Best of all, however, are the in-memory capabilities noted above. Apache Spark uses an in-memory approach for processing data to deliver results quickly. The platform includes data abstraction and integration services that enable z/OS analytics applications to leverage standard Spark APIs.  It also allows analysts to collect unstructured data and use their preferred formats and tools to sift through it.

At the same time, developers and analysts can take advantage of familiar tools and programming languages, including Scala, Python, R, and SQL, to reduce time to value for actionable insights. Of course, all the familiar z/OS data formats are available too: IMS, VSAM, DB2 for z/OS, PDSE, or SMF, along with whatever you get through the Apache Spark APIs.
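
For the SQL-inclined analyst, the attraction is that a Spark SQL query over mainframe-resident data looks like any other SQL. A minimal sketch of that query shape, using Python’s built-in sqlite3 as a stand-in for a Spark SQL session (the table and column names are made up for illustration):

```python
import sqlite3

# Stand-in for a Spark SQL session over z/OS-resident data
# (hypothetical table; in practice the platform's data abstraction
# layer would surface VSAM, IMS, or DB2 data to Spark SQL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?)",
    [("east", 100.0), ("west", 40.0), ("east", 60.0)],
)

# The same aggregate an analyst would submit through Spark SQL.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM payments GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # → [('east', 160.0), ('west', 40.0)]
```

The value proposition is that this query runs where the data lives, rather than after an ETL hop to a separate analytics cluster.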

This year we have already seen the z13s and now the z/OS Platform for Apache Spark. Add to that last year’s z System LinuxONE. z-based data centers suddenly have a handful of radically different new mainframes to consider.  Can Watson, a POWER-based system, be far behind? Your guess is as good as anyone’s.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM z System and Power Systems Score in IDC 4Q 2015 Rankings

March 18, 2016

IBM retained the number 3 spot with 14.1% share for the quarter as revenue increased 8.9% year-over-year to $2.2 billion in 4Q15. More impressively, IBM experienced strong growth for POWER Systems and double-digit growth for its z System mainframes in the quarter, according to IDC. You can check out the IDC announcement here. IDC credits z and POWER for IBM’s strong platform finish in 2015.

z System-based LinuxONE

DancingDinosaur expected these results, having reported IBM’s z System and POWER Systems successes for the past year. You can check them out here (z13s), here (LinuxONE), and here (Power Systems LC).

Along with deservedly crowing about its latest IDC ranking, IBM added:  z Systems saw double-digit growth due to a number of new portfolio enhancements. The next-generation z13 mainframe, optimized for digital businesses and hybrid cloud environments, is designed to handle mobile transactions securely and at scale, while enabling clients to run analytics on the system and in real time. IBM expanded its commitment to open source on the mainframe by launching a line of Linux-only systems in August 2015. LinuxONE is based on the latest generation of z Systems technology and enables popular open-source tools and software on the mainframe. IBM also added what amounts to a business-class z with the z13s to go along with a business-class dedicated Linux z, the LinuxONE Rockhopper.

Meanwhile, IBM has started to get some uptake for its Open Mainframe Project. In addition to announcing support from the usual mainframe suspects—IBM, CA, Compuware, SUSE, BMC, and others—it also announced its first projects. These include an effort to find ways to leverage new software and tools in the Linux environment that can better take advantage of the mainframe’s speed, security, scalability, and availability. DancingDinosaur is hoping that in time the Open Mainframe Project will produce the kind of results the OpenPOWER Foundation has recently generated for the POWER platform.

IBM attributes the growing traction of Linux running on POWER Systems in large part to optimized solutions such as DB2 BLU, SAP HANA, and other industry big data software built on POWER Systems running Linux. In October 2015, IBM expanded its Linux on Power Systems portfolio with the LC line of servers. These servers are infused with OpenPOWER Foundation technology and bring the higher performance of the POWER CPU to the broad Linux community. The POWER-based LC line, along with the z-based LinuxONE Rockhopper, should give any data center manager looking to run a large, efficient Linux server farm a highly cost-competitive option that can rival or even beat the x86 option. And given that both platforms will handle Docker containers and microservices and support all of today’s popular development tools, there is no reason to stick with x86.

From a platform standpoint, IBM appears to be in sync with what IDC is reporting: Datacenter buildout continues, and the main beneficiary this quarter is the density-optimized segment of the market, where growth easily outpaced the overall server market. Density-optimized servers achieved a 30.2% revenue growth rate this quarter, contributing a full 2 percentage points to the overall 5.2% revenue growth in the market.

“The fourth quarter (of 2015) was a solid close to a strong year of growth in the server market, driven by on premise refresh deployments as well as continued hyperscale cloud deployments,” said Kuba Stolarski, Research Director, Servers and Emerging Technologies at IDC. “As the cyclical refresh of 2015 comes to an end, the market focus has begun to shift towards software-defined infrastructure and hybrid environment management, as organizations begin to transform their IT infrastructure as well as prepare for the compute demands expected over the next few years from next-gen IT domains such as IoT and cognitive analytics. In the short term, 2016 looks to be a year of accelerated cloud infrastructure expansion with existing footprints filling out and new cloud datacenter buildouts across the globe.”

After a seemingly endless string of dismal quarters DancingDinosaur is encouraged by what IBM is doing now with the z, POWER Systems, and its strategic initiatives. With its strategic focus on cloud, mobile, big data analytics, cognitive computing, and IoT as well as its support for the latest approaches to software development, tools, and languages, IBM should be well positioned to continue its platform success in 2016.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

New IBM z13s Brings Built-in Encrypted Security to Entry Level

February 19, 2016

Earlier this week IBM introduced the z13s, what it calls the world’s most secure server, built for hybrid cloud and sized for mid-sized organizations.  The z13s promises better business outcomes, faster decision making, less regulatory exposure, greater scale, and better fraud protection. And at the low end it is accessible to smaller enterprises, maybe those who have never tried a z before.


z13s features embedded cryptography that brings the benefits of the mainframe to mid-sized organizations. Courtesy of IBM

A machine like the low-end z13s used to be referred to as a business class (BC) mainframe.  IBM declined to quote a price, except to say the z13s will go “for about the same price as previous generations for the equivalent capacity.”  OK, back in July 2013 IBM published the base price of the zBC12 machine at $75,000. IBM made a big deal of that pricing at the time.

The key weasel phrase in IBM’s statement is: “for the equivalent capacity.”  Two and a half years ago the $75k zBC12 offered significantly more power than its predecessor. Figuring out equivalent capacity today, given all the goodies IBM is packing into the new machine, like built-in chip-based cryptography and more, is anybody’s guess. However, given the plummeting costs of IT components over the past two years, you should get it at a base price of $100k or less. If not, call Intel. Adds IBM: the infrastructure costs of the z13s are comparable to public cloud infrastructure costs with enterprise support; significant software savings result from core consolidation on the z13s.

But the z13s is not just about price. As digital business becomes a standard practice and transaction volumes increase, especially mobile transaction volumes, the need for increased security becomes paramount. Cybercrime today has shifted. Rather than stealing data, criminals are compromising data accuracy and reliability. This is where the z13s’ bolstered built-in security and access to APIs and microservices in a hybrid cloud setting can pay off by keeping data integrity intact.

IBM’s z13s, described as the new entry point to the z Systems portfolio for enterprises of all sizes, is packed with a number of security innovations. (DancingDinosaur considered the IBM LinuxONE Rockhopper the current z entry point, but it is a Linux-only machine.) For z/OS, the z13s will be the entry point. The security innovations include:

  • Ability to encrypt sensitive data without compromising transactional throughput and response time through its updated cryptographic and tamper-resistant hardware-accelerated cryptographic coprocessor cards with faster processors and more memory. In short: encryption at twice the speed equates to processing twice as many online or mobile device purchases in the same time, effectively helping to lower the cost per transaction.
  • Leverage the z Systems Cyber Security Analytics offering, which delivers an advanced level of threat monitoring based on behavior analytics. Also part of the package, IBM® Security QRadar® security software correlates data from more than 500 sources to help organizations determine if security-related events are simply anomalies or potential threats. This z Systems Cyber Security Analytics service will be available at no charge, as a beta offering, for z13 and z13s customers.
  • IBM Multi-factor Authentication for z/OS (MFA) is now available on z/OS. The solution adds another layer of security by requiring privileged users to enter a second form of identification, such as a PIN or randomly generated token, to gain access to the system. This is the first time MFA has been tightly integrated in the operating system, rather than through an add-on software solution. This level of integration is expected to deliver more streamlined configuration and better stability and performance.
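
IBM hasn’t published the internals of MFA for z/OS, but the second factor it describes—a randomly generated, time-limited token—follows a well-known pattern. Here is a generic sketch in Python (RFC 6238 style; the shared secret and parameters are purely illustrative, not IBM’s):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Derive a time-based one-time token (RFC 6238 style)."""
    # The token changes every `timestep` seconds, so a stolen code
    # is useless almost immediately.
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per the RFC
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"

# Server and token device share the secret; access is granted only
# when the password AND the token both check out.
secret = b"illustrative-shared-secret"
token = totp(secret, now=1_000_000_000)  # fixed time for reproducibility
print(token)  # a 6-digit one-time code
```

The integration point IBM emphasizes is that this check happens inside z/OS itself rather than in bolted-on software, which is what should make configuration and performance better.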

Hybrid computing and hybrid cloud also play a big part in IBM’s latest thinking around z Systems. As IBM explains, hybrid cloud infrastructure offers advantages in flexibility but can also present new vulnerabilities. When paired with z Systems, IBM’s new security solutions can allow clients to establish end-to-end security in their hybrid cloud environments.

Specifically, IBM Security Identity Governance and Intelligence can help prevent inadvertent or malicious internal data loss by governing and auditing access based on known policies while granting access to those who have been cleared as need-to-know users. IBM Security Guardium uses analytics to help ensure data integrity by providing intelligent data monitoring, which tracks users as they access specific data and help to identify threat sources quickly in the event of a breach. IBM Security zSecure and QRadar use real-time alerts to focus on the identified critical security threats that matter the most.

Conventional z System data centers should have no difficulty migrating to the z13 or even the z13s.  IBM told DancingDinosaur it will continue to protect a client’s investment in technology with serial number preservation on the IBM z13s.  The company also is offering upgrades from the zEnterprise BC12 (zBC12) and from the zEnterprise 114 (z114) to the z13s.   Of course, it supports upgradeability within the IBM z13 family; a z13s N20 model can be upgraded to the z13 N30 model. And once the z13s is installed it allows on demand offerings to access temporary or permanent capacity as needed.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Exploiting the IBM z13 for Maximum Price/Performance Advantage

February 4, 2016

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.


IBM System z13

This really becomes a software configuration challenge. By tapping approximately 600 internal processors, IBM has already optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISV.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report a 15% average reduction in CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers, with applications optimized for them, is the best way to ensure your workloads achieve top performance in the most cost-effective way.
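
The savings math is worth spelling out; a quick check of the figures cited (illustrative numbers from the text):

```python
# CPU-time reduction reported after migrating hot COBOL programs to V5.x.
monthly_cpu_bill = 1_000_000  # dollars (illustrative figure from the text)
cpu_reduction = 0.15          # 15% average CPU-time reduction

monthly_savings = monthly_cpu_bill * cpu_reduction
print(f"${monthly_savings:,.0f}/month")       # $150,000/month
print(f"${monthly_savings * 12:,.0f}/year")   # $1,800,000/year
```

On usage-based mainframe pricing, a CPU-time cut flows almost directly to the bill, which is why migrating the highest-CPU programs first pays off fastest.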

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on the z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on the zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will give mobile applications easy access to the data and processing they need from business-critical production applications written in COBOL.

The z13 and its z sisters, the latest dedicated-Linux LinuxONE models, were designed and optimized from the start for cloud, mobile, and analytics. These workloads were intended to run alongside traditional mainframe workloads, with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP.  ISVs like Compuware should be able to help with this.  In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity.  Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements, like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM z System After Moore’s Law

October 2, 2015

The last z System that conformed to the expectations of Moore’s Law was the zEC12. IBM could boast that it had the fastest commercial processor available.  The subsequent z13 didn’t match it in processor speed.  The z13 chip runs a 22 nm core at 5 GHz, one-half GHz slower than the zEC12, which ran its 32 nm core at 5.5 GHz. Did you even notice?


In 2007 an IBM scientist holds a 3-D integrated stacked chip

In 2015, the z13 delivers about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even one-half GHz slower, the z13 was the first system to process 2.5 billion transactions a day.  Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025.
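
Those daily totals are easier to appreciate as per-second rates; the conversion is simple arithmetic:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

z13_daily = 2.5e9          # transactions per day the z13 was first to process
mobile_2025_daily = 40e12  # projected mobile transactions per day by 2025

z13_per_second = z13_daily / SECONDS_PER_DAY
mobile_per_second = mobile_2025_daily / SECONDS_PER_DAY

print(round(z13_per_second))     # ~28,935 transactions per second
print(round(mobile_per_second))  # ~463 million per second, industry-wide
```

Sustaining tens of thousands of audited transactions every second is the kind of load the projected mobile growth multiplies by four orders of magnitude.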

IBM clearly isn’t bemoaning the decline of Moore’s Law. In fact, it has been looking beyond silicon for the processing of the future.  This week it announced a major engineering breakthrough that could accelerate the use of carbon nanotubes as replacements for silicon transistors to power future computing. The breakthrough allows a new way to shrink transistor contacts without reducing the performance of carbon nanotube devices, essentially opening a path to dramatically faster, smaller, and more powerful computer chips beyond the capabilities of traditional semiconductors. Guess we can stop worrying about Moore’s Law.

Without Moore’s Law, IBM optimized just about everything on the z13 that could be optimized. It provides 320 separate channels dedicated to driving I/O throughput as well as such performance goodies as simultaneous multithreading (SMT), symmetric multiprocessing (SMP), and single instruction, multiple data (SIMD). Overall, about 600 processors (in addition to your configurable cores) speed and streamline processes throughout the machine. Moore’s Law, in effect, has been bypassed. As much as the industry enjoyed the annual doubling of capacity and the corresponding lower price/performance, it doesn’t need Moore’s Law to meet today’s insatiable demand for processing power.

The company will be doing similar things with the POWER processor. Today we have the POWER8. Coming is the POWER9 followed by the POWER10. The POWER9 reportedly will arrive in 2017 at 14nm, feature a new micro-architecture, and be optimized with CAPI and NVLINK. POWER10, reportedly, arrives around 2020 optimized for extreme analytics.

As IBM explains its latest breakthrough, carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

The new processor technology, IBM reports, overcomes a major hurdle that silicon and any other semiconductor transistor technologies face when scaling down. In the transistor, two things scale: the channel and its two contacts. As devices become smaller, the increased contact resistance of carbon nanotubes hindered performance gains. The latest development could overcome contact resistance all the way to the 1.8 nanometer node – four technology generations away.

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling, for example, big data to be analyzed faster, increasing the power and battery life of mobile devices, and allowing cloud data centers to deliver services more efficiently and economically. Even cognitive computing and Internet of Things can benefit.

Until now, vendors have been able to shrink silicon transistors, but they are approaching a point of physical limitation, which is why Moore’s Law is running out of steam. Previously, IBM demonstrated that carbon nanotube transistors can operate as effective switches at channel dimensions of less than ten nanometers. IBM’s new contact approach overcomes the contact resistance by incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

As transistors shrink in size, electrical resistance within the contacts increases, which limits performance. To overcome this resistance, IBM researchers gave up traditional contact schemes and created a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This end-bonded contact scheme allows the contacts to be shrunken below 10 nanometers without impacting performance. This brings the industry a step closer to the goal of a carbon nanotube technology within the decade, says IBM.

Let’s hope this works as expected. If not, IBM has other possibilities already in its research labs. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

z Systems and Cloud Lead IBM 2Q Results

July 24, 2015

DancingDinosaur generally steers clear of writing about reported quarterly revenue. Given the general focus of this blog on enterprise and cloud computing, however, IBM’s recent 2Q15 report can’t be ignored. Although it continued IBM’s recent string of negative quarterly results, the z and cloud proved to be bright spots.


Strong IBM cloud performance, Q2 2015 (click to enlarge)

As IBM reported on Monday: Revenues from z Systems mainframe server products increased 9 percent compared with the year-ago period (up 15 percent adjusting for currency).  Total delivery of z Systems computing power, as measured in MIPS, increased 24 percent.  Revenues from Power Systems were down 1 percent compared with the 2014 period (up 5 percent adjusting for currency).

It’s not clear when and how Power Systems will come back. IBM has opened up the Power platform through the OpenPOWER Foundation, a good move in theory, which DancingDinosaur applauds. Still, much depends on the Foundation gaining increased momentum and individual members rolling out successful Power-based products. The roadmap for POWER8, POWER9, and beyond looks promising, but how fast products will arrive is unclear. There also is potential for the commoditization of the Power platform, a welcome development in many quarters, but commoditization’s impact on future revenue also is not clear.

Cloud revenue was up more than 70 percent, adjusting for currency and divested businesses; up more than 50 percent as reported, according to IBM. Given that cloud, along with mobile and analytics, has been designated strategic by IBM this is an encouraging development. The company’s cloud strategy is starting to bear fruit.

The big question hanging over every vendor’s cloud strategy is how to make money at it. One of the appealing aspects of the cloud in terms of cost and pricing for IT-using organizations is what amounts to a race to the bottom. With pricing immediately apparent and lower pricing just a click away, it has become a feast for the bottom grazers to whom the lowest price is all that matters. For companies like IBM and Oracle, which also has declared cloud a strategic initiative, and other large legacy enterprise platform providers, the challenge is to be competitive on price while differentiating their offerings in other ways. Clearly IBM has some unique cloud offerings in Watson, Bluemix, and others, but can they deliver enough revenue fast enough to offset the reduction in legacy platform revenue? Remember, x86 is off IBM’s menu.

Timothy Prickett Morgan, who writes frequently about IBM technology, also had plenty to say about IBM’s 2Q15 announcement, as did a zillion other financial and industry analysts. To begin, he noted the irony of IBM promoting cloud computing, primarily an x86 phenomenon, while trying to convince people that Power-based systems are cost competitive—which they can be—and will do a better job for many of those workloads, correct again.

Morgan also makes an interesting point in regard to the z: “IBM doesn’t have to push the System z mainframe so much as keep it on a Moore’s Law curve of its own and keep the price/performance improving to keep those customers in the mainframe fold.” That’s harder than it may seem; DancingDinosaur addressed the Moore’s Law issue last week here. As Morgan notes, with well over $1 trillion in software assets running on the mainframe, the 6,000 or so enterprises that use mainframes are unlikely to move off the platform because of the cost, disruption, and risk such a move would entail. Just ask Union Pacific Railroad, which seems to be doing a slow-motion platform migration off the mainframe that seemingly may never actually end. Morgan concludes: “IBM can count on a certain level of money from the System z line that it just cannot with the Power Systems line.”

As noted above, how much revenue Power can generate for IBM depends on how fast OpenPOWER Foundation members introduce products that expand the market and how many Power processors SoftLayer can absorb as the business unit expands its global footprint.  There also is the question of how many POWER8 servers Rackspace, a much larger cloud provider than SoftLayer, will take and whether the Rackspace initiative will catch on elsewhere.

In any event, IBM’s 2Q15 report showed enough positive momentum to encourage IT platform enthusiasts. For its part, DancingDinosaur is expecting a business class z13 in the coming months and more.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Extends Moore’s Law with First 7nm Test Chip

July 17, 2015

In an announcement last week, IBM effectively extended Moore’s Law for at least another generation of chips, maybe two.  This contradicts what leading vendors, including IBM, have been saying for years about the imminent diminishing returns of Moore’s Law, which postulated that chips would double in capacity every 18-24 months. Moore’s Law drove the price/performance curve the industry has been experiencing for the past several decades.
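
That doubling compounds quickly; here is the capacity multiplier implied over a single decade at each end of the 18-24 month range:

```python
# Capacity multiplier implied by a doubling every `period_months` months.
def moores_gain(period_months, months=120):
    return 2 ** (months / period_months)

print(round(moores_gain(24)))  # 32x over a decade at a 24-month doubling
print(round(moores_gain(18)))  # ~102x at an 18-month doubling
```

That two-orders-of-magnitude swing per decade is the price/performance curve the industry has been riding, and what makes each extra process generation worth fighting for.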


Click to enlarge, courtesy of IBM

The announcement, ironically, coincides with IBM’s completion of the sale of its semiconductor fabrication business to GLOBALFOUNDRIES, which IBM paid to take the costly facilities off its hands. To pull off the 7nm achievement IBM partnered with a handful of players, including a public-private partnership with New York State and a joint development alliance with GLOBALFOUNDRIES, Samsung, and equipment suppliers. The team is based at SUNY Poly’s NanoTech Complex in Albany.

To achieve the higher performance, lower power, and scaling benefits promised by 7nm technology, the IBM researchers turned to two main innovations: the use of silicon germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels, in effect bypassing conventional semiconductor manufacturing approaches.

Don’t expect to see new systems featuring these 7nm chips very soon. The announcement made no mention of any timetable for producing commercial products based on this technology. As Timothy Prickett Morgan, who writes extensively on IBM POWER Systems technology, observed: the use of silicon germanium for portions of the transistors cuts back on power consumption for the very fast switching necessary for improving circuit performance, and the circuits are etched using extreme ultraviolet (EUV) lithography. These technologies may be difficult and expensive to put into production.

In the meantime, IBM notes that microprocessors utilizing 22nm and 14nm technology run today’s servers, cloud data centers, and mobile devices, and 10nm technology is already well on the way to becoming mature. The 7nm chips promise even more: at least a 50% power/performance improvement for the next generation of mainframe and POWER systems that will fuel the big data, cloud, and mobile era, and soon the Internet of Things too.

The z13 delivers unbeatable performance today. With the zEC12 IBM boasted of the fastest commercial chip in the industry, 5.5 GHz on a 32 nm process. It did not make that boast with the z13. Instead the z13 runs 22 nm cores at 5 GHz but still delivers a 40% total capacity improvement over the zEC12.

It does this by optimizing the stack top to bottom with 600 processors and 320 separate channels dedicated just to drive I/O throughput. The reason for not cranking up the clock speed on the z13, according to IBM, was the plateauing of Moore’s Law. The company couldn’t get enough boost for the tradeoffs it would have had to make. Nobody seems to be complaining about giving up that one-half GHz. Today the machine can process 2.5 billion transactions a day.
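That daily figure is easier to grasp as a sustained rate (simple arithmetic, assuming the load were spread evenly across the day):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def sustained_rate(daily_volume):
    # Average transactions per second if volume were spread evenly.
    return daily_volume / SECONDS_PER_DAY

# 2.5 billion transactions a day works out to roughly 29,000 every
# second, around the clock, with peaks necessarily higher than that.
assert round(sustained_rate(2.5e9)) == 28935
```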

The ride up the Moore’s Law curve has been very enjoyable for all. Companies took the additional processing power to build onto the chip more capabilities that otherwise would have required additional processors.  The result: more performance and more capabilities at lower cost. But all good things come to an end.

This 7nm breakthrough doesn’t necessarily restore Moore’s Law. At this point, the best we can guess is that it temporarily moves the price/performance curve to a new plane. Until we know the economics of mass fabrication in the 7nm silicon germanium world, we can’t tell whether we’ll see a doubling as before, just a half or a quarter of that, or maybe even a tripling. We just don’t know.

For the past decade, Morgan reports, depending on the architecture, the thermal limits of systems imposed a clock speed ceiling on processors, and aside from some nominal instruction per clock (IPC) improvements with each recent microarchitecture change, clock speeds and performance for a processor stayed more or less flat. This is why vendors went parallel with their CPU architectures, in effect adding cores to expand throughput rather than increasing clock speed to boost performance on a smaller number of cores. Some, like IBM, also learned to optimize at every level of the stack. As the z13 demonstrates, lots of little improvements do add up.
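The core-scaling math behind that tradeoff is simple enough to sketch (the figures below are hypothetical for illustration, not actual IBM specs):

```python
def peak_throughput(clock_ghz, ipc, cores):
    # Peak instruction throughput in billions of instructions/sec:
    # clock rate x instructions-per-clock x core count.
    return clock_ghz * ipc * cores

# With clock speed and IPC pinned by thermal limits, adding cores is
# the remaining lever: doubling cores doubles peak throughput.
flat_clock, flat_ipc = 5.0, 2.0
assert peak_throughput(flat_clock, flat_ipc, 16) == 2 * peak_throughput(flat_clock, flat_ipc, 8)

# A small clock sacrifice is easily repaid in aggregate: 5.0 GHz
# across 12 cores out-throughputs 5.5 GHz across 8.
assert peak_throughput(5.0, flat_ipc, 12) > peak_throughput(5.5, flat_ipc, 8)
```

This is the arithmetic behind giving up that one-half GHz: aggregate throughput, not single-core speed, is what the design optimizes.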

Things won’t stop here. As Morgan observes, IBM Research and the Microelectronics Division were working with GLOBALFOUNDRIES and Samsung and chip-making equipment suppliers who collaborate through the SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering in nearby Albany to get a path to 10 nm and then 7 nm processes even as the sale of GLOBALFOUNDRIES was being finalized.

The next step, he suggests, could possibly be at 4 nm but no one is sure if this can be done in a way that is economically feasible. If it can’t, IBM already has previewed the possibility of other materials that show promise.

Moore’s Law has been a wonderful ride for the entire industry. Let’s wish them the best as they aim for ever more powerful processors.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Real Time Analytics on the IBM z13

June 4, 2015

For years organizations have been putting their analytics on distributed platforms, thinking that was the only way to get fast, real-time, predictive analytics. Maybe that was true once, but not anymore. Turns out the IBM z System, especially the z13, is not only well suited to real-time, predictive analytics but preferable for it.

IBM today is so bullish on analytics, especially predictive analytics, that last month it introduced 20 pre-built industry-specific predictive analytics solutions. To build these solutions IBM tapped its own experience working on 50,000 engagements but also an array of outside organizations with success in predictive analytics, including Urban Outfitters, National Grid, Deloitte, Bolsa de Santiago, Interactive Data Managed Solutions, and Bendigo and Adelaide Bank, among others.


Courtesy of IBM (click to enlarge)

The truth of the matter is that without efficient real-time, predictive analytics, managers get it wrong most of the time when it comes to making operational decisions, said Paul DiMarzio, IBM z Systems Big Data and Analytics Worldwide Portfolio Marketing Manager. He spoke at IBM Edge2015 in a session titled When Milliseconds Matter: Architecting Real-Time Analytics into Operational Systems. His key point: you can do this completely within the IBM z System.

The old notion of sending data to distributed systems someplace else for analytics now appears ridiculous, especially with the introduction of systems like the z13 that can handle operations and perform real time analytics concurrently. It performs analytics fast enough that you can make decisions when the action is still going on. Now the only question is whether we have the right business rules and scoring models. The data already are there and the tools are ready and waiting on the z13.

You start with the IBM SPSS Modeler with Scoring Adapter for zEnterprise. The real time predictive analytics capability delivers better, more profitable decisions at the point of customer impact. For business rules just turn to the IBM Operational Decision Manager for z/OS, which codifies business policies, practices, and regulations.

IBM SPSS improves accuracy by scoring directly within the transactional application against the latest committed data. As such it delivers the performance needed to meet operational SLAs while avoiding data governance and security issues, effectively saving network bandwidth, data copying latency, and disk storage.

In addition to SPSS and the Operational Decision Manager, the z13 brings many capabilities, some new with this machine. For starters, the z13 excels as a custodian of the data model, providing an accurate, secure, single copy of information that, according to IBM, ensures the veracity of the data necessary for reliable analytics and provides centralized control over decision information.

Specifically, the machine brings SIMD (single instruction multiple data) and the MASS (mathematical acceleration subsystem) and ATLAS (automatically tuned linear algebra software) libraries for z/OS and Linux on z. SIMD enables the same operation to be performed on several data elements at the same time rather than sequentially. MASS and ATLAS help programmers create better and more complex analytic models.
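The SIMD idea can be sketched in plain Python (a toy model of the semantics only; the lane width and function names are illustrative, and real vector hardware like the z13’s executes each lane as a single instruction):

```python
def scalar_add(a, b):
    # Sequential model: one add per "instruction," element by element.
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b, width=4):
    # SIMD model: each "instruction" consumes a whole lane of `width`
    # elements at once. Returns the result plus the instruction count,
    # which is what shrinks on real vector hardware.
    out, instructions = [], 0
    for i in range(0, len(a), width):
        out.extend(x + y for x, y in zip(a[i:i + width], b[i:i + width]))
        instructions += 1  # one vector instruction per lane
    return out, instructions

values = list(range(8))
result, issued = simd_add(values, values)
assert result == scalar_add(values, values)  # same answer either way
assert issued == 2                           # 8 elements / 4-wide lanes
```

Same arithmetic, a quarter of the instructions issued, which is the whole point of applying one operation to several data elements at once.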

In addition, increases in memory to as much as 10 TB, faster I/O, and simultaneous multi-threading (SMT) generally boost overall throughput of the z13, which will surely benefit any analytics being run on the machine, especially real time, predictive analytics.  In addition, analytics on the z13 gains from deep integration with core systems, the integrated architecture, and its single pane management view.

The latest IBM Redbook on analytics on the z13 sums it up as such: z Systems analytics enables organizations to improve performance and lower cost by bringing the analytic processing to where the data resides. Organizations can therefore maximize their current IT investments while adding functionality and improved price/performance with the z13. And with the new z13 features, applications can gain increased throughput for operational business intelligence (operational BI) and DB2 query workloads, which saves money (hardware, software, labor).

The Redbook suggests the following example: a user with a mobile application signs on and initiates a transaction flow through an IBM MobileFirst Platform Server running on Linux on z. The event goes to an LDAP server on z/OS to validate the user’s sign-on credentials. After successful validation, the transaction proceeds through the z/OS transaction environment, where all of the data resides in DB2 for z/OS. IBM CICS transactions also are processed in the same z environment, and all of the analysis is performed without moving any data, resulting in extremely fast performance. Sweet.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Storage Looms Large at IBMEdge 2015

April 17, 2015

Been a busy year in storage with software defined storage (SDS), real-time compression, flash, storage virtualization, OpenStack, and more all gaining traction. Similarly, big data, analytics, cloud, and mobile are impacting storage. You can expect to find them and more at IBM Edge2015, coming May 10-15 in Las Vegas.

But storage continues to make news every week. Recently IBM scientists demonstrated an areal recording density breakthrough, hitting 123 billion bits of uncompressed data per square inch on low-cost particulate magnetic tape. That translates into the equivalent of a 220 terabyte tape cartridge that could fit in the palm of your hand, comparable to 1.37 trillion mobile text messages or the text of 220 million books, which would require a 2,200 km bookshelf spanning from Las Vegas to Houston, Texas (see graphic below).


Courtesy of IBM (click to enlarge)

Let’s take a look at some sessions delving into the current hot storage topics at Edge2015, starting with tape, since we’ve been talking about it.

(sSS1335) The Future of Tape; presenter Mark Lantz. He discusses current and future scaling trends of magnetic tape technology—see announcement above—from the perspective of IBM Research. He begins by comparing recent scaling trends of both tape and hard disk drive technology. He then looks at the future capacity scaling potential of tape and hard disks. In that context he offers an in-depth look at a new world-record tape areal density demonstration of more than 100 Gb/in2, performed by IBM Research in collaboration with Fujifilm using low-cost particulate tape media. He also discusses the new hardware and tape media technologies developed for this demonstration as well as key challenges for the continued scaling of tape.

If you are thinking future, check out this session too. (sBA2523) Part III: A Peek into the Future; presenter Bruce Hillsberg. This session looks at novel and innovative technologies that address clients’ most challenging technical and business problems across a wide range of technologies and disciplines. The presentation covers everything from the most fundamental materials level all the way to the world’s largest big data problems. Many of the technologies developed by the Storage Systems research team lead to new IBM products or become new features in existing products. Topics covered in this lecture include atomic-scale storage, research into new materials, advances in current storage media, advanced object stores, cloud storage, and more.

Combine big data, flash, and the z13 all here. (sBA1952) How System z13 and IBM DS8870 Flash Technology Enables Your Hadoop Environments; presenter Renan Ugalde. Analyzing large amounts of data introduces challenges that can impact the goals of any organization. Companies require a reliable and high-performing infrastructure to extract value from their structured and unstructured data. The unique features offered by the integration of IBM System z13 and DS8870 flash technology enable a platform to support real-time decisions such as fraud detection. This session explains how integration among System z13, DS8870, and Hadoop maximizes performance by enabling the infrastructure’s unique big data capabilities.

Jon Toigo is an outstanding non-IBM presenter and somewhat of an iconoclast when it comes to storage. This year he is offering a 3-part session on Disaster Recovery Planning in an Era of Mobile Computing and Big Data:

  • (aBA2511) Part I: For all the hype around hypervisor-based computing and new software-defined infrastructure models, the ongoing need for disaster preparedness is often being buried in the discussion. High availability server clustering is increasingly believed to trump disaster recovery preparations, despite the fact that the transition to an agile data center is fraught with disaster potentials. In the first of three sessions, Toigo looks at the trends that are occurring in IT and the potential they present for disruption.
  • (sBA2512) Part II: builds on the previous session by examining the technologies available for data protection and the trend away from backups in favor of real-time mirroring and replication. He notes promising approaches, including storage virtualization and object storage, that can make a meaningful contribution.
  • (sBA2513) Part III: completes his disaster recovery planning series with the use of mobile computing technologies and public clouds as an adjunct to successful business recovery following an unplanned interruption event. Here he discusses techniques and technologies that either show promise as recovery expediters or may place businesses at risk of an epic fail.

Several SDS sessions follow: (sSS0884) Software Defined Storage — Why? What? How? Presenter: Tony Pearson. Here Pearson explains why companies are excited about SDS, what storage products and solutions IBM has to offer, and how they are deployed. This session provides an overview of the new IBM Spectrum Storage family of offerings.

A second session by Pearson: (sCV3179) IBM Spectrum Storage Integration in IBM Cloud Manager with OpenStack: IBM’s Cloud Storage Options; presenter Tony Pearson. This session will look at the value of IBM storage products in the cloud with a focus on OpenStack. Specifically, it will look at how Spectrum Virtualize can be integrated and used in a complete 3-tier app with OpenStack.

Finally, (sSS2453) Myth Busting Software Defined Storage – Top 7 Misconceptions; presenter Jeffrey Barnett. This session looks at the top misconceptions to cut through the hype and understand the real value potential. DancingDinosaur could only come up with six misconceptions. Will have to check out this session for sure.

Of course, save time for the free evening entertainment. In addition to Penn & Teller, a pair of magicians, and rocker Grace Potter, there also will be a weird but terrific group, 2Cellos. Stick with it to the end (about 3 min.) for the kicker.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015. You will find me hanging out wherever people gather around available power outlets to recharge mobile devices.

IBM Edge 2015 as All Platforms Infrastructure Innovation Conference

April 2, 2015

Please join DancingDinosaur at IBM Edge2015 at the Venetian in Las Vegas, May 10-15. It will consist of an Executive Edge track and a Technical track. The program is crammed with hundreds of sessions.  You can find the Technical track session list here. Dare you to find even 10 sessions that don’t interest you.


Courtesy of IBM: Marie Wieck, General Manager, Middleware (click to enlarge)

This year Edge2015 merges last year’s two events, IBMEdge and Enterprise 2014, into what IBM calls the Infrastructure Innovation Conference. It is IBM’s only US event covering all IBM platforms—System Storage, IBM z Systems, IBM Power Systems, and IBM Middleware at a single venue.  It includes three Technical Universities: System Storage, z Systems, and Power Systems for those working toward certification.

Executive Edge, which this post will look at a little more closely below, offers an exclusive two-day summit for IT executives and leaders featuring, as IBM explains, the latest innovations and technology announcements, client success stories, and insightful presentations from IBM executives and industry thought leaders. Plus, IBM promises top-tier, one-on-one executive meetings and exclusive networking opportunities.

The IBM Systems and Technology Group (STG) top brass will be there. The IBM Systems lineup includes: Tom Rosamilia, Senior Vice President; Stephen Leonard, General Manager, Sales; Jamie M. Thomas, General Manager, IBM Storage & Software Defined Systems; Ross Mauri, General Manager, z Systems; Doug Balog, General Manager, Power Systems; and Marie Wieck, General Manager, Middleware.

And then there is the free entertainment IBM provides. The headliner act is Penn & Teller, a pair of magicians. DancingDinosaur’s favorite, however, is Grace Potter, who delivers terrific hard rock and roll. If you skip the casinos you can catch both and avoid losing money in the process.

With the Executive track IBM promises to present its most innovative approaches to using IBM Systems and Middleware as a foundation for challenging new areas of information management including:

  • Cloud Infrastructure, especially hybrid clouds
  • Big Data and Analytics
  • Systems of Record
  • Systems of Engagement
  • Mobile and Security Services
  • Flash and Storage Virtualization
  • Software Defined Infrastructure

Cloud and big data/analytics have become accepted pillars of IT business value. Mobile, flash, and software-defined infrastructure are being widely embraced as the next wave of IT value delivery. And security must be a priority for everything. Also included will be dozens of client case studies.

Throughout both the Executive and Technology tracks there will be numerous sessions citing client cases and use cases. Although not the same thing, both show how to actually deploy technology for business value.

For example, the session (cCV0821), titled Be Hybrid or Die, revolves around hybrid clouds. The session promises a clear understanding of the term hybrid and why hybrid has become the next step in IT value creation, extraction, and efficiency gains. Citing use cases, the session will show how to map your business needs to the functional building blocks of hybrid clouds and to the IBM products that address those needs. It concludes by examining where IBM is investing, its long-term view, and how this will increase your IT flexibility.

Speaking of cases, a session (cIT0514) titled How to Create Rock Solid Business Cases to Get IT Projects Approved looks at the subject from the standpoint of the IT person charged with answering the unavoidable ROI question. BTW, DancingDinosaur develops business cases based on various looks at cost of ownership, should you need help. They are time-consuming but necessary. Management requires an understanding of current IT costs and strengths and the expected payback to better assess new ideas and technologies. This session identifies the key elements of an effective IT cost analysis and explores how to build compelling business cases around those costs and, you hope, quantifiable benefits. Concepts discussed include complexity analysis, platform assessment, Fit for Purpose analysis, and financial case structures. Hmmm, definitely one DancingDinosaur will attend.

Another session looks at the first customer experiences using SAP HANA on Power. SAP HANA is the company’s in-memory, column-oriented RDBMS that handles both high volume transactions and complex analytical query processing on the same platform, and does so very fast since all is in-memory. The session, (lBA0464) titled SAP HANA on Power Systems: First Experiences from Customers in the Test and Evaluation Program, reports on the first experiences gathered from the pilot clients. This presentation begins with a short overview of SAP HANA in general, and then covers specific aspects in the deployment of SAP HANA on IBM Power Systems and IBM storage. You’ll hear about the advantages of SAP HANA on Power Systems (vs. x86) and discover how fast and easy it is to implement in a private cloud with full use of PowerVM capabilities.

In about six weeks DancingDinosaur will be heading to IBM Edge2015. Please join me there. You can find me hanging out wherever people gather around available power outlets to recharge mobile devices. DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here. And join DancingDinosaur at IBM Edge2015.

