Posts Tagged ‘analytics’

Meet the new IBM LinuxONE Emperor II

September 15, 2017

Early this week IBM introduced the newest generation of the LinuxONE, the IBM LinuxONE Emperor II, built on the same technology as the IBM z14, which DancingDinosaur covered on July 19. The key feature of the new LinuxONE Emperor II is IBM Secure Service Container, presented as an exclusive LinuxONE technology representing a significant leap forward in data privacy and security capabilities. With the z14 the key capability was pervasive encryption. This time the Emperor II promises very high levels of security and data privacy assurance while rapidly addressing unpredictable data and transaction growth. Didn’t we just hear a story like this a few weeks ago?

IBM LinuxONE Emperor (not II)

Through the IBM Secure Service Container, for the first time data can be protected against internal threats at the system level from users with elevated credentials or hackers who obtain a user’s credentials, as well as external threats. Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container to be ready for Secure Service Container deployment. The application can be managed using the Docker and Kubernetes tools that are included to make Secure Service Container environments easy to deploy and use.
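To make that concrete, here is a minimal sketch using the standard Docker SDK for Python. The application, image name, and port are invented for illustration; the point, per IBM’s claim, is that nothing z-specific goes into the code, and deploying the same image inside a Secure Service Container environment is an operational step, not a code change.

    # Minimal sketch, assuming the standard Docker SDK for Python (pip install docker).
    # The app, image tag, and port are hypothetical placeholders.
    import docker

    client = docker.from_env()

    # Build the unmodified application image from its ordinary Dockerfile...
    image, _ = client.images.build(path=".", tag="myapp:1.0")  # hypothetical app

    # ...and run it exactly as on any other Docker host; the same image is what
    # would be deployed into a Secure Service Container environment.
    container = client.containers.run("myapp:1.0", detach=True,
                                      ports={"8080/tcp": 8080})
    print(container.short_id, container.status)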

The Emperor II, like the LinuxONE family overall, is being positioned as the premier Linux system for highly secured data serving. To that end, it promises:

  • Ultimate workload isolation and pervasive encryption through Secure Service Containers (announced as a Statement of Direction, SoD)
  • Encryption of data at rest without application change and with better performance than x86
  • Protection of data in flight over the network with full end-to-end network security
  • Use of Protected Keys to secure data without giving up performance
  • Industry-leading secure Java performance via TLS (2-3x faster than Intel)

With the z14 you got this too, maybe worded slightly differently.

In terms of performance and scalability, IBM promises:

  • Industry-leading performance of Java workloads, up to 50% faster than Intel
  • Vertical scale to 170 cores, equivalent to hundreds of x86 cores
  • Simplification to make the most of your Linux skill base and speed time to value
  • SIMD to accelerate analytics workloads & decimal compute (critical to financial applications)
  • Pause-less garbage collection to enable vertical scaling while maintaining predictable performance

Like the z14, the Emperor II also lays a foundation for data serving and next gen apps, specifically:

  • Adds performance and security to new open source DBaaS deployments
  • Develops new blockchain applications based on the proven IBM Blockchain Platform—in terms of security, blockchain may prove more valuable than even secure containers or pervasive encryption
  • Support for data-in-memory applications and new workloads using 32 TB of memory—that’s enough to run production databases entirely in memory (of course, you’ll have to figure out if the increased performance, which should be significant, is worth the extra memory cost)
  • A build-your-cloud approach for providers wanting a secure, scalable, open source platform

If you haven’t figured it out yet, IBM sees itself in a titanic struggle with Intel’s x86 platform.  With the LinuxONE Emperor II IBM senses it can gain the upper hand with certain workloads. Specifically:

  • EAL 5+ isolation, best in class crypto key protection, and Secure Service Containers
  • 640 Power cores in its I/O channels (that aren’t included in the core count) giving the platform the best I/O capacity and performance in the industry
  • Its shared-memory, vertical-scale architecture is measurably better for stateful workloads like databases and systems of record
  • The LinuxONE/z14 hardware is designed to still give good response time at up to 100% utilization, which simplifies the solution and reduces the extra capacity costs many data centers assume are necessary because they’re used to 50% utilization
  • The Emperor II can be ordered in a configuration designed and tested for earthquake resistance
  • The z-based LinuxONE infrastructure has survived fire and flood scenarios where all other server infrastructures have failed

That doesn’t mean, however, the Emperor II is a Linux no-brainer, even for shops facing pressure around security compliance, never-fail mission-critical performance, high capacity, and high performance. Change is hard, and there remains a cultural mindset based on the lingering myth of the cheap PC of decades ago.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Syncsort Finds New Corporate Home and Friend

September 8, 2017

Centerbridge Partners, L.P., a private investment firm, completed the $1.26 billion acquisition of enterprise software providers Syncsort Incorporated and Vision Solutions, Inc. from affiliates of Clearlake Capital Group, L.P. Clearlake, which acquired Syncsort in 2015 and Vision in 2016, will retain a minority ownership stake in the combined company.

Syncsort is a provider of enterprise software and a player in Big Iron to Big Data solutions. DancingDinosaur has covered it here and here. According to the company, customers in more than 85 countries rely on Syncsort to move and transform mission-critical data and workloads. Vision Solutions provides business resilience tools addressing high availability, disaster recovery, migration, and data sharing for IBM Power Systems.

The company apparently hasn’t suffered from being passed between owners. Syncsort has been active in tech acquisitions for the past two years as it builds its data transformation footprint. Just a couple of weeks ago, it acquired Metron, a provider of cross-platform capacity management software and services. Metron’s signature athene solution delivers trend-based forecasting, capacity modeling, and planning capabilities that enable enterprises to optimize their data infrastructure to improve performance and control costs on premises or in the cloud.

This acquisition is the first since the announcement that Syncsort and Vision Solutions are combining, adding expertise and proven leadership in the IBM i and AIX Power Systems platforms and reinforcing the company’s ‘Big Iron to Big Data’ focus. Syncsort is also a long-established player in the mainframe business. Big Iron to Big Data promises to be a fast-growing market segment comprised of solutions that optimize traditional data systems and deliver mission-critical data from these systems to next-generation analytic environments using innovative Big Data technologies. Metron’s solutions and expertise are expected to contribute to the company’s data infrastructure optimization portfolio.

Syncsort has been on a roll since late 2016 when, backed by Clearlake, it acquired Trillium Software, a global provider of data quality solutions. The acquisition of Trillium was the largest in Syncsort’s history at the time, bringing together data quality and data integration technology for enterprise environments. The combination of Syncsort and Trillium, according to the company, enables enterprises to harness all their valuable data assets for greater business insights, applying high-performance, scalable data movement, transformation, profiling, and quality across traditional data management technology stacks as well as Hadoop and cloud environments.

Specifically, Syncsort and Trillium both have a substantial number of large enterprise customers seeking to generate new insights by combining traditional corporate data with diverse information sources from mobile, online, social, and the Internet of Things. Syncsort expects these organizations to continue to rely heavily on next-generation analytic capabilities, creating a growing need for its best-in-class data integration and quality solutions to make their Big Data initiatives successful. Together, Syncsort and Trillium will continue to focus on providing customers with these capabilities for traditional environments, while leading the industry in delivering them for Hadoop and Spark too.

Earlier this year Syncsort integrated its own Big Data integration solution, DMX-h, with Cloudera Director, enabling organizations to easily deploy DMX-h along with Cloudera Enterprise on Amazon Web Services, Microsoft Azure, or Google Cloud. By deploying DMX-h with CDH, organizations can quickly pull data into new, ready-to-work clusters in the cloud—accelerating the time to capture cloud benefits, including cost savings and Data-as-a-Service (DaaS) delivery.

“As organizations liberate data from across the enterprise and deliver it into the cloud, they are looking for a self-service, elastic experience that’s easy to deploy and manage. This is a requirement for a variety of use cases – from data archiving to analytics that combine data originating in the cloud with on premise reference data,” said Tendü Yoğurtçu, Chief Technology Officer.

“By integrating DMX-h with Cloudera Director,” Yoğurtçu continued, “DMX-h is instantly available and ready to put enterprise data to work in newly activated cloud clusters.”

Syncsort DMX-h pulls enterprise data into Hadoop in the cloud and prepares that data for business workloads using native Hadoop frameworks, Apache Spark, or MapReduce, effectively enabling IT to achieve time-to-value goals and quickly deliver business insights.

It is always encouraging to see the mainframe eco-system continue to thrive. IBM’s own performance over the past few years has been anything but encouraging.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New Software Pricing for IBM Z

July 27, 2017

One of the often overlooked benefits of the introduction of a new mainframe like the Z is cost savings. Even though the machine may cost more, the performance and capabilities it delivers typically cost less on a per-unit basis. In the case of the new Z, it’s not just a modest drop in price/performance: IBM announced three new Container Pricing models for IBM Z, providing greatly simplified software pricing that promises flexible deployment with competitive economics vs. public clouds and on-premises x86 environments.

Working on the new IBM Z

Here are the three biggest software pricing changes:

  • Predictable and Transparent Container Pricing—providing organizations greatly simplified software pricing that combines flexible deployment with competitive economics vs. public clouds and on-premises x86 environments. To IBM, a container can be any address space, however large or small, and you can have any number of them. “Container Pricing provides collocated workloads with line-of-sight pricing to a solution,” explained Ray Jones, VP, IBM Z Software and Hybrid Cloud. With container pricing, Jones continued, “the client determines where to deploy using WLM, z/OS and SCRT do the rest.”
  • Application dev and test—highly competitive stand-alone pricing for z/OS-based development and test workloads. Organizations can increase their DevTest capacity up to three times at no additional MLC cost, based on the organization’s existing DevTest workload size. Alternatively, a company can choose the multiplier it wants and set the reference point for both MLC and OTC software.
  • Payment systems pricing—based on the business metric of payments volume a bank processes, not the available capacity. This gives organizations much greater flexibility to innovate affordably in a competitive environment, particularly in the fast-growing Instant Payment segment. To use the new per-payment pricing, Jones added, up-front licensing of IBM Financial Transaction Manager (FTM) software is required.

The Container Pricing options are designed to give clients the predictability and transparency they require for their business. The pricing models are scalable both within and across logical partitions (LPARs) and deliver greatly enhanced metering, capping, and billing capabilities. Container Pricing for IBM Z is planned to be available by year-end 2017, enabled in z/OS V2.2 and z/OS V2.3.

Jones introduced the software discounts by reiterating that this was focused on software container pricing for IBM Z and promised that there will be a technology software benefit with the z14 as there was with the z13. IBM, he added, will offer a way to migrate to the new pricing: “This is a beginning of a new beginning. Clearly as we go forward we want to expand what’s applicable to container pricing.” His clear implication: IBM is intent on expanding the discounting it started several years ago, when it introduced discounts for mobile transactions running on the z, whose skyrocketing volume was driving up monthly software cost averages.

To understand the latest changes you need to appreciate what IBM means by container. This is not just about Docker containers. A container to IBM simply is an address space. An organization can run multiple containers in a logical partition, create as many containers as it wants, and change the size of containers as needed.

The fundamental advantage of IBM’s container pricing is that it enables co-location of workloads to get improved performance and reduced latency, thus IBM’s repeated references to line-of-sight pricing. In short, this is about MLC (4hr rolling average) pricing. The new pricing removes what goes on inside the container from consideration. The price of the container is just that: the price of the container. It won’t impact the 4hr rolling average, resulting in very predictable pricing.
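For readers who haven’t lived with sub-capacity pricing, here is a toy sketch of the 4hr rolling average arithmetic in Python. SCRT computes the real number from SMF data; the MSU samples below are invented.

    # Toy sketch of the 4-hour rolling average (4HRA) arithmetic behind MLC
    # sub-capacity pricing. SCRT does the real work; sample values are invented.
    from collections import deque

    def peak_4hra(msu_samples, samples_per_hour=12):
        """Peak 4-hour rolling average over 5-minute MSU samples."""
        window = deque(maxlen=4 * samples_per_hour)  # 48 five-minute intervals
        peak = 0.0
        for msu in msu_samples:
            window.append(msu)
            if len(window) == window.maxlen:         # only full 4-hour windows
                peak = max(peak, sum(window) / window.maxlen)
        return peak

    # A steady 100 MSU box with a one-hour spike to 400 MSU: the rolling
    # average dampens the spike, which is why keeping container workloads out
    # of the 4HRA matters so much for the monthly bill.
    samples = [100.0] * 48 + [400.0] * 12 + [100.0] * 48
    print(peak_4hra(samples))   # 175.0, not 400.0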

The benefits are straightforward: simplified pricing for qualified solutions and the freedom to deploy in the best way. And IBM can price competitively to the customer’s solution; in effect, solution-specific pricing. Combined with the new payments-based price metric, IBM is trying to put together a competitive cost/price story. Of course, it is all predicated on the actual prices IBM finally publishes. Let’s hope they are as competitive as IBM implies.

DancingDinosaur never passes up an opportunity to flog IBM for overpricing its systems and services. From discussions with Jones and other IBM managers during the pre-launch briefings, the company may finally understand the need to make the mainframe (or z or Z or whatever IBM calls it) price-competitive on an operational level today. Low TCO or low cost of IOPS or low cost of QoS is not the same thing.

This is especially important now. Managers everywhere appear to be waking up to the need to transform their mainframe-based businesses, at least in part, by becoming competitive digital businesses. DancingDinosaur never imagined that he would post something referencing the mainframe as a cost-competitive system able to rival x86 systems not just on quality of service but on cost. With the IBM Z the company is talking about competing with an aggressive cost strategy. It’s up to you, paying customers, to force them to deliver.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

New IBM Z Redefines Mainframe and Security and Cloud

July 19, 2017

By now you have certainly heard of IBM’s latest mainframe, the long-awaited z14, which the company refers to as Z. An announcement of a new mainframe usually doesn’t attract much notice, but maybe this announcement should. Even if you are not a mainframe fan this machine offers a solution that helps everybody—pervasive encryption of all data with no impact on operations or performance and with no need to take much action on your part, except to plug the machine in.

10-core z14 chip

At a time when organizations of all types and in every market segment are under attack from hackers, ransomware, data breaches, and more, all data center managers should welcome automatic pervasive encryption. Yet most do without it: of the 9 billion records breached since 2013, only 4% were encrypted! You already know why: encryption is a chore, impacts staff, slows system performance, costs money, and more. You know all the complaints better than DancingDinosaur.

The z14 changes everything from this point forward. IBM has committed to a 4x increase in silicon dedicated to cryptographic algorithms for pervasive encryption. In effect the Z encrypts all data associated with an entire application, cloud service, or database, in flight and at rest, automatically. This amounts to bulk encryption at cloud scale, made possible by a massive 7x increase in cryptographic performance over the z13. This is 18x faster than comparable x86 systems and at just five percent of the cost of x86-based solutions.

In truth, it’s better than this: you get the encryption automatically, virtually for free. IBM insists it will deliver the z14 at the same price/performance as the z13 or better; the encryption is built into the cost of the silicon out of the box. DancingDinosaur has not seen any specific prices yet, but you are welcome to scream if IBM doesn’t come through.

You immediately get rid of all the encryption headaches; you don’t have to classify data, manage encryption, or do any of the other chores typically associated with encryption. You just get it, automatically. The z14 also relieves you of managing encryption keys; only IBM Z can protect millions of keys (as well as the process of accessing, generating, and recycling them) in tamper-responsive hardware that invalidates keys at any sign of intrusion and then restores them safely.

When it comes to security, the z14 truly is a game changer. And it finally will get compliance auditors off your back once they realize how extensive z14 protection is.

IBM downplayed speeds and feeds with the z13, but they’re back with the z14. Specifically, the z14 runs at 5.2 GHz (versus 5.0 GHz for the z13), still a bit short of the zEC12, which ran at 5.5 GHz. But as with the z13, IBM makes up for it with more memory. The z14 can handle 32 TB of memory. It also includes up to 170 configurable cores (up to 10 per chip), each delivering roughly 1,832 MIPS. The L1 and L2 caches sit on the core. The L3 cache also sits on the chip, shared by the on-chip cores, and communicates with cores, memory, I/O, and the system controller as a single-chip module.

Maybe not the richest specs, but impressive nonetheless. IBM has been tweaking the box from top to bottom to boost performance. And all the while it will take over end-to-end encryption automatically, including encrypted APIs. Surprisingly, IBM has said nothing about the Z’s power consumption, but constantly-on encryption/decryption has to draw more power than, say, the z13. Am waiting to hear what IBM has to say.

This is not just for mainframe jocks. Optimized IBM z/OS Connect technologies make it straightforward for cloud developers to discover and call any IBM Z application or data from a cloud service, or for Z developers to call any cloud service. IBM Z now allows organizations to encrypt these APIs and still run nearly 3x faster than alternatives based on comparable x86 systems.  These speeds and feeds have all been thoroughly documented and detailed at the bottom of the IBM Z press release here.
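As a sketch of what that looks like from the cloud developer’s side: a z/OS application exposed through z/OS Connect is just another REST endpoint. The host, API path, payload, and token below are all invented for illustration; a real deployment defines its own API mappings.

    # Hypothetical sketch: calling a z/OS application exposed via z/OS Connect.
    # Host, path, payload, and credential are invented placeholders.
    import requests

    BASE = "https://zconnect.example.com:9443"        # hypothetical endpoint

    resp = requests.post(
        f"{BASE}/accounts/v1/balance",                # invented API mapping
        json={"accountId": "12345678"},               # invented payload
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())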

Will the z14 return the mainframe to positive revenue?  Probably for a few quarters, maybe more if non-mainframe shops want the clear payback of pervasive encryption, although it won’t be an easy transition for them without IBM assistance and incentives.

Next week DancingDinosaur will take up the Z’s three new container pricing models intended to make the Z competitive with public clouds and on-premises x86 environments.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Get a Next-Gen Datacenter with IBM-Nutanix POWER8 System

July 14, 2017

First announced by IBM on May 16 here, this solution, driven by client demand for a simplified hyperconverged (combined server, network, storage, hardware, and software) infrastructure, is designed for data-intensive enterprise workloads. It is aimed at companies increasingly looking for the ease of deployment, use, and management that hyperconverged solutions promise, and it is offered as an integrated hardware and software offering in order to deliver on that expectation.

Music made with IBM servers, storage, and infrastructure

IBM’s new POWER8 hyperconverged solutions enable a public cloud-like experience through on-premises infrastructure with top virtualization and automation capabilities combined with Nutanix’s public and on-premises cloud capabilities. They provide a combination of reliable storage, fast networks, scalability and extremely powerful computing in modular, scalable, manageable building blocks that can be scaled simply by adding nodes when needed.

Over time, IBM suggests, a roadmap of offerings will roll out as more configurations are needed to satisfy client demand and as features and functions are brought into both the IBM Cognitive Systems portfolio and the Nutanix portfolio. Full integration is key to the value proposition of this offering, so more roadmap options will be delivered as soon as features and functions are ready and integration testing can be completed.

Here are three immediate workloads you might run on these systems:

  1. Mission-critical workloads, such as databases, large data warehouses, web infrastructure, and mainstream enterprise apps
  2. Cloud native workloads, including full stack open source middleware, enterprise databases, and containers
  3. Next generation cognitive workloads, including big data, machine learning, and AI

Note, however, the change in IBM’s pricing strategy. The products will be priced with the goal of remaining neutral on total cost of acquisition (TCA) compared with comparable offerings on x86. In short, IBM promises to be competitive with comparable x86 systems in terms of TCA. This is a significant deviation from IBM’s traditional pricing, but as we have started to see already and will continue to see going forward, IBM clearly is ready to use pricing flexibility to win deals on products it wants to push.

IBM envisions the new hyperconverged systems to bring data-intensive enterprise workloads like EDB Postgres, MongoDB and WebSphere into a simple-to-manage, on-premises cloud environment. Running these complex workloads on IBM Hyperconverged Nutanix POWER8 system can help an enterprise quickly and easily deploy open source databases and web-serving applications in the data center without the complexity of setting up all of the underlying infrastructure plumbing and wrestling with hardware-software integration.

And maybe more to IBM’s ultimate aim, these operational data stores may become the foundational building blocks enterprises will use to build a data center capable of taking on cognitive workloads. These ever-advancing workloads in advanced analytics, machine learning and AI will require the enterprise to seamlessly tap into data already housed on premises. Soon expect IBM to bring new offerings to market through an entire family of hyperconverged systems that will be designed to simply and easily deploy and scale a cognitive cloud infrastructure environment.

Currently, IBM offers two systems: the IBM CS821 and IBM CS822. These servers are the industry’s first hyperconverged solutions that marry Nutanix’s one-click software simplicity and scalability with the proven performance of the IBM POWER architecture, which is designed specifically for data-intensive workloads. The IBM CS822 (the larger of the two offerings) sports 22 POWER8 processor cores. That’s 176 compute threads, with up to 512 GB of memory and 15.36 TB of flash storage in a compact server that meshes seamlessly with simple Nutanix Prism management.

This server runs Nutanix Acropolis with AHV and little endian Linux. If IBM honors its stated pricing policy promise, the cost should be competitive on the total cost of acquisition for comparable offerings on x86. DancingDinosaur is not a lawyer (to his mother’s disappointment), but it looks like there is considerable wiggle room in this promise. IBM Hyperconverged-Nutanix Systems will be released for general availability in Q3 2017. Specific timelines, models, and supported server configurations will be announced at the time of availability.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Power and z Platforms Show Renewed Excitement

June 30, 2017

Granted, 20 consecutive quarters of posting negative revenue numbers is enough to get even the most diehard mainframe bigot down. If you ran your life like that your house and your car would have been seized by the bank months ago.

Toward the end of June, however, both z and Power had some good news. First, a week ago IBM announced that corporate enterprise users ranked the IBM z enterprise servers as the most reliable hardware platform available on the market today. In its enterprise server category the survey also found that IBM Power Systems achieved the highest levels of reliability and uptime when compared with 14 server hardware options and 11 server hardware virtualization platforms.

Two IBM POWER8 processors linked via NVIDIA NVLink with four NVIDIA Tesla P100 accelerators

The results were compiled and reported in the ITIC 2017 Global Server Hardware and Server OS Reliability survey, which polled 750 organizations worldwide during April/May 2017. Also among the survey findings:

  • IBM z Systems Enterprise mainframe-class systems were the only hardware platform with zero incidents of more than four hours of per server/per annum downtime. Specifically, IBM z Systems mainframe-class servers exhibit true mainframe fault tolerance, experiencing just 0.96 minutes of unplanned per server/per annum downtime. That equates to under five seconds per month of “blink and you miss it” downtime, or about one second of unplanned downtime per week. This is an improvement over the 1.12 minutes of per server/per annum downtime the z Systems servers recorded in ITIC’s 2016–2017 reliability poll nine months ago.
  • IBM Power Systems had the least unplanned downtime of any mainstream Linux server platform, with 2.5 minutes per server/per year.
  • IBM and the Linux operating system distributions were either first or second in every reliability category, including virtualization and security.

The survey also highlighted market reliability trends. For nearly all companies surveyed, having four nines (99.99%) of availability, equating to less than one hour of system downtime per year, was a key factor in their decisions.

Then consider the increasing costs of downtime. Nearly all survey respondents claimed that one hour of downtime costs them more than $150k, with one-third estimating that the same will cost their business up to $400k.

With so much activity going on 24×7, four nines of availability is no longer sufficient for an increasing number of businesses. These businesses are adopting carrier levels of availability: five nines (99.999 percent) or six nines (99.9999 percent), which translate to roughly 5 minutes or 30 seconds of downtime per year, respectively.
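The “nines” arithmetic behind these figures is simple enough to check yourself; a quick Python sketch:

    # Allowed downtime per year at each availability level.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for label, availability in [("4 nines", 0.9999),
                                ("5 nines", 0.99999),
                                ("6 nines", 0.999999)]:
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{label}: {downtime:6.2f} minutes/year "
              f"({downtime * 60:7.1f} seconds)")

    # 4 nines: ~52.6 minutes/year
    # 5 nines:  ~5.3 minutes/year
    # 6 nines:  ~32 seconds/year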

According to ITIC’s 2016 report: IBM’s z Enterprise mainframe customers reported the least amount of unplanned downtime and the highest percentage of five nines (99.999%) uptime of any server hardware platform.

Just this week, IBM announced that, according to results from the International Data Corporation (IDC) Worldwide Quarterly Server Tracker® (June 2017), IBM exceeded market growth by 3x compared with the total Linux server market, which grew at 6 percent. The improved performance is the result of success across IBM Power Systems, including IBM’s OpenPOWER LC servers and IBM Power Systems running SAP HANA, as well as the OpenPOWER-Ready servers developed through the OpenPOWER Foundation.

As IBM explains it: Power Systems market share growth is underpinned by solutions that handle fast-growing applications, like the deep learning capabilities within the POWER8 architecture. In addition, these systems expand IBM’s Linux server portfolio and have been co-developed with fellow members of the OpenPOWER Foundation.

Now all that’s needed is for IBM’s sales and marketing teams to translate this into revenue. Between that and the new systems IBM has been hinting at for the past year, maybe the consecutive quarterly losses might come to an end this year.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Resurrects Moore’s Law

June 23, 2017

Guess Moore’s Law ain’t as dead as we were led to believe. On June 5 IBM and Research Alliance partners GLOBALFOUNDRIES and Samsung, along with equipment suppliers, announced the development of an industry-first process to build silicon nanosheet transistors that will enable 5nm chips. Previously, IBM announced a 7nm process using a silicon germanium (SiGe) alloy.

As DancingDinosaur wrote in early Oct. 2015, the last z System that conformed to the expectations of Moore’s Law was the zEC12, introduced Aug 2012. IBM could boast then it had the fastest commercial processor available.  The subsequent z13 didn’t match it in processor speed.  The z13 chip runs a 22 nm core at 5 GHz, one-half a GHz slower than the zEC12, which ran its 32nm core at 5.5 GHz. IBM compensated for the slower chip speed by adding more processors throughout the system to boost I/O and other functions and optimizing the box every way possible.

5nm silicon nanosheet transistors deliver a 40% performance gain

By 2015, the z13 delivered about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even at one-half GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile transactions grow to an estimated 40 trillion per day by 2025. The z13 also received and continues to receive praise for its industry-leading security ratings as well as its scalability and flexibility.

Just recently Hitachi announced a partnership with IBM to develop a version of the z13 to run its own operating system, VOS3. The resulting z13 will power the next generation of Hitachi’s AP series.

But IBM isn’t back in pursuit of Moore’s Law just to deliver faster traditional mainframe workloads. Rather, the company is being driven by its strategic initiatives, mainly cognitive computing. As IBM explained in the announcement: The resulting increase in performance will help accelerate cognitive computing, the Internet of Things (IoT), and other data-intensive applications delivered in the cloud. The power savings could also mean that the batteries in smartphones and other mobile products could last two to three times longer than today’s devices, before needing to be charged.

Scientists working as part of the IBM-led Research Alliance at the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering’s NanoTech Complex in Albany, NY achieved the breakthrough by using stacks of silicon nanosheets as the device structure of the transistor instead of the standard FinFET architecture, which is the blueprint for the semiconductor industry up through 7nm node technology. “For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research in the announcement. “That’s why IBM aggressively pursues new and different architectures and materials that push the limits of this industry, and brings them to market in technologies like mainframes and our cognitive systems.”

Compared to the leading edge 10nm technology available in the market, according to IBM, a nanosheet-based 5nm technology can deliver 40 percent performance enhancement at fixed power, or 75 percent power savings at matched performance. This improvement enables a significant boost to meeting the future demands of artificial intelligence (AI) systems, virtual reality, and mobile devices.

These may not sound like the workloads you are running on your mainframe now, but systems with these chips are not going to be shipped in the next mainframe either. So, you have a couple of years. The IBM team expects to make progress toward commercializing 7nm in 2018. By the time they start shipping 5nm systems you might be desperate for a machine to run such workloads and others like them.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Puts Open DBaaS on IBM OpenPOWER LC Servers

June 15, 2017

Sometimes IBM seems to be thrashing around looking for anything hot that’s selling, and the various NoSQL databases definitely are hot. The interest is driven by DevOps, cloud, and the demand for applications delivered fast.

A month or so ago the company took its Power LC server platform to the OpenPOWER Developer Conference in San Francisco, where it pitched Database-as-a-Service (DBaaS) and a price-performance guarantee: OpenPOWER LC servers designed specifically for Big Data, guaranteed to deliver a 2.0x price-performance advantage over x86 for MongoDB and 1.8x for EDB PostgreSQL 9.5. With organizations seeking any performance advantage, these gains matter.

There are enough caveats that IBM will almost never be called to deliver on the guarantee. So, don’t expect to cash in on this very quickly. As IBM says in the miles of fine print: the company will provide additional performance optimization and tuning services consistent with IBM Best Practices, at no charge.  But the guarantee sounds intriguing. If you try it, please let DancingDinosaur know how it works out.

IBM Power System S822LC for Big Data

BTW, IBM published the price for the S822LC for big data as starting at $6,399.00 USD. Price includes shipping. Linux OS, however, comes for an additional charge.

Surprisingly, IBM is not aiming this primarily at the IBM Cloud. Rather, the company is targeting the private cloud, the on-premises local version. Its Open DBaaS toolkit, according to IBM, provides enterprise clients with a turnkey private cloud solution that pre-integrates an open source DB image library, an OpenStack-based private cloud, and DBaaS software packages with hardware (servers, storage, network switches, rack) and a single source of support, enabling a DBaaS self-service portal through which enterprise developers and LOB users can provision MongoDB, Postgres, and others in minutes. And since it is built on OpenStack, it also supports hybrid cloud integration with IBM Cloud offerings via OpenStack APIs.
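Because the toolkit is OpenStack-based, standard OpenStack tooling should apply. As a hedged sketch only, using the openstacksdk library, provisioning a database node might look like this; the cloud profile, image, flavor, and network names are all invented.

    # Hedged sketch of provisioning via standard OpenStack APIs (openstacksdk).
    # The cloud profile, image, flavor, and network names are invented.
    import openstack

    conn = openstack.connect(cloud="dbaas-private")  # hypothetical clouds.yaml entry

    server = conn.compute.create_server(
        name="mongodb-node-1",
        image_id=conn.compute.find_image("mongodb-ppc64le").id,  # invented image
        flavor_id=conn.compute.find_flavor("db.medium").id,      # invented flavor
        networks=[{"uuid": conn.network.find_network("db-net").id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)   # ACTIVE once the database node is up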

In terms of cost it seems remarkably reasonable. It comes in four reference configurations. The Starter configuration is ~$80k (US list price) and includes 3 Power 822LC servers, a pair of network switches, a rack, DBaaS Toolkit software, and IBM Lab Services. The other three—Entry, Cloud Scale, and Performance—are specified for additional compute, storage, and OpenStack control plane nodes along with high-capacity JBOD storage drawers. To make this even easier, each configuration can be customized to meet user requirements. Organizations also can provide their own racks and/or network switches.

Furthermore, the Power 822LC and Power 821LC form the key building blocks for the compute, storage and OpenStack control plane nodes. As a bonus, however, IBM includes the new 11-core Power 822LC, which provides an additional 10-15% performance boost over the 10-core Power 822LC for the same price.

This is a package deal, at least if you want the best price and fast deployment. “As the need for new applications to be delivered faster than ever increases in a digital world, developers are turning to modern software development models including DevOps, as-a-Service, and self-service to increase the volume, velocity and variety of business applications,” said Terri Virnig, VP, Power Ecosystem and Strategy at IBM, in the announcement. The Open Platform for DBaaS on IBM Power Systems package includes:

  • A self-service portal for end users to deploy their choice of the most popular open source community databases including MongoDB, PostgreSQL, MySQL, MariaDB, Redis, Neo4j and Apache Cassandra deployable in minutes
  • An elastic cloud infrastructure for a highly scalable, automated, economical, and reliable open platform for on-premises, private cloud delivery of DBaaS
  • A disk image builder tool for organizations that want to build and deploy their own custom databases to the database image library

An open source, cloud-oriented operations manager with dashboards and tools will help you visualize, control, monitor, and analyze the physical and virtual resources. A turnkey, engineered solution comprised of compute, block and archive storage servers, JBOD disk drawers, OpenStack control plane nodes, and network switches pre-integrated with the open source DBaaS toolkit is available through GitHub here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Syncsort Drives zSystem and Distributed Data Integration

June 8, 2017

IBM appears to be so busy pursuing its strategic imperatives—security, blockchain, quantum computing, and cognitive computing—that it seems to have forgotten the daily activities that make up the bread-and-butter of mainframe data centers. Stepping up to fill the gap have been mainframe ISVs like Compuware, Syncsort, Data Kinetics, and a few others.

IBM’s Project DataWorks taps into unstructured data often missed

IBM hasn’t completely ignored this need. For instance, Project DataWorks uses Watson Analytics and natural language processing to analyze and create complex visualizations. Syncsort, on the other hand, latched onto open Apache technologies, starting in the fall of 2015. Back then it introduced a set of tools to facilitate data integration through Apache Kafka and Apache Spark, two of the most active Big Data open source projects for handling real-time, large-scale data processing, feeds, and analytics.

Syncsort’s primary integration vehicle then revolved around the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark. Intelligent Execution allows users to visually design data transformations once and then run them anywhere – across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premises or in the cloud.
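DMX-h designs these transformations visually, so the following is illustrative only: a bare-bones PySpark Structured Streaming job of the Kafka-to-Spark shape Syncsort is targeting. The broker, topic, and paths are invented.

    # Illustrative only; DMX-h itself is visual. A bare-bones Kafka-to-Spark
    # pipeline in PySpark Structured Streaming (requires the spark-sql-kafka
    # package on the classpath). Broker, topic, and paths are invented.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker1:9092")  # invented
              .option("subscribe", "mainframe.events")            # invented
              .load())

    # Kafka delivers key/value as binary; cast to strings before transforming.
    decoded = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    query = (decoded.writeStream
             .format("parquet")
             .option("path", "/data/landing/events")              # invented
             .option("checkpointLocation", "/data/checkpoints/events")
             .start())
    query.awaitTermination()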

Since then Syncsort, in March, announced another big data integration solution: its DMX-h is now integrated with Cloudera Director, enabling organizations to easily deploy DMX-h along with Cloudera Enterprise on Amazon Web Services, Microsoft Azure, or Google Cloud. By deploying DMX-h with CDH, Syncsort explained, organizations can quickly pull data into new, ready-to-work clusters in the cloud. This accelerates how quickly they can take advantage of big data cloud benefits, including cost savings and Data-as-a-Service (DaaS) delivery.

A month before that, this past February, Syncsort introduced new enhancements to its Big Data integration solution, again deploying DMX-h to deliver integrated workflow capabilities and Spark 2.0 integration, which simplify Hadoop and Spark application development, effectively enabling mainframe data centers to extract maximum value from their data assets.

These capabilities let data centers tap value from their enterprise data assets regardless of where the data resides, whether on the mainframe, in distributed systems, or in the cloud.

Syncsort’s new integrated workflow capability also gives organizations a simpler, more flexible way to create and manage their data pipelines. This is done through the company’s design-once, deploy-anywhere architecture with support for Apache Spark 2.0, which makes it easy for organizations to take advantage of the benefits of Spark 2.0 and integrated workflow without spending time and resources redeveloping their jobs.

Assembling such an end-to-end data pipeline can be time-consuming and complicated, with various workloads executed on multiple platforms, all of which need to be orchestrated and kept up to date. Delays in such complicated development, however, can prevent organizations from getting the timely insights they need for effective decision-making.

Enter Syncsort’s Integrated Workflow, which helps organizations manage various workloads, such as batch ETL on large repositories of historical data. This can be done by referencing business rules during data ingest in a single workflow, in effect simplifying and speeding development of the entire data pipeline, from accessing critical enterprise data, to transforming that data, and ultimately analyzing it for business insights.

Finally, in October 2016 Syncsort announced new capabilities in its Ironstream software that allow organizations to access and integrate mainframe log data in real time with Splunk IT Service Intelligence (ITSI). Further, the integration of Ironstream and Compuware’s Application Audit software delivers audit data to Splunk Enterprise Security (ES) for Security Information and Event Management (SIEM). This integration improves an organization’s ability to detect threats against critical mainframe data, correlate them with related information and events, and satisfy compliance requirements.
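Ironstream handles the mainframe side natively; purely to illustrate the receiving mechanism, here is how any log event lands in Splunk via its HTTP Event Collector (HEC). The host, token, and sourcetype are placeholders.

    # Sketch of the generic Splunk HTTP Event Collector (HEC) mechanism that
    # log forwarders feed. Host, token, and sourcetype are placeholders.
    import requests

    HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    TOKEN = "<hec-token>"   # placeholder credential

    event = {
        "sourcetype": "mainframe:syslog",   # invented sourcetype
        "event": {"system": "SYSA", "msg": "IEF404I MYJOB - JOB ENDED"},
    }

    resp = requests.post(HEC_URL, json=event,
                         headers={"Authorization": f"Splunk {TOKEN}"})
    resp.raise_for_status()   # Splunk answers {"text": "Success", "code": 0}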

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Introduces Hitachi-Specific z13

May 30, 2017

Remember when rumors were flying that Hitachi planned to buy the mainframe z Systems business from IBM?  DancingDinosaur didn’t believe it at that time, and now we have an official announcement that IBM is working with Hitachi to deliver mainframe z System hardware for use with Hitachi customers.

Inside the IBM z13

DancingDinosaur couldn’t see Hitachi buying the z. The overhead would be too great. IBM has been sinking hundreds of millions of dollars into the z, adding new capabilities ranging from Hadoop and Spark natively on z to whatever comes out of the Open Mainframe Project.

The new Hitachi deal takes the z in a completely different direction. The plan calls for using Hitachi’s operating system, VOS3, running on the latest IBM z13 hardware to provide Hitachi users with better performance while sustaining their previous investments in business-critical Hitachi data and software, as IBM noted. VOS3 started as a fork of MVS and has been repeatedly modified since.

According to IBM, Hitachi will exclusively adopt the IBM z Systems high-performance mainframe hardware technology as the only hardware for the next generation of Hitachi’s AP series. These systems primarily serve major organizations in Japan. This work expands Hitachi’s cooperation with IBM to make mainframe development more efficient through IBM’s global capabilities in developing and manufacturing mainframe systems. The Open Mainframe Project, BTW, is a Linux initiative.

The collaboration, noted IBM, reinforces its commitment to delivering new innovations in mainframe technology and fostering an open ecosystem for the mainframe to support a broad range of software and applications. IBM recently launched offerings for IBM z Systems that use the platform’s capabilities for speed, scale and security to deliver cloud-based blockchain services for building new transaction systems and machine learning for analyzing large amounts of data.

If you count VOS3, the mainframe now runs a variety of operating systems, including z/OS, z/TPF, and z/VM, as well as Linux. Reportedly, Hitachi plans to integrate its new mainframe with its Lumada Internet of Things (IoT) offerings. With its scalability, security, massive I/O, and performance, the z makes an ideal IoT platform, and IoT is a market IBM targets today. Now IBM is seeding a competitor with the z running whatever appealing capabilities Hitachi’s Lumada offers. Hope whatever revenue or royalties IBM gets is worth it.

IBM and Hitachi, as explained in the announcement, have a long history of cooperation and collaboration in enterprise computing technologies. Hitachi decided to expand this cooperation at this time to utilize IBM’s most advanced mainframe technologies. Hitachi will continue to provide its customers with a highly reliable, high-performance mainframe environment built around the Hitachi VOS3 operating system. Hitachi also continues to strengthen mainframe functionality and services which contributes to lower TCO, improved ease of system introduction and operation, and better serviceability.

Of course, the mainframe story is far from over. IBM has been hinting for months at a new mainframe coming later this year. Since IBM stopped automatically cranking up core processor speed to boost price/performance, it will employ an array of assist processors and software optimizations to boost performance wherever it can, particularly in the areas of its current critical imperatives: security, cognitive computing, blockchain, and cloud. One thing DancingDinosaur doesn’t expect to see in the new z, however, is embedded qubits, but who knows?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

