Posts Tagged ‘cognitive computing’

BMC’s 12th Annual Mainframe Survey Shows Z Staying Power

November 17, 2017

ARM processors are invading HPC and supercomputer segments. The Power9 is getting closer and closer to general commercial availability. IBM unveiled not one but two new quantum computers. Meanwhile, the Z continues to roll right along without skipping a beat, according to BMC’s 12th mainframe survey.

There is no doubt that the computing landscape is changing dramatically and will continue to change. Yet mainframe shops appear to be taking it all in stride. As Mark Wilson reported on the recently completed SHARE Europe conference in the UK, citing the keynote delivered by Compuware’s CEO Chris O’Malley: “By design, the post-modern mainframe is the most future ready platform in the world: the most reliable, securable, scalable, and cost efficient. Unsurprisingly, the mainframe remains the dominant, growing, and vital backbone for the worldwide economy. However, outdated processes and tools ensnared in an apathetic culture doggedly resistant to change, prevent far too many enterprises from unleashing its unique technical virtues and business value.” If you doubt we are entering the post-modern mainframe era, just look at the LinuxONE Emperor II or the z14.

Earlier this month BMC released its 12th annual mainframe survey, titled 5 Myths Busted; you can find the report here. The five myths:

  • Myth 1: Organizations have fully optimized mainframe availability
  • Myth 2: The mainframe is in maintenance mode; no one is modernizing
  • Myth 3: Executives are planning to replace their mainframes
  • Myth 4: Younger IT professionals are pessimistic about mainframe careers
  • Myth 5: People working on the mainframe today are all older

Everyone from prestigious executives like O’Malley to a small army of IBMers to lowly bloggers and analysts like DancingDinosaur has been pounding away at these myths for years. And this isn’t the first survey to thoroughly discredit mainframe skeptics.

The mainframe is growing: 48% of respondents saw MIPS growth in the last 12 months, over 50% of respondents forecast MIPS growth in the next 12 months, and 71% of large shops (10,000 MIPS or more) experienced MIPS growth in the last year. Better yet, these same shops forecast more growth in the next 12 months.

OK, the top four priorities of respondents remained the same this year. The idea, however, that mainframe shops are fully optimized and just cruising is dead wrong. Survey respondents still have a to-do list of priorities:

  1. Cost reduction/optimization
  2. Data privacy/compliance
  3. Availability
  4. Application modernization

Maybe my favorite myth is that younger people have given up on the mainframe. BMC found that 53% of respondents are under age 50, and this group (age 30-49, with under 10 years of experience) overwhelmingly reports a very positive view of the mainframe’s future. The majority went so far as to say they see the workload of their mainframe growing and view the mainframe as having a strong position of growth in the industry overall. This is reinforced by the growth of IBM’s Master the Mainframe competition, which attracts young people in droves, over 85,000 to date, to work with the so-called obsolete mainframe.

And the mainframe, both the Z and the LinuxONE, is packed with technology that will continue to attract young people: Linux, Docker, Kubernetes, Java, Spark, and support for a wide range of both relational databases like DB2 and NoSQL databases like MongoDB. They use this technology to do mobile, IoT, blockchain, and more. Granted most mainframe shops are not ready yet to run these kinds of workloads. IBM, however, even introduced new container pricing for the new Z to encourage such workloads.

John McKenny, BMC’s VP of Strategy, has noticed growing interest in new workloads. “Yes, they continue to be mainly transactional applications but they are aimed to support new digital workloads too, such as doing business with mobile devices,” he noted.  Mobility and analytics, he added, are used increasingly to improve operations, and just about every mainframe shop has some form of cloud computing, often multiple clouds.

The adoption of Linux on the mainframe a decade ago immediately put an end to the threat posed by x86. Since then, IBM has become a poster child for open source and a slew of new technologies, from Java to Hadoop to Spark to whatever comes next. Although traditional mainframe data centers have been slow to adopt these new technologies, some are starting, and that, along with innovative machines like the z14 and LinuxONE Emperor II, is what ultimately will keep the mainframe young and competitive.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM 3Q17 Results Break Consecutive Quarters Losing Streak

November 2, 2017

DancingDinosaur generally does not follow the daily gyrations of IBM’s stock, assuming that readers like you are not really active investors in the company’s stock. That is not to say, however, that you don’t have an important, even critical interest in the company’s fortunes. As users of Z or Power systems, you want to know that IBM has the means to continue to invest in and advance your preferred platform. And a losing streak of 20+ consecutive quarters doesn’t exactly inspire confidence.

What is interesting about IBM’s latest 3Q17 financials, which ends the string of consecutive revenue losses, is the performance of the Z and storage, two things most of us are concerned with.

Blockchain simplifies near real-time clearing and settlement

Here is what Martin Schroeter, IBM Senior Vice President and Chief Financial Officer said to the investment analysts he briefs: In Systems, we had strong growth driven by the third consecutive quarter of growth in storage, and a solid launch of our new z14 mainframe, now just called Z, which was available for the last two weeks of the quarter.

DancingDinosaur has followed the mainframe for several decades at least, and the introduction of a new mainframe always boosts revenue for the next quarter or two. The advantages were apparent on Day 1, when the machine was introduced. As DancingDinosaur wrote: You get this encryption automatically, virtually for free. IBM insists it will deliver the z14 at the same price/performance as the z13, or less. The encryption is built into the cost of the silicon out of the box.

A few months later IBM introduced a new LinuxONE mainframe, the Emperor II. The new LinuxONE doesn’t yet offer pervasive encryption but provides Secure Service Containers. As it was described here at that time: through the Secure Service Container, data can be protected against internal threats at the system level, even from users with elevated credentials or hackers who obtain a user’s credentials, as well as against external threats.

Software developers will benefit by not having to create proprietary dependencies in their code to take advantage of these security capabilities. An application only needs to be put into a Docker container for Secure Service Container deployment. The application can be managed using the Docker and Kubernetes tools that are included to make Secure Service Container environments easy to deploy and use. Again, it will likely take a few quarters for LinuxONE shops and other Linux shops to seek out the Emperor II and Secure Service Containers.
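
To make that concrete, here is a minimal sketch, using the Docker SDK for Python, of how little the developer side of that workflow involves. The image name and Dockerfile are hypothetical stand-ins, and nothing in this code is specific to IBM’s Secure Service Container itself; it is just the ordinary Docker packaging step the paragraph above describes:

```python
# Minimal sketch: from the developer's side, targeting a Secure Service
# Container host is ordinary Docker workflow. Image tag and Dockerfile
# are hypothetical. Assumes the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

# Build the application image from a plain Dockerfile -- no proprietary
# security dependencies in the application code itself.
image, _ = client.images.build(path=".", tag="myapp:1.0")

# Run it; on a Secure Service Container host the same container would
# inherit the system-level protections described above.
container = client.containers.run("myapp:1.0", detach=True)
print(container.status)
```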

Similarly, in recent weeks IBM has been bolstering its storage offerings. As Schroeter noted, storage, including Spectrum storage and Flash, has been experiencing a few positive quarters, and new products should help continue that momentum. For example, products like IBM Spectrum Protect Plus promise to make data protection available in as little as one hour.

Or take the IBM FlashSystem 900, introduced at the end of October, which promises to deliver efficient, ultra-dense flash with CAPEX and OPEX savings thanks to 3x more capacity in a 2U enclosure. It also promises to maximize efficiency using inline data compression with no application performance impact while achieving consistent 95 microsecond response times.

But probably the best 3Q news came from the continuing traction IBM’s strategic imperatives are gaining. Here these imperatives—cloud, security, cognitive computing—continue to make a serious contribution to IBM revenue. Third-quarter cloud revenues increased 20 percent to $4.1 billion.  Cloud revenue over the last 12 months was $15.8 billion, including $8.8 billion delivered as-a-service and $7.0 billion for hardware, software and services to enable IBM clients to implement comprehensive cloud solutions.  The annual exit run rate for as-a-service revenue increased to $9.4 billion from $7.5 billion in the third quarter of 2016.  In the quarter, revenues from analytics increased 5 percent.  Revenues from mobile increased 7 percent and revenues from security increased 51 percent. Added Schroeter: Revenue from our strategic imperatives over the last 12 months was also up 10% to $34.9 billion, and now represents 45% of IBM.
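
For readers who like to check the arithmetic, a quick back-of-the-envelope in Python, using only the figures quoted above, shows what those numbers imply:

```python
# Back-of-the-envelope check on the quoted 3Q17 figures (all in $ billions).
run_rate_now, run_rate_year_ago = 9.4, 7.5
growth = (run_rate_now - run_rate_year_ago) / run_rate_year_ago
print(f"as-a-service run rate growth: {growth:.0%}")          # ~25%

strategic_ttm = 34.9     # strategic imperatives, trailing 12 months
share = 0.45             # "45% of IBM"
print(f"implied 12-month IBM revenue: ${strategic_ttm / share:.1f}B")  # ~$77.6B
```

That implied roughly $77.6 billion squares with the observation just below that IBM is no longer a $100+ billion company.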

OK, so IBM is no longer a $100+ billion company and hasn’t been for some time. Maybe in a few years, if blockchain and the strategic imperatives continue to grow and quantum catches fire, it may be back over the $100 billion mark, though it’s not clear how much that matters.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Get a Next-Gen Datacenter with IBM-Nutanix POWER8 System

July 14, 2017

First announced by IBM on May 16 here, this solution, driven by client demand for a simplified hyperconverged (combined server, network, storage, hardware, and software) infrastructure, is designed for data-intensive enterprise workloads. It is aimed at companies increasingly looking for the ease of deployment, use, and management that hyperconverged solutions promise, and it is being offered as an integrated hardware and software offering to deliver on that expectation.

Music made with IBM servers, storage, and infrastructure

IBM’s new POWER8 hyperconverged solutions enable a public cloud-like experience through on-premises infrastructure with top virtualization and automation capabilities combined with Nutanix’s public and on-premises cloud capabilities. They provide a combination of reliable storage, fast networks, scalability and extremely powerful computing in modular, scalable, manageable building blocks that can be scaled simply by adding nodes when needed.

Over time, IBM suggests, a roadmap of offerings will roll out as more configurations are needed to satisfy client demand and as features and functions are brought into both the IBM Cognitive Systems portfolio and the Nutanix portfolio. Full integration is key to the value proposition of this offering, so more roadmap options will be delivered as features are completed and integration testing wraps up.

Here are three immediate classes of work you might run on these systems:

  1. Mission-critical workloads, such as databases, large data warehouses, web infrastructure, and mainstream enterprise apps
  2. Cloud-native workloads, including full-stack open source middleware, enterprise databases, and containers
  3. Next-generation cognitive workloads, including big data, machine learning, and AI

Note, however, the change in IBM’s pricing strategy. The products will be priced with the goal of remaining neutral on total cost of acquisition (TCA) relative to comparable offerings on x86. In short, IBM promises to be competitive with comparable x86 systems in terms of TCA. This is a significant deviation from IBM’s traditional pricing, but as we have started to see already and will continue to see going forward, IBM clearly is ready to use pricing flexibility to win deals on the products it wants to push.

IBM envisions the new hyperconverged systems to bring data-intensive enterprise workloads like EDB Postgres, MongoDB and WebSphere into a simple-to-manage, on-premises cloud environment. Running these complex workloads on IBM Hyperconverged Nutanix POWER8 system can help an enterprise quickly and easily deploy open source databases and web-serving applications in the data center without the complexity of setting up all of the underlying infrastructure plumbing and wrestling with hardware-software integration.

And maybe more to IBM’s ultimate aim, these operational data stores may become the foundational building blocks enterprises will use to build a data center capable of taking on cognitive workloads. These ever-advancing workloads in advanced analytics, machine learning and AI will require the enterprise to seamlessly tap into data already housed on premises. Soon expect IBM to bring new offerings to market through an entire family of hyperconverged systems that will be designed to simply and easily deploy and scale a cognitive cloud infrastructure environment.

Currently, IBM offers two systems: the IBM CS821 and IBM CS822. These servers are the industry’s first hyperconverged solutions that marry Nutanix’s one-click software simplicity and scalability with the proven performance of the IBM POWER architecture, which is designed specifically for data-intensive workloads. The IBM CS822 (the larger of the two offerings) sports 22 POWER8 processor cores. That’s 176 compute threads, with up to 512 GB of memory and 15.36 TB of flash storage in a compact server that meshes seamlessly with simple Nutanix Prism management.
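
The thread count follows directly from POWER8’s SMT8 design, eight hardware threads per core; a trivial check in Python:

```python
# POWER8 runs up to 8 hardware threads per core (SMT8),
# which is where the CS822's 176 compute threads come from.
cores = 22
threads_per_core = 8   # SMT8
print(cores * threads_per_core)  # 176
```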

This server runs Nutanix Acropolis with AHV and little endian Linux. If IBM honors its stated pricing policy promise, the cost should be competitive on the total cost of acquisition for comparable offerings on x86. DancingDinosaur is not a lawyer (to his mother’s disappointment), but it looks like there is considerable wiggle room in this promise. IBM Hyperconverged-Nutanix Systems will be released for general availability in Q3 2017. Specific timelines, models, and supported server configurations will be announced at the time of availability.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Resurrects Moore’s Law

June 23, 2017

Guess Moore’s Law ain’t as dead as we were led to believe. On June 5 IBM and Research Alliance partners GLOBALFOUNDRIES and Samsung, along with equipment suppliers, announced the development of an industry-first process to build silicon nanosheet transistors that will enable 5nm chips. Previously, IBM announced a 7nm process using a silicon germanium (SiGe) alloy.

As DancingDinosaur wrote in early Oct. 2015, the last z System that conformed to the expectations of Moore’s Law was the zEC12, introduced Aug 2012. IBM could boast then it had the fastest commercial processor available.  The subsequent z13 didn’t match it in processor speed.  The z13 chip runs a 22 nm core at 5 GHz, one-half a GHz slower than the zEC12, which ran its 32nm core at 5.5 GHz. IBM compensated for the slower chip speed by adding more processors throughout the system to boost I/O and other functions and optimizing the box every way possible.

5nm silicon nanosheet transistors deliver a 40% performance gain

By 2015, the z13 delivered about a 10 percent performance bump per core thanks to the latest tweaks in the core design, such as better branch prediction and better pipelining. But even at one-half GHz slower, the z13 was the first system to process 2.5 billion transactions a day. Even more importantly for enterprise data centers, z13 transactions are persistent, protected, and auditable from end to end, adding assurance as mobile activity grows to an estimated 40 trillion transactions per day by 2025. The z13 also received and continues to receive praise for its industry-leading security ratings as well as its scalability and flexibility.
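
To put that 2.5-billion-a-day figure in steadier terms, it works out to a sustained rate of roughly 29,000 transactions every second, around the clock:

```python
# 2.5 billion transactions per day, expressed as a sustained per-second rate.
transactions_per_day = 2.5e9
seconds_per_day = 24 * 60 * 60
print(f"{transactions_per_day / seconds_per_day:,.0f} tx/s")  # ~28,935 tx/s
```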

Just recently Hitachi announced a partnership with IBM to develop a version of the z13 to run its own operating system, VOS3. The resulting z13 will run the next generation of Hitachi’s AP series.

But IBM isn’t back in pursuit of Moore’s Law just to deliver faster traditional mainframe workloads. Rather, the company is being driven by its strategic initiatives, mainly cognitive computing. As IBM explained in the announcement: The resulting increase in performance will help accelerate cognitive computing, the Internet of Things (IoT), and other data-intensive applications delivered in the cloud. The power savings could also mean that the batteries in smartphones and other mobile products could last two to three times longer than today’s devices, before needing to be charged.

Scientists working as part of the IBM-led Research Alliance at the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering’s NanoTech Complex in Albany, NY achieved the breakthrough by using stacks of silicon nanosheets as the device structure of the transistor instead of the standard FinFET architecture, which is the blueprint for the semiconductor industry up through 7nm node technology. “For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research in the announcement. “That’s why IBM aggressively pursues new and different architectures and materials that push the limits of this industry, and brings them to market in technologies like mainframes and our cognitive systems.”

Compared to the leading edge 10nm technology available in the market, according to IBM, a nanosheet-based 5nm technology can deliver 40 percent performance enhancement at fixed power, or 75 percent power savings at matched performance. This improvement enables a significant boost to meeting the future demands of artificial intelligence (AI) systems, virtual reality, and mobile devices.

These may not sound like the workloads you are running on your mainframe now, but chips like these won’t ship in the next mainframe either. So you have a couple of years. The IBM team expects to make progress toward commercializing 7nm in 2018. By the time 5nm systems start shipping, you might be desperate for a machine to run such workloads and others like them.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Demonstrates Quantum Computing Advantage

May 12, 2017

In an announcement last week, IBM reported that scientists from IBM Research and Raytheon BBN have demonstrated one of the first proven examples of a quantum computer’s advantage over a conventional computer. By probing a black box containing an unknown string of bits, they showed that just a few superconducting qubits can discover the hidden string faster and more efficiently than today’s computers. Their research was published in a paper titled “Demonstration of quantum advantage in machine learning” on nature.com.

With IBM’s current 5 qubit processor, the quantum algorithm consistently identified the sequence in up to 100x fewer computational steps and was more tolerant of noise than the conventional (non-quantum) algorithm. This is much larger than any previous head-to-head comparison between quantum and conventional processors.

Courtesy: IBM Research

The graphic above defines 3 types of quantum computers. At the top is the quantum annealer, described as the least powerful and most restrictive.  In the middle sits analog quantum, 50-100 qubits, a device able to simulate complex quantum interactions. This will probably be IBM’s next quantum machine; currently IBM offers a 5 qubit device. At the bottom sits the universal quantum. IBM suggests this will scale to over 100,000 qubits and be capable of handling machine learning, quantum chemistry, optimization problems, secure computing, and more. It will be exponentially faster than traditional computers and be able to handle just about all the things even the most powerful conventional supercomputers cannot do now.

The most powerful z System, regardless of how many cores or accelerators or memory or bandwidth, remains a traditional, conventional computer. It deals with problems as a series of basic bits, sequences of 0 or 1. That it runs through these sequences astoundingly fast fools us into thinking that there is something beyond the same old digital computing we have known for the last 50 years or more.

Digital computers see the world and the problems you’re trying to solve as sequences of 0 and 1. That’s it; there is nothing in between. They store numbers as sequences of 0 and 1 in memory, and they process them using only the simplest mathematical operations, adding and subtracting. As a college student, DancingDinosaur was given the most powerful TI programmable calculator then available and, with a few buddies, tried to come up with things it couldn’t do. No matter how many beer-inspired tries, we never found something it couldn’t handle. The TI was just another digital device.

Quantum computers can digest 0 and 1 but have a broader array of tricks. For example, contradictory things can exist concurrently. Quantum geeks often cite a riddle dubbed Schrödinger’s cat, in which the cat can be alive and dead at the same time because a quantum system can handle multiple, contradictory states. If we had known of Schrödinger’s cat, my buddies and I might have stumped that TI calculator.

In an article on supercomputing at Explain That Stuff, Chris Woodford shows the thinking behind Schrödinger’s cat, called superposition. This is where two waves, one representing a live cat and the other a dead one, combine to make a third that contains both cats, or maybe hundreds of cats. In his analogy, the wave inside a pipe contains all these waves simultaneously: they’re added to make a combined wave that includes them all. Qubits use superposition to represent multiple states (multiple numeric values) simultaneously.


In effect, the IBM-Raytheon team programmed a black box such that, with the push of a button, it produces a string of bits with a hidden pattern (such as 0010) for both a conventional computation and a quantum computation. The conventional computer examines the bits one by one; each result gives a little information about the hidden string. By querying the black box many times, the conventional computer can determine the full answer.

The quantum computer employs a quantum algorithm that extracts the information hidden in the quantum phase — information to which a conventional algorithm is completely blind. The bits are then measured as usual and, in about half the time, the hidden string can be fully revealed.
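
The hidden-string game described here is essentially the textbook Bernstein-Vazirani problem (the published experiment studied a noisy “learning parity” variant of it). The toy NumPy simulation below is purely illustrative, no real qubits involved, but it shows why the query counts differ so sharply: the conventional approach needs one black-box call per bit, while the quantum routine needs exactly one.

```python
import numpy as np

def classical_learn(oracle, n):
    # A conventional computer must call the black box once per bit:
    # probing with each basis string e_i reveals one bit of the hidden string.
    return [oracle(1 << i) for i in reversed(range(n))]

def quantum_learn(s, n):
    # Bernstein-Vazirani, simulated: H^n|0>, ONE phase-oracle call, H^n, measure.
    dim = 2 ** n
    state = np.full(dim, 1 / np.sqrt(dim))                        # H^n on |0...0>
    state *= [(-1) ** bin(x & s).count("1") for x in range(dim)]  # one oracle query
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    return int(np.argmax(np.abs(Hn @ state)))                     # deterministically s

n, s = 4, 0b0010                                # the hidden pattern from the example
oracle = lambda x: bin(x & s).count("1") % 2    # f(x) = s.x mod 2
print(classical_learn(oracle, n))               # [0, 0, 1, 0] -- after n queries
print(format(quantum_learn(s, n), f"0{n}b"))    # '0010'       -- after one query
```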

Most z data centers can’t use quantum capabilities for their daily work, at least not yet. As Woodford noted: It’s very early for the whole field—and most researchers agree that we’re unlikely to see practical quantum computers appearing for many years—perhaps even decades. Don’t bet on it; at the rate IBM is driving this, you’ll probably see useful things much sooner. Maybe tomorrow.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Shows Off POWER and NVIDIA GPU Setting High Performance Record 

May 4, 2017

The record achievement used 60 POWER processors and 120 GPU accelerators to shatter the previous supercomputer record, which used over 700,000 processors. The results point to how dramatically the capabilities of high performance computing (HPC) have increased while the cost of HPC systems has declined. Or put another way: the effort demonstrates the ability of NVIDIA GPUs to simulate billion-cell models in a fraction of the time, while delivering 10x the performance and efficiency.

Courtesy of IBM: Takes a lot of processing to take you into a tornado

In short, the combined success of IBM and NVIDIA puts the power of cognitive computing within the reach of mainstream enterprise data centers. Specifically, the project performed reservoir modeling to predict the flow of oil, water, and natural gas in the subsurface of the earth before attempting to extract the maximum oil in the most efficient way. The effort involved a billion-cell simulation, which took just 92 minutes using 30 HPC servers equipped with 60 POWER processors and 120 NVIDIA Tesla P100 GPU accelerators.
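
A rough, admittedly apples-to-oranges sense of the hardware consolidation (counting devices, not cores):

```python
# Device-count comparison only -- a CPU and a GPU are very different beasts --
# but it conveys the scale of the consolidation claimed here.
previous_record_processors = 700_000
stone_ridge_devices = 60 + 120          # POWER processors + Tesla P100 GPUs
ratio = previous_record_processors / stone_ridge_devices
print(f"{ratio:,.0f}x fewer devices")   # ~3,889x
```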

“This calculation is a very salient demonstration of the computational capability and density of solution that GPUs offer. That speed lets reservoir engineers run more models and ‘what-if’ scenarios than previously,” according to Vincent Natoli, President of Stone Ridge Technology, as quoted in the IBM announcement. “By increasing compute performance and efficiency by more than an order of magnitude, we’re democratizing HPC for the reservoir simulation community,” he added.

“The milestone calculation illuminates the advantages of the IBM POWER architecture for data-intensive and cognitive workloads,” said Sumit Gupta, IBM Vice President, High Performance Computing, AI & Analytics, in the IBM announcement. “By running Stone Ridge’s ECHELON on IBM Power Systems, users can achieve faster run-times using a fraction of the hardware,” Gupta continued. “The previous record used more than 700,000 processors in a supercomputer installation that occupies nearly half a football field, while Stone Ridge did this calculation on two racks of IBM Power Systems that could fit in the space of half a ping-pong table.”

This latest advance challenges the perceived misconception that GPUs cannot be efficient on complex application codes like reservoir simulation and are better suited to simpler, more naturally parallel applications such as seismic imaging. The scale, speed, and efficiency of the reported result disprove that notion. The milestone calculation with a relatively small server infrastructure enables small and medium-size oil and energy companies to take advantage of computer-based reservoir modeling and optimize production from their asset portfolios.

Billion cell simulations in the industry are rare in practice, but the calculation was accomplished to highlight the performance differences between new fully GPU-based codes like the ECHELON reservoir simulator and equivalent legacy CPU codes. ECHELON scales from the cluster to the workstation and while it can simulate a billion cells on 30 servers, it can also run smaller models on a single server or even on a single NVIDIA P100 board in a desktop workstation, the latter two use cases being more in the sweet spot for the energy industry, according to IBM.

As importantly, the company notes, this latest breakthrough showcases the ability of IBM Power Systems with NVIDIA GPUs to achieve similar performance leaps in other fields such as computational fluid dynamics, structural mechanics, climate modeling, and others that are widely used throughout the manufacturing and scientific community. By taking advantage of POWER and GPUs organizations can literally do more with less, which often is an executive’s impossible demand.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Gets Serious About Open Data Science (ODS) with Anaconda

April 21, 2017

As IBM rapidly ramps up cognitive systems in various forms, its two remaining platforms, z System and POWER, get more and more interesting. This week IBM announced it was bringing the Anaconda Open Data Science (ODS) platform to its Cognitive Systems and PowerAI.

Anaconda, Courtesy Pinterest

Specifically, Anaconda will integrate with the PowerAI software distribution for machine learning (ML) and deep learning (DL). The goal: make it simple and fast to take advantage of Power performance and GPU optimization for data-intensive cognitive workloads.

“Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale,” said Bob Picciano, senior vice president of IBM Cognitive Systems. Added Travis Oliphant, co-founder and chief data scientist, Continuum Analytics, which introduced the Anaconda platform: “By optimizing Anaconda on Power, developers will also gain access to the libraries in the PowerAI Platform for exploration and deployment in Anaconda Enterprise.”

With more than 16 million downloads to date, Anaconda has emerged as the Open Data Science platform leader. It is empowering leading businesses across industries worldwide with tools to identify patterns in data, uncover key insights, and transform basic data into the intelligence required to solve the world’s most challenging problems.

As one of the fastest growing fields of AI, DL makes it possible to process enormous datasets with millions or even billions of elements and extract useful predictive models. DL is transforming the businesses of leading consumer web and mobile application companies, and it is catching on with more traditional businesses.

IBM developed PowerAI to accelerate enterprise adoption of open-source ML and DL frameworks used to build cognitive applications. PowerAI promises to reduce the complexity and risk of deploying these open source frameworks for enterprises on the Power architecture and is tuned for high performance, according to IBM. With PowerAI, organizations also can realize the benefit of enterprise support on IBM Cognitive Systems HPC platforms used in the most demanding commercial, academic, and hyperscale environments.

For POWER shops, getting into Anaconda, which is based on Python, is straightforward. You need a POWER8 with IBM GPU hardware or a POWER8 combined with an NVIDIA GPU, in effect a Minsky machine. It’s essentially a developer’s tool, although ODS proponents see it more broadly: bridging the gap between traditional IT and lines of business, shifting traditional roles, and creating new ones. In short, they envision scientists, mathematicians, engineers, business people, and more getting involved in ODS.
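
For a sense of what that looks like in practice, here is a minimal, self-contained sketch of the kind of workflow the Anaconda distribution ships out of the box, using NumPy and scikit-learn on synthetic data. Nothing here is Power-specific; on a PowerAI box the same code runs unchanged, with the GPU-tuned DL frameworks layered alongside:

```python
# Minimal ODS-style workflow with packages bundled in Anaconda.
# Synthetic data; the point is the workflow, not the model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # the hidden pattern to uncover

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```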

The technology is designed to run on the user’s desktop but is packaged and priced as a cloud subscription with a base package of 20 users. User licenses range from $500 per year to $30,000 per year depending on which bells and whistles you include. The number of options is pretty extensive.

According to IBM, this started with PowerAI, built to accelerate enterprise adoption of open source ML/DL frameworks used to build cognitive applications. Overall, the open Anaconda platform brings capabilities for large-scale data processing, predictive analytics, and scientific computing to simplify package management and deployment. Developers using open source ML/DL components can use Power as the deployment platform and take advantage of Power optimization and NVIDIA GPU differentiation.

Not to be left out, IBM noted growing support for the OpenPOWER Foundation, which recently announced the OpenPOWER Machine Learning Work Group (OPMLWG). The new OPMLWG includes members like Google, NVIDIA, and Mellanox to provide a forum for collaboration that will help define frameworks for the productive development and deployment of ML solutions using OpenPOWER ecosystem technology. The foundation has also surpassed 300 members, with new participants such as Kinetica, Red Hat, and Toshiba. For traditional enterprise data centers, the future increasingly points toward cognitive in one form or another.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Compuware-Syncsort-Splunk to Boost Mainframe Security

April 6, 2017

The mainframe has proven to be remarkably secure over the years, racking up the highest security certifications available. But there is still room for improvement. Earlier this week Compuware announced Application Audit, a software tool that aims to transform mainframe cybersecurity and compliance through real-time capture of user behavior.

Capturing user behavior, especially in real time, is seemingly impossible if you have to rely on the data you collect from the various logs and SMF records. Compuware’s solution, Application Audit, in conjunction with Syncsort and Splunk, fully captures and analyzes start-to-finish mainframe application user behavior.

As Compuware explains: Most enterprises still rely on disparate logs and SMF data from security products such as RACF, CA-ACF2, and CA-Top Secret to piece together user behavior. This is too slow if you want to catch bad behavior while it’s going on. Some organizations try to apply analytics to these logs, but that also is too slow; by the time you have collected enough logs to deduce who did what and when, the damage may have been done. Throw in the escalating demands of cross-platform enterprise cybersecurity and increasingly burdensome global compliance mandates, and you haven’t a chance without an automated tool optimized for the job.

Fortunately, the mainframe provides rich and comprehensive session data that Application Audit, in conjunction with the organization’s security information and event management (SIEM) systems, can capture and analyze to show more quickly and effectively what really is happening (a toy sketch of that kind of session analysis follows the list below). Specifically, it can:

  • Detect, investigate, and respond to inappropriate behavior by internal users with access
  • Detect, investigate, and respond to hacked or illegally accessed user accounts
  • Support criminal/legal investigations with complete and credible forensics
  • Fulfill compliance mandates regarding protection of sensitive data
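
As promised above, here is a toy Python sketch, emphatically not Compuware’s implementation, of the sort of rule a SIEM might apply to captured session events. The event fields, account names, and business-hours threshold are all invented for illustration:

```python
# Toy rule: flag privileged accounts acting outside business hours.
# All field names, accounts, and thresholds below are hypothetical.
from datetime import datetime

PRIVILEGED = {"SYSPROG1", "DBADMIN2"}
BUSINESS_HOURS = range(7, 19)            # 07:00-18:59 local time

def flag_suspicious(events):
    alerts = []
    for e in events:
        hour = datetime.fromisoformat(e["time"]).hour
        if e["user"] in PRIVILEGED and hour not in BUSINESS_HOURS:
            alerts.append(f"{e['user']} ran {e['action']} on {e['resource']}"
                          f" at {e['time']} (outside business hours)")
    return alerts

print(flag_suspicious([
    {"user": "SYSPROG1", "time": "2017-04-05T02:14:00",
     "action": "READ", "resource": "PAYROLL.MASTER"},
]))
```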

IBM, by the way, is not ignoring the advantages of analytics for z security. Back in February you read about IBM bringing its cognitive system to the z on DancingDinosaur. IBM continues to flog cognitive on z for real-time analytics and security, promising to enable faster customer insights, business insights, and systems insights, with decisions based on real-time analysis of both current and historical data, delivered on an analytics platform designed for availability, optimized for flexibility, and engineered with the highest levels of security. Check out IBM’s full cognitive for z pitch.

The data Compuware and Syncsort collect with Application Audit is particularly valuable for maintaining control of privileged mainframe user accounts. Both private- and public-sector organizations are increasingly concerned about insider threats to both mainframe and non-mainframe systems. Privileged user accounts can be misused by their rightful owners, motivated by everything from financial gain to personal grievances, as well as by malicious outsiders who have illegally acquired the credentials for those accounts. You can imagine what havoc they could wreak.

In addition, with Application Audit Compuware is orchestrating a number of players to deliver the full security picture. Specifically, through collaboration with CorreLog, Syncsort and Splunk, Compuware is enabling enterprise customers to integrate Application Audit’s mainframe intelligence with popular SIEM solutions such as Splunk, IBM QRadar, and HPE Security ArcSight ESM. Additionally, Application Audit provides an out-of-the-box Splunk-based dashboard that delivers value from the start. As Compuware explains, these integrations are particularly useful for discovering and addressing security issues associated with today’s increasingly common composite applications, which have components running on both mainframe and non-mainframe platforms. SIEM integration also ensures that security, compliance and other risk management staff can easily access mainframe-related data in the same manner as they access data from other platforms.

“Effective IT management requires effective monitoring of what is happening for security, cost reduction, capacity planning, service level agreements, compliance, and other purposes,” noted Stu Henderson, Founder and President of the Henderson Group in the Compuware announcement. “This is a major need in an environment where security, technology, budget, and regulatory pressures continue to escalate.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Changes the Economics of Cloud Storage

March 31, 2017

Storage tiering used to be simple: active data went to your best high-performance storage, inactive data went to low-cost archival storage, and cloud storage filled in wherever else it was needed. Unfortunately, today’s emphasis on continuous data analytics, near real-time predictive analytics, and now cognitive computing has complicated this picture and the corresponding economics of storage.

In response, last week IBM unveiled new additions to the IBM Cloud Object Storage family. The company is offering clients new choices for archival data and a new pricing model to more easily apply intelligence to unpredictable data patterns using analytics and cognitive tools.

Analytics drive new IBM cloud storage pricing

By now, line of business (LOB) managers, having been exhorted to leverage big data and analytics for years, are listening. More recently, the analytics drumbeat has expanded to include not just big data but sexy IoT, predictive analytics, machine learning, and finally cognitive science. The idea of keeping data around for a few months and parking it in a long term archive to never be looked at again until it is finally deleted permanently just isn’t happening as it was supposed to (if it ever did). The failure to permanently remove expired data can become costly from a storage standpoint as well as risky from an e-discovery standpoint.

IBM puts it this way: Businesses typically have to manage across three types of data workloads: “hot” for data that’s frequently accessed and used; “cool” for data that’s infrequently accessed and used; and “cold” for archival data. Cold storage is often defined as cheaper but slower. For example, if a business uses cold storage, it typically has to wait to retrieve and access that data, limiting the ability to rapidly derive analytical or cognitive insights. As a result, there is a tendency to store data in more expensive hot storage.

IBM’s new cloud storage offering, IBM Cloud Object Storage Flex (Flex), uses a “pay as you use” model of storage tiers, potentially lowering the price by 53 percent compared to AWS S3 IA [1] and 75 percent compared to Azure GRS Cool Tier [2]. (See the footnotes at the bottom of the IBM press release linked to above; note, however, that IBM is not publishing the actual Flex storage prices.) Flex promises simplified pricing for clients whose data usage patterns are difficult to predict, giving organizations the cost savings of cold storage for rarely accessed data while maintaining high accessibility to all data.
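
Since IBM isn’t publishing the Flex rates, the best one can do is sketch the shape of the math with made-up numbers. Every price below is hypothetical, purely to show why unpredictable access patterns make flat tiers awkward:

```python
# Hypothetical per-GB-month rates -- NOT IBM's (unpublished) Flex prices.
HOT, COLD = 0.030, 0.007     # $/GB-month, invented for illustration
RETRIEVAL = 0.01             # $/GB retrieved from cold, also invented

def monthly_cost(gb_stored, gb_retrieved, storage_rate, retrieval_rate=0.0):
    return gb_stored * storage_rate + gb_retrieved * retrieval_rate

gb = 100_000                 # 100 TB with unpredictable access
print(f"all hot:  ${monthly_cost(gb, 0, HOT):,.0f}")                  # $3,000
print(f"all cold: ${monthly_cost(gb, 5_000, COLD, RETRIEVAL):,.0f}")  # $750
# Guess the access pattern wrong and cold retrieval fees climb; a
# pay-as-you-use tier like Flex aims to take that guesswork out.
```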

Of course, you could just lower the cost of storage by permanently removing unneeded data. Simply insist that the data owners specify an expiration date when you set up the storage initially. When the date arrives in 5, 10, or 15 years, automatically delete the data. At least that’s how I was taught eons ago. Of course, storage now costs orders of magnitude less, although storage volumes are orders of magnitude greater, and near real-time analytics weren’t in the picture then.
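
That old-school discipline reduces to a few lines: record an expiration date at ingest, then sweep and purge. A minimal sketch, with invented dataset names:

```python
# Record an expiration date when data is stored; sweep and purge past-due data.
from datetime import date

catalog = [
    {"dataset": "Q3-2002-ARCHIVE", "expires": date(2017, 1, 1)},
    {"dataset": "IOT-SENSOR-FEED", "expires": date(2032, 1, 1)},
]

def sweep(catalog, today):
    keep = [e for e in catalog if e["expires"] > today]
    purge = [e for e in catalog if e["expires"] <= today]
    return keep, purge

keep, purge = sweep(catalog, date(2017, 3, 31))
print("delete permanently:", [e["dataset"] for e in purge])  # Q3-2002-ARCHIVE
```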

Without the actual rates for the different storage tiers, you cannot determine how much Storage Flex may save you. What it will do, however, is make it more convenient to perform analytics on archived data you might otherwise not bother with. Expect this issue to come up increasingly as IoT ramps up and you handle more data that doesn’t need hot storage beyond the first few minutes after it arrives.

Finally, the IBM Cloud Object Storage Cold Vault (Cold Vault) service gives clients access to cold storage data on the IBM Cloud and is intended to lead the category for cold data recovery times among its major competitors. Cold Vault joins its existing Standard and Vault tiers to complete a range of IBM cloud storage tiers that are available with expanded expertise and methods via Bluemix and through the IBM Bluemix Garages.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.
