Posts Tagged ‘Linux’

Get a Next-Gen Datacenter with IBM-Nutanix POWER8 System

July 14, 2017

First announced by IBM on May 16 here, this solution, driven by client demand for a simplified hyperconverged infrastructure (combined server, network, storage, hardware, and software), is designed for data-intensive enterprise workloads. It is aimed at companies increasingly looking for the ease of deployment, use, and management that hyperconverged solutions promise, and it is offered as an integrated hardware and software package to deliver on that expectation.

Music made with IBM servers, storage, and infrastructure

IBM’s new POWER8 hyperconverged solutions enable a public cloud-like experience on premises, combining IBM’s virtualization and automation capabilities with Nutanix’s public and private cloud software. They deliver reliable storage, fast networks, and extremely powerful computing in modular, manageable building blocks that scale simply by adding nodes as needed.

Over time, IBM suggests, a roadmap of offerings will roll out as more configurations are needed to satisfy client demand and as features and functions are added to both the IBM Cognitive Systems portfolio and the Nutanix portfolio. Full integration is key to the value proposition of this offering, so more roadmap options will be delivered as soon as new features arrive and integration testing can be completed.

Here are three immediate workload categories for these systems:

  1. Mission-critical workloads, such as databases, large data warehouses, web infrastructure, and mainstream enterprise apps
  2. Cloud-native workloads, including full-stack open source middleware, enterprise databases, and containers
  3. Next generation cognitive workloads, including big data, machine learning, and AI

Note, however, the change in IBM’s pricing strategy. The products will be priced with the goal of remaining neutral on total cost of acquisition (TCA) relative to comparable offerings on x86. In short, IBM promises to be competitive with comparable x86 systems in terms of TCA. This is a significant deviation from IBM’s traditional pricing, but as we have started to see already and will continue to see, IBM clearly is ready to use pricing flexibility to win deals on products it wants to push.

IBM envisions the new hyperconverged systems to bring data-intensive enterprise workloads like EDB Postgres, MongoDB and WebSphere into a simple-to-manage, on-premises cloud environment. Running these complex workloads on IBM Hyperconverged Nutanix POWER8 system can help an enterprise quickly and easily deploy open source databases and web-serving applications in the data center without the complexity of setting up all of the underlying infrastructure plumbing and wrestling with hardware-software integration.

And maybe more to IBM’s ultimate aim, these operational data stores may become the foundational building blocks enterprises will use to build a data center capable of taking on cognitive workloads. These ever-advancing workloads in advanced analytics, machine learning and AI will require the enterprise to seamlessly tap into data already housed on premises. Soon expect IBM to bring new offerings to market through an entire family of hyperconverged systems that will be designed to simply and easily deploy and scale a cognitive cloud infrastructure environment.

Currently, IBM offers two systems: the IBM CS821 and IBM CS822. These servers are the industry’s first hyperconverged solutions that marry Nutanix’s one-click software simplicity and scalability with the proven performance of the IBM POWER architecture, which is designed specifically for data-intensive workloads. The IBM CS822 (the larger of the two offerings) sports 22 POWER8 processor cores. That’s 176 compute threads, with up to 512 GB of memory and 15.36 TB of flash storage in a compact server that meshes seamlessly with simple Nutanix Prism management.

This server runs Nutanix Acropolis with AHV and little endian Linux. If IBM honors its stated pricing policy promise, the cost should be competitive on the total cost of acquisition for comparable offerings on x86. DancingDinosaur is not a lawyer (to his mother’s disappointment), but it looks like there is considerable wiggle room in this promise. IBM Hyperconverged-Nutanix Systems will be released for general availability in Q3 2017. Specific timelines, models, and supported server configurations will be announced at the time of availability.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Power and z Platforms Show Renewed Excitement

June 30, 2017

Granted, 20 consecutive quarters of posting negative revenue numbers is enough to get even the most diehard mainframe bigot down. If you ran your life like that your house and your car would have been seized by the bank months ago.

Toward the end of June, however, both z and Power had some good news. First,  a week ago IBM announced that corporate enterprise users ranked the IBM z  enterprise servers as the most reliable hardware platform available on the market today. In its enterprise server category the survey also found that IBM Power Systems achieved the highest levels of reliability and uptime when compared with 14 server hardware options and 11 server hardware virtualization platforms.

IBM links two POWER8 processors via NVIDIA NVLink to four NVIDIA Tesla P100 accelerators

The results were compiled and reported by the ITIC 2017 Global Server Hardware and Server OS Reliability survey, which polled 750 organizations worldwide during April/May 2017. Also among the survey findings:

  • IBM z Systems Enterprise mainframe class systems had zero incidents of more than four hours of per server/per annum downtime of any hardware platform. Specifically, IBM z Systems mainframe class servers exhibit true mainframe fault tolerance, experiencing just 0.96 minutes of unplanned per server/per annum downtime. That equates to roughly 5 seconds per month of “blink and you miss it,” or about 1 second of unplanned weekly downtime. This is an improvement over the 1.12 minutes of per server/per annum downtime the z Systems servers recorded in ITIC’s 2016 – 2017 Reliability poll nine months ago.
  • IBM Power Systems had the least unplanned downtime, 2.5 minutes per server/per year, of any mainstream Linux server platform.
  • IBM and the Linux operating system distributions were either first or second in every reliability category, including virtualization and security.

The survey also highlighted market reliability trends. For nearly all companies surveyed, four nines (99.99%) of availability, equating to less than one hour of system downtime per year, was a key factor in their decisions.

Then consider the increasing costs of downtime. Nearly all survey respondents claimed that one hour of downtime costs them more than $150k, with one-third estimating that the same will cost their business up to $400k.

With so much activity going on 24×7, for an increasing number of businesses four nines of availability is no longer sufficient. These businesses are adopting carrier levels of availability: five nines or six nines (99.999 to 99.9999 percent), which translates to roughly 5 minutes (five nines) or 30 seconds (six nines) of downtime per year.
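The arithmetic behind these “nines” figures is simple enough to check yourself. Here is a small Python sketch (the helper name is mine, not from any vendor tool):

```python
def downtime_per_year(nines):
    """Seconds of allowed downtime per year for a given count of nines.

    E.g. 4 nines = 99.99% availability = 0.01% unavailability.
    """
    seconds_per_year = 365.25 * 24 * 3600
    unavailability = 10 ** (-nines)
    return unavailability * seconds_per_year

for n in (4, 5, 6):
    print(f"{n} nines: {downtime_per_year(n):.0f} seconds/year")
```

Four nines works out to about 53 minutes a year, five nines to just over 5 minutes, and six nines to about 32 seconds, matching the figures cited above.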

According to ITIC’s 2016 report: IBM’s z Enterprise mainframe customers reported the least amount of unplanned downtime and the highest percentage of five nines (99.999%) uptime of any server hardware platform.

Just this week, IBM announced that according to results from the International Data Corporation (IDC) Worldwide Quarterly Server Tracker® (June, 2017), IBM exceeded market growth by 3x compared with the total Linux server market, which grew at 6 percent. The improved performance is the result of success across IBM Power Systems, including IBM’s OpenPOWER LC servers and IBM Power Systems running SAP HANA, as well as the OpenPOWER-Ready servers developed through the OpenPOWER Foundation.

As IBM explains it: Power Systems market share growth is underpinned by solutions that handle fast-growing applications, like the deep learning capabilities within the POWER8 architecture. In addition, these systems expand IBM’s Linux server portfolio and have been co-developed with fellow members of the OpenPOWER Foundation.

Now all that’s needed is IBM’s sales and marketing teams to translate this into revenue. Between that and the new systems IBM has been hinting at for the past year maybe the consecutive quarterly losses might come to an end this year.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Puts Open DBaaS on IBM OpenPOWER LC Servers

June 15, 2017

Sometimes IBM seems to be thrashing around looking for anything hot that’s selling, and the various NoSQL databases definitely are hot. The interest is driven by DevOps, cloud, and demand for fast application delivery.

A month or so ago the company took its Power LC server platform to the OpenPOWER Developer Conference in San Francisco, where it pitched Database-as-a-Service (DBaaS) and a price-performance guarantee: OpenPOWER LC servers designed specifically for big data, guaranteed to deliver a 2.0x price-performance advantage over x86 for MongoDB and 1.8x for EDB PostgreSQL 9.5. With organizations seeking any performance advantage, these gains matter.

There are enough caveats that IBM will almost never be called to deliver on the guarantee. So, don’t expect to cash in on this very quickly. As IBM says in the miles of fine print: the company will provide additional performance optimization and tuning services consistent with IBM Best Practices, at no charge.  But the guarantee sounds intriguing. If you try it, please let DancingDinosaur know how it works out.

IBM Power System S822LC for Big Data

BTW, IBM published the starting price for the S822LC for big data as $6,399.00 USD, shipping included. The Linux OS, however, comes at an additional charge.

Surprisingly, IBM is not aiming this primarily at the IBM Cloud. Rather, the company is targeting the private cloud, the on-premises local version. Its Open DBaaS toolkit, according to IBM, provides enterprise clients with a turnkey private cloud solution. It pre-integrates an open source database image library, an OpenStack-based private cloud, and DBaaS software packages with hardware (servers, storage, network switches, rack) and a single source of support, enabling a DBaaS self-service portal through which enterprise developers and LOB users can provision MongoDB, Postgres, and others in minutes. And since it is built on OpenStack, it also supports hybrid cloud integration with IBM Cloud offerings via OpenStack APIs.

In terms of cost it seems remarkably reasonable. It comes in four reference configurations. The Starter configuration is ~$80k (US list price) and includes 3 Power 822LC servers, pair of network switches, rack, DBaaS Toolkit software, and IBM Lab Services. Other configurations include Entry, Cloud Scale, and Performance configurations that have been specified for additional compute, storage, and OpenStack control plane nodes along with high-capacity JBOD storage drawers. To make this even easier, each configuration can be customized to meet user requirements. Organizations also can provide their own racks and/or network switches.

Furthermore, the Power 822LC and Power 821LC form the key building blocks for the compute, storage and OpenStack control plane nodes. As a bonus, however, IBM includes the new 11-core Power 822LC, which provides an additional 10-15% performance boost over the 10-core Power 822LC for the same price.

This is a package deal, at least if you want the best price and fast deployment. “As the need for new applications to be delivered faster than ever increases in a digital world, developers are turning to modern software development models including DevOps, as-a-Service, and self-service to increase the volume, velocity and variety of business applications,” said Terri Virnig, VP, Power Ecosystem and Strategy at IBM, in the announcement. The Open Platform for DBaaS on IBM Power Systems package includes:

  • A self-service portal for end users to deploy their choice of the most popular open source community databases including MongoDB, PostgreSQL, MySQL, MariaDB, Redis, Neo4j and Apache Cassandra deployable in minutes
  • An elastic cloud infrastructure for a highly scalable, automated, economical, and reliable open platform for on-premises, private cloud delivery of DBaaS
  • A disk image builder tool for organizations that want to build and deploy their own custom databases to the database image library
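To make the self-service idea concrete, a provisioning request to such a portal might look roughly like the following sketch. The field names, function, and engine list here are purely illustrative assumptions on my part, not the toolkit’s actual API:

```python
import json

# Engines the announcement says the self-service portal can provision.
SUPPORTED_ENGINES = {"mongodb", "postgresql", "mysql", "mariadb",
                     "redis", "neo4j", "cassandra"}

def build_provision_request(engine, instance_name, storage_gb):
    """Build a hypothetical DBaaS provisioning payload (illustrative only)."""
    if engine not in SUPPORTED_ENGINES:
        raise ValueError(f"unsupported engine: {engine}")
    return json.dumps({
        "engine": engine,
        "name": instance_name,
        "storage_gb": storage_gb,
    })

payload = build_provision_request("mongodb", "orders-db", 100)
```

The point of the portal is exactly this: a developer submits a small request like the one above and gets a running database, rather than setting up servers, storage, and network plumbing by hand.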

An open source, cloud-oriented operations manager with dashboards and tools will help you visualize, control, monitor, and analyze the physical and virtual resources. A turnkey, engineered solution comprised of compute, block and archive storage servers, JBOD disk drawers, OpenStack control plane nodes, and network switches pre-integrated with the open source DBaaS toolkit is available through GitHub here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Syncsort Drives zSystem and Distributed Data Integration

June 8, 2017

IBM appears to be so busy pursuing its strategic imperatives—security, blockchain, quantum computing, and cognitive computing—that it seems to have forgotten the daily activities that make up the bread-and-butter of mainframe data centers. Stepping up to fill the gap have been mainframe ISVs like Compuware, Syncsort, Data Kinetics, and a few others.

IBM’s Project DataWorks taps into unstructured data often missed

IBM hasn’t completely ignored this need. For instance, Project DataWorks uses Watson Analytics and natural language processing to analyze and create complex visualizations. Syncsort, on the other hand, latched onto open Apache technologies, starting in the fall of 2015. Back then it introduced a set of tools to facilitate data integration through Apache Kafka and Apache Spark, two of the most active Big Data open source projects for handling real-time, large-scale data processing, feeds, and analytics.

Syncsort’s primary integration vehicle then revolved around the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark. Intelligent Execution allows users to visually design data transformations once and then run them anywhere across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, whether on premises or in the cloud.

Since then, Syncsort announced in March another big data integration solution: its DMX-h product is now integrated with Cloudera Director, enabling organizations to easily deploy DMX-h along with Cloudera Enterprise on Amazon Web Services, Microsoft Azure, or Google Cloud. By deploying DMX-h with CDH, Syncsort explained, organizations can quickly pull data into new, ready-to-work clusters in the cloud. This accelerates how quickly they can take advantage of big data cloud benefits, including cost savings and Data-as-a-Service (DaaS) delivery.

A month before that, this past February, Syncsort introduced new enhancements in its Big Data integration solution by again deploying DMX-h to deliver integrated workflow capabilities and Spark 2.0 integration, which simplifies Hadoop and Spark application development, effectively enabling mainframe data centers to extract maximum value from their data assets.

These capabilities let data centers tap value from their enterprise data assets regardless of where the data resides, whether on the mainframe, in distributed systems, or in the cloud.

Syncsort’s new integrated workflow capability also gives organizations a simpler, more flexible way to create and manage their data pipelines. This is done through the company’s design-once, deploy-anywhere architecture with support for Apache Spark 2.0, which makes it easy for organizations to take advantage of the benefits of Spark 2.0 and integrated workflow without spending time and resources redeveloping their jobs.

Assembling such an end-to-end data pipeline can be time-consuming and complicated, with various workloads executed on multiple platforms, all of which need to be orchestrated and kept up to date. Delays in such complicated development, however, can prevent organizations from getting the timely insights they need for effective decision-making.

Enter Syncsort’s Integrated Workflow, which helps organizations manage various workloads, such as batch ETL on large repositories of historical data. This can be done by referencing business rules during data ingest in a single workflow, in effect simplifying and speeding development of the entire data pipeline, from accessing critical enterprise data, to transforming that data, and ultimately analyzing it for business insights.

Finally, in October 2016 Syncsort announced new capabilities in its Ironstream software that allow organizations to access and integrate mainframe log data in real time with Splunk IT Service Intelligence (ITSI). Further, the integration of Ironstream and Compuware’s Application Audit software delivers the audit data to Splunk Enterprise Security (ES) for Security Information and Event Management (SIEM). This integration improves an organization’s ability to detect threats against critical mainframe data, correlate them with related information and events, and satisfy compliance requirements.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Demonstrates Quantum Computing Advantage

May 12, 2017

In an announcement last week, IBM reported that scientists from IBM Research and Raytheon BBN have demonstrated one of the first proven examples of a quantum computer’s advantage over a conventional computer. By probing a black box containing an unknown string of bits, they showed that just a few superconducting qubits can discover the hidden string faster and more efficiently than today’s computers. Their research was published in a paper titled “Demonstration of quantum advantage in machine learning” on nature.com.

With IBM’s current 5 qubit processor, the quantum algorithm consistently identified the sequence in up to 100x fewer computational steps and was more tolerant of noise than the conventional (non-quantum) algorithm. This is much larger than any previous head-to-head comparison between quantum and conventional processors.

Courtesy: IBM Research

The graphic above defines 3 types of quantum computers. At the top is the quantum annealer, described as the least powerful and most restrictive.  In the middle sits analog quantum, 50-100 qubits, a device able to simulate complex quantum interactions. This will probably be IBM’s next quantum machine; currently IBM offers a 5 qubit device. At the bottom sits the universal quantum. IBM suggests this will scale to over 100,000 qubits and be capable of handling machine learning, quantum chemistry, optimization problems, secure computing, and more. It will be exponentially faster than traditional computers and be able to handle just about all the things even the most powerful conventional supercomputers cannot do now.

The most powerful z System, regardless of how many cores or accelerators or memory or bandwidth, remains a traditional, conventional computer. It deals with problems as a series of basic bits, sequences of 0 or 1. That it runs through these sequences astoundingly fast fools us into thinking that there is something beyond the same old digital computing we have known for the last 50 years or more.

Digital computers see the world and the problems you are trying to solve as sequences of 0 and 1. That’s it; there is nothing in between. They store numbers as sequences of 0 and 1 in memory, and they process stored numbers using only the simplest mathematical operations, add and subtract. As a college student DancingDinosaur was given the most powerful TI programmable calculator then available and, with a few buddies, tried to come up with things it couldn’t do. No matter how many beer-inspired tries, we never found something it couldn’t handle. The TI was just another digital device.

Quantum computers can digest 0 and 1 but have a broader array of tricks. For example, contradictory things can exist concurrently. Quantum geeks often cite a riddle dubbed Schrödinger’s cat. In this riddle the cat can be alive and dead at the same time because quantum systems can handle multiple, contradictory states. If we had known of Schrödinger’s cat, my buddies and I might have stumped that TI calculator.

In an article on supercomputing at Explain That Stuff, Chris Woodford shows the thinking behind Schrödinger’s cat, called superposition. This is where two waves, representing a live cat and a dead one, combine to make a third that contains both cats, or maybe hundreds of cats. The wave inside the pipe contains all these waves simultaneously; they’re added to make a combined wave that includes them all. Qubits use superposition to represent multiple states (multiple numeric values) simultaneously.
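That idea can be stated concretely: a qubit’s state is a pair of amplitudes, one for 0 and one for 1, and both coexist until measurement picks an outcome. A minimal Python sketch of that bookkeeping (illustrative only, not a real quantum simulator):

```python
import math
import random

# An equal superposition: amplitudes for |0> and |1> held at once.
alpha = beta = 1 / math.sqrt(2)

# Amplitudes must normalize: the outcome probabilities sum to 1.
assert abs(alpha**2 + beta**2 - 1) < 1e-9

def measure(alpha, rng=random.random):
    """Collapse the state: returns 0 with probability alpha^2, else 1."""
    return 0 if rng() < alpha**2 else 1

# Until measured, the qubit "contains" both answers; each measurement
# of a fresh copy yields 0 or 1, here with equal probability.
sample = [measure(alpha) for _ in range(10)]
```

A conventional bit is the degenerate case where one amplitude is 1 and the other 0; superposition is everything in between.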


In effect, the IBM-Raytheon team programmed a black box such that, with the push of a button, it produces a string of bits with a hidden pattern (such as 0010) for both a conventional computation and a quantum computation. The conventional computer examines the bits one by one. Each result gives a little information about the hidden string. By forcing the conventional computer to query the black box many times, it can determine the full answer.

The quantum computer employs a quantum algorithm that extracts the information hidden in the quantum phase — information to which a conventional algorithm is completely blind. The bits are then measured as usual and, in about half the time, the hidden string can be fully revealed.
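The classical half of this game is easy to sketch: for an n-bit hidden string, a conventional computer needs on the order of n queries, one probe per bit position. Here is a hedged Python sketch of that classical side (the parity oracle is my stand-in for the experiment’s black box, not IBM’s actual implementation):

```python
def make_oracle(secret):
    """Black box: returns the parity of the bits shared by query and secret."""
    def oracle(query):
        return bin(query & secret).count("1") % 2
    return oracle

def classical_recover(oracle, n_bits):
    # One query per bit position: probe with 0001, 0010, 0100, 1000, ...
    secret = 0
    for i in range(n_bits):
        if oracle(1 << i):
            secret |= 1 << i
    return secret

hidden = 0b0010                           # the example pattern above
oracle = make_oracle(hidden)
recovered = classical_recover(oracle, 4)  # takes 4 queries classically
```

The quantum algorithm, by contrast, effectively probes a superposition of all queries at once and reads the answer out of the quantum phase, which is where the reported reduction in computational steps comes from.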

Most z data centers can’t use quantum capabilities for their daily work, at least not yet. As Woodford noted: It’s very early for the whole field—and most researchers agree that we’re unlikely to see practical quantum computers appearing for many years—perhaps even decades. Don’t bet on it; at the rate IBM is driving this, you’ll probably see useful things much sooner. Maybe tomorrow.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Shows Off POWER and NVIDIA GPU Setting High Performance Record 

May 4, 2017

The record achievement used 60 Power processors and 120 GPU accelerators to shatter the previous supercomputer record, which used over 700,000 processors. The results point to how dramatically the capabilities of high performance computing (HPC) have increased while the cost of HPC systems has declined. Or put another way: the effort demonstrates the ability of NVIDIA GPUs to simulate one-billion-cell models in a fraction of the time, while delivering 10x the performance and efficiency.

Courtesy of IBM: Takes a lot of processing to take you into a tornado

In short, the combined success of IBM and NVIDIA puts the power of cognitive computing within the reach of mainstream enterprise data centers. Specifically, the project performed reservoir modeling to predict the flow of oil, water, and natural gas in the subsurface of the earth before attempting to extract the maximum oil in the most efficient way. The effort, in this case, involved a billion-cell simulation, which took just 92 minutes using 30 HPC servers equipped with 60 POWER processors and 120 NVIDIA Tesla P100 GPU accelerators.

“This calculation is a very salient demonstration of the computational capability and density of solution that GPUs offer. That speed lets reservoir engineers run more models and ‘what-if’ scenarios than previously,” according to Vincent Natoli, President of Stone Ridge Technology, as quoted in the IBM announcement. “By increasing compute performance and efficiency by more than an order of magnitude, we’re democratizing HPC for the reservoir simulation community,” he added.

“The milestone calculation illuminates the advantages of the IBM POWER architecture for data-intensive and cognitive workloads,” said Sumit Gupta, IBM Vice President, High Performance Computing, AI & Analytics, in the IBM announcement. “By running Stone Ridge’s ECHELON on IBM Power Systems, users can achieve faster run-times using a fraction of the hardware,” Gupta continued. “The previous record used more than 700,000 processors in a supercomputer installation that occupies nearly half a football field, while Stone Ridge did this calculation on two racks of IBM Power Systems that could fit in the space of half a ping-pong table.”

This latest advance challenges perceived misconceptions that GPUs could not be efficient on complex application codes like reservoir simulation and are better suited to simple, more naturally parallel applications such as seismic imaging. The scale, speed, and efficiency of the reported result disprove this misconception. The milestone calculation with a relatively small server infrastructure enables small and medium-size oil and energy companies to take advantage of computer-based reservoir modeling and optimize production from their asset portfolio.

Billion cell simulations in the industry are rare in practice, but the calculation was accomplished to highlight the performance differences between new fully GPU-based codes like the ECHELON reservoir simulator and equivalent legacy CPU codes. ECHELON scales from the cluster to the workstation and while it can simulate a billion cells on 30 servers, it can also run smaller models on a single server or even on a single NVIDIA P100 board in a desktop workstation, the latter two use cases being more in the sweet spot for the energy industry, according to IBM.

As importantly, the company notes, this latest breakthrough showcases the ability of IBM Power Systems with NVIDIA GPUs to achieve similar performance leaps in other fields such as computational fluid dynamics, structural mechanics, climate modeling, and others that are widely used throughout the manufacturing and scientific community. By taking advantage of POWER and GPUs organizations can literally do more with less, which often is an executive’s impossible demand.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

No letup by IBM on Blockchain

April 27, 2017

IBM continues to push blockchain. Its latest announcement on the subject, Three Blockchain Adoption Principles Essential for Every CEO, came early this week. The basic pitch: in certain market segments blockchain could potentially help save billions of dollars annually and significantly reduce delays and spoilage. Citing the World Economic Forum, the company adds: “reducing barriers within the international supply chain could increase worldwide GDP by almost five percent and total trade volume by 15 percent.”  That should be sweet music to any C-suite exec.

Blockchain enables transparent food chain

In a related announcement also this week, IBM Japan, Mizuho Financial Group, and Mizuho Bank are building a blockchain-based platform for trade financing. With the platform, Mizuho aims to streamline trading operations and improve supply chain efficiency. The timely and highly secure exchange of trade documents turns out to be essential for stakeholders in the supply chain ecosystem. Digitizing trade information on a blockchain can help alter the way information is shared, infusing greater trust into transactions and making it easier for parties in the supply chain, including exporters, importers, shippers, insurance companies, port operators, and port authorities, to share critical shipment data in near real time.

IBM is emerging as a leader in secure open-source blockchain solutions built for the enterprise. An early member of the Linux Foundation’s Hyperledger Project, the company has worked with more than 400 clients across multiple business segments to implement blockchain applications delivered via the IBM Cloud.

DancingDinosaur has its own 3 reasons enterprise data center execs should be excited by blockchain. They are different and more z-centric than IBM’s. First, you probably already have a z System, and the z’s legendary security, availability, and scalability make it a natural for blockchain. Second, the z already comes optimized to handle transactions and most of your transaction data already lives on the z, making it very efficient from a processing standpoint.  Third, until or unless your blockchain grows to some enormous size, it will barely consume any system resources or generate overhead. In that sense, blockchain on your z comes virtually free.

The following blockchain principles are based on IBM’s customer experience:

  1. Blockchain has the potential to transform trade, transactions and business processes: The two concepts underpinning blockchain are “business network” and “ledger.” Taken together, these are what make blockchain a smart, tamper-resistant way to conduct trade, transactions and business processes. Network members exchange assets through a ledger that all members can access and share. The ledger syncs across the network with all members needing to confirm a transaction of tangible or intangible assets before it is approved and stored on the blockchain. This shared view helps establish legitimacy and transparency, even when parties are not familiar with one another.
  2. The value, it turns out, resides in the ecosystem as the blockchain network grows: This should be no surprise to an exec who saw the growth, first of LANs and WANs, and later the Internet and Web. So too, as a business network blockchain can include several different types of participants. Depending on the number of participants in a blockchain network, the value of assets being exchanged, and the need to authorize members with varying credentials adopters should observe the difference between “permissioned” and “permission-less” blockchain networks. The real value for blockchain is achieved when these business networks grow. With a strong ecosystem, blockchain networks can more easily reach critical mass, allowing the users to build new business models and reinvent and automate transaction processes.
  3. Blockchain can significantly improve visibility and trust across business: Blockchains can reduce transaction settlement times from days or weeks to seconds by providing immediate visibility to all participants. The technology can also be used to cut excess costs by removing intermediary third parties, those typically required to verify transactions. Because blockchain is built on the concept of trust, it can help reduce risks of illicit practices carried out over payment networks, helping to mitigate fraud and cybercrimes. Speed, cost efficiency, and transparency are among blockchain’s most significant benefits in the enterprise and within ecosystems of companies conducting trade. IBM, Walmart and Tsinghua University, for example, are exploring the use of blockchain to help address the challenges surrounding food safety [see graphic above]. By allowing members within the supply chain to see the same records, blockchain helps narrow down the source of a contamination.
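The tamper resistance described in principle 1 comes from chaining cryptographic hashes: each ledger entry’s hash covers the previous entry’s hash, so altering any past record breaks every link after it. A minimal sketch of the idea (this illustrates the concept only; it is not Hyperledger’s implementation):

```python
import hashlib
import json

def entry_hash(tx, prev_hash):
    """Hash a ledger entry together with its predecessor's hash."""
    data = json.dumps({"tx": tx, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def append(chain, tx):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"tx": tx, "prev": prev, "hash": entry_hash(tx, prev)})
    return chain

def verify(chain):
    # Any network member can re-derive every hash; tampering breaks a link.
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != entry_hash(entry["tx"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"from": "exporter", "to": "shipper", "asset": "container-42"})
append(ledger, {"from": "shipper", "to": "importer", "asset": "container-42"})
ok_before = verify(ledger)                 # intact chain verifies
ledger[0]["tx"]["asset"] = "container-99"  # tamper with history
ok_after = verify(ledger)                  # verification now fails
```

A real permissioned network adds consensus among members before an entry is appended, but the shared, re-checkable ledger above is the core of why participants who don’t know each other can still trust the record.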

“Critical success factors in blockchain engagements require top-down executive support for innovative use cases and bringing key network participants into the dialogue from the start,” according to Marie Wieck, general manager, IBM Blockchain.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Gets Serious About Open Data Science (ODS) with Anaconda

April 21, 2017

As IBM rapidly ramps up cognitive systems in various forms, its two remaining platforms, z System and POWER, get more and more interesting. This week IBM announced it was bringing the Anaconda Open Data Science (ODS) platform to its Cognitive Systems and PowerAI.

Anaconda, Courtesy Pinterest

Specifically, Anaconda will integrate with the PowerAI software distribution for machine learning (ML) and deep learning (DL). The goal: make it simple and fast to take advantage of Power performance and GPU optimization for data-intensive cognitive workloads.
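For a sense of what “simple and fast” might mean in practice, a conda-based setup could look like the following. This is a hypothetical sketch: the channel name, metapackage name, and Python version are my assumptions for illustration, not IBM's published commands.

```shell
# Hypothetical setup sketch -- channel and package names are assumed, not official.
conda create -y -n powerai python=3.6    # create an isolated Anaconda environment
source activate powerai
conda install -y -c ibm powerai-frameworks   # assumed IBM channel and metapackage
python -c "import tensorflow"                # verify a bundled framework loads
```

The appeal of the conda model is that the GPU-tuned builds arrive as prebuilt packages, so developers avoid compiling frameworks for the Power architecture themselves.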

“Anaconda on IBM Cognitive Systems empowers developers and data scientists to build and deploy deep learning applications that are ready to scale,” said Bob Picciano, senior vice president of IBM Cognitive Systems. Added Travis Oliphant, co-founder and chief data scientist, Continuum Analytics, which introduced the Anaconda platform: “By optimizing Anaconda on Power, developers will also gain access to the libraries in the PowerAI Platform for exploration and deployment in Anaconda Enterprise.”

With more than 16 million downloads to date, Anaconda has emerged as the Open Data Science platform leader. It is empowering leading businesses across industries worldwide with tools to identify patterns in data, uncover key insights, and transform basic data into the intelligence required to solve the world’s most challenging problems.

As one of the fastest growing fields of AI, DL makes it possible to process enormous datasets with millions or even billions of elements and extract useful predictive models. DL is transforming the businesses of leading consumer Web and mobile application companies, and it is catching on with more traditional businesses.

IBM developed PowerAI to accelerate enterprise adoption of open-source ML and DL frameworks used to build cognitive applications. PowerAI promises to reduce the complexity and risk of deploying these open source frameworks for enterprises on the Power architecture and is tuned for high performance, according to IBM. With PowerAI, organizations also can realize the benefit of enterprise support on IBM Cognitive Systems HPC platforms used in the most demanding commercial, academic, and hyperscale environments.

For POWER shops, getting into Anaconda, which is based on Python, is straightforward. You need a POWER8 server with NVIDIA GPU hardware, in effect a Minsky machine. It’s essentially a developer’s tool, although ODS proponents see it more broadly: bridging the gap between traditional IT and lines of business, shifting traditional roles, and creating new roles. In short, they envision scientists, mathematicians, engineers, business people, and more getting involved in ODS.
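To give non-developers among those ODS roles a taste of the Python-centric workflow, here is a minimal, framework-free sketch of extracting a predictive model from data. The numbers are made up for illustration; in practice you would reach for the NumPy/scikit-learn-style libraries Anaconda bundles rather than hand-rolled code.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, the simplest 'predictive model'."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: GPU training hours vs. model accuracy.
hours = [1, 2, 3, 4, 5]
accuracy = [0.62, 0.71, 0.78, 0.86, 0.93]
slope, intercept = fit_line(hours, accuracy)
```

The same pattern-in-the-data idea scales from this two-variable toy up to the billion-element datasets DL frameworks handle on GPU-accelerated Power hardware.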

The technology is designed to run on the user’s desktop but is packaged and priced as a cloud subscription with a base package of 20 users. User licenses range from $500 per year to $30,000 per year depending on which bells and whistles you include. The number of options is pretty extensive.

According to IBM, this started with PowerAI to accelerate enterprise adoption of open-source ML/DL frameworks used to build cognitive applications. Overall, the open Anaconda platform brings capabilities for large-scale data processing, predictive analytics, and scientific computing to simplify package management and deployment. Developers using open source ML/DL components can use Power as the deployment platform and take advantage of Power optimization and GPU differentiation with NVIDIA.

Not to be left out, IBM noted growing support for the OpenPOWER Foundation, which recently announced the OpenPOWER Machine Learning Work Group (OPMLWG). The new OPMLWG includes members like Google, NVIDIA, and Mellanox to provide a forum for collaboration that will help define frameworks for the productive development and deployment of ML solutions using OpenPOWER ecosystem technology. The foundation has also surpassed 300 members, with new participants such as Kinetica, Red Hat, and Toshiba. For traditional enterprise data centers, the future increasingly is pointing toward cognitive in one form or another.


IBM Changes the Economics of Cloud Storage

March 31, 2017

Storage tiering used to be simple: active data went to your best high performance storage, inactive data went to low cost archival storage, and cloud storage filled in for whatever else was needed. Unfortunately, today’s emphasis on continuous data analytics, near real-time predictive analytics, and now cognitive has complicated this picture and the corresponding economics of storage.

In response, last week IBM unveiled new additions to the IBM Cloud Object Storage family. The company is offering clients new choices for archival data and a new pricing model to more easily apply intelligence to unpredictable data patterns using analytics and cognitive tools.

Analytics drive new IBM cloud storage pricing

By now, line of business (LOB) managers, having been exhorted to leverage big data and analytics for years, are listening. More recently, the analytics drumbeat has expanded to include not just big data but sexy IoT, predictive analytics, machine learning, and finally cognitive science. The old idea of keeping data around for a few months, then parking it in a long-term archive never to be looked at again until it is finally deleted permanently, just isn’t happening as it was supposed to (if it ever did). The failure to permanently remove expired data can become costly from a storage standpoint as well as risky from an e-discovery standpoint.

IBM puts it this way: Businesses typically have to manage across three types of data workloads: “hot” for data that’s frequently accessed and used; “cool” for data that’s infrequently accessed and used; and “cold” for archival data. Cold storage is often defined as cheaper but slower. For example, if a business uses cold storage, it typically has to wait to retrieve and access that data, limiting the ability to rapidly derive analytical or cognitive insights. As a result, there is a tendency to store data in more expensive hot storage.
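The hot/cool/cold split IBM describes can be illustrated with a simple classifier over last-access age. The thresholds below are arbitrary placeholders of my own, not IBM's; a real tiering decision would also weigh retrieval latency and per-request cost.

```python
def storage_tier(days_since_last_access):
    """Map access recency to the three workload buckets (thresholds illustrative)."""
    if days_since_last_access <= 30:
        return "hot"    # frequently accessed and used
    if days_since_last_access <= 180:
        return "cool"   # infrequently accessed and used
    return "cold"       # archival

# Hypothetical objects and days since each was last read.
objects = {"web-logs": 2, "q3-report": 95, "2009-backup": 2900}
tiers = {name: storage_tier(age) for name, age in objects.items()}
```

The tendency IBM flags, parking everything in hot storage, amounts to ignoring this classification because cold retrieval is too slow for analytics.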

IBM’s new cloud storage offering, IBM Cloud Object Storage Flex (Flex), uses a “pay as you use” model of storage tiers, potentially lowering the price by 53 percent compared to AWS S3 IA and by 75 percent compared to Azure GRS Cool Tier. (See the footnotes at the bottom of the IBM press release linked above; note, however, that IBM is not publishing the actual Flex storage prices.) Flex promises simplified pricing for clients whose data usage patterns are difficult to predict, delivering the cost savings of cold storage for rarely accessed data while maintaining high accessibility to all data.

Of course, you could just lower the cost of storage by permanently removing unneeded data. Simply insist that data owners specify an expiration date when the storage is set up initially; when the date arrives, in 5, 10, or 15 years, automatically delete the data. At least that’s how I was taught eons ago. Of course, storage now costs orders of magnitude less, although storage volumes are orders of magnitude greater and near real-time analytics weren’t in the picture back then.
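That old-school discipline is easy to express in code. Here is a minimal sketch assuming each object carries an owner-specified expiration date; the function and field names are mine, not any vendor's API.

```python
from datetime import date

def purge_expired(catalog, today):
    """Permanently remove every object whose owner-set expiration date has passed.

    `catalog` maps object name -> expiration date. Returns the surviving catalog
    and the list of names that were deleted.
    """
    deleted = [name for name, expires in catalog.items() if expires <= today]
    surviving = {name: expires for name, expires in catalog.items()
                 if expires > today}
    return surviving, deleted

# Hypothetical archive with owner-specified expiration dates.
catalog = {
    "claims-2007": date(2017, 1, 1),
    "claims-2015": date(2025, 1, 1),
}
surviving, deleted = purge_expired(catalog, date(2017, 3, 31))
```

Run on a schedule, a policy like this caps both the storage bill and the e-discovery exposure that expired data creates.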

Without the actual rates for the different storage tiers you cannot determine how much Flex may save you. What it will do, however, is make it more convenient to perform analytics on archived data you might otherwise not bother with. Expect this issue to come up increasingly as IoT ramps up and you are handling more data that doesn’t need hot storage beyond the first few minutes after its arrival.

Finally, the IBM Cloud Object Storage Cold Vault (Cold Vault) service gives clients access to cold storage data on the IBM Cloud and is intended to lead the category in cold data recovery times among its major competitors. Cold Vault joins the existing Standard and Vault tiers to complete a range of IBM cloud storage tiers, available with expanded expertise and methods via Bluemix and the IBM Bluemix Garages.


