Posts Tagged ‘blockchain’

Arcati 2017 Mainframe Survey—Cognitive a No-Show

February 2, 2017

DancingDinosaur checks into Arcati’s annual mainframe survey every few years. You can access a copy of the 2017 report here.  Some of the data doesn’t change much, a few percentage points here or there. For example, 75% of the respondents consider the mainframe too expensive. OK, people have been saying that for years.

On the other hand, 65% of the respondents’ mainframes are involved with web services. Half also run Java-based mainframe apps, up from 30% last year, while another 17% plan to run Java on their mainframe this year. Similarly, 35% of respondents report running Linux on the mainframe, up from 22% last year, and 13% expect to add Linux this year. Driving this are the cost and management benefits of consolidating distributed Linux workloads on the z. Yes, things are changing.


The biggest surprise for DancingDinosaur, however, revolved around IBM’s latest strategic initiatives, especially cognitive computing and blockchain. Other strategic initiatives, depending on who is briefing you at the moment, may include security, data analytics, cloud, hybrid cloud, and mobile. These strategic imperatives, especially cognitive computing, are expected to drive IBM’s revenue. In the latest statement, reported last week in DancingDinosaur, strategic imperatives amounted to 41% of revenue. Cloud revenue and cloud-as-a-service also rose considerably, 35% and 61% respectively.

When DancingDinosaur searched the accompanying Arcati vendor report (over 120 vendors with brief descriptions) for cognitive, only GT Software came up. IBM didn’t even mention cognitive in its vendor listing, which admittedly was skimpy. The case was the same with blockchain: only one vendor, Atos, mentioned it, and the IBM listing said nothing about blockchain either. More vendors, however, noted supporting one or some of the other supposed strategic initiatives.

Overall, the Arcati survey is quite positive about the mainframe. The survey found that 50 percent of sites viewed their mainframe as a legacy system (down from last year’s 62 percent). However, 22 percent (up from 16 percent last year) viewed the mainframe as strategic, with 28 percent (up from 22 percent) viewing mainframes as both strategic and legacy.

Reinforcing the value of the mainframe, the survey found 78 percent of sites experienced some kind of increase in capacity. With increased demand for mainframe resources (data and processing), it should not be surprising that 81 percent of respondents report an increase in technology costs. Yet 38 percent of sites report their people costs have decreased or stayed the same.

Unfortunately, the survey also found that 70 percent of respondents thought there was a cultural barrier between mainframe and other IT professionals. That did not discourage respondents from pointing out the mainframe advantages: 100 percent highlighted the benefit of the mainframe’s availability, 83 percent highlighted security, 75 percent identified scalability, and 71 percent picked manageability as a mainframe benefit.

Social media also intersects with the mainframe. Respondents found social media (Facebook, Twitter, YouTube) useful for their work on the mainframe. Twenty-seven percent report using social media (up slightly from 25 percent last year), with the rest not using it at all, despite IBM offering Facebook pages dedicated to IMS, CICS, and DB2. DancingDinosaur, only an occasional FB visitor, will check it out and report.

In terms of how mainframes are being used, the Arcati survey found that 25 percent of sites are planning to use big data; five percent have adopted DevOps, while 48 percent are planning to use mainframe DevOps going forward. Similarly, 14 percent of respondents already are reusing APIs while another 41 percent are planning to.

Arcati points out another interesting thought: the survey showed a 55:45 percent split in workloads in favor of distributed systems, so you might expect the spend on the two types of platform to be similar. Yet the survey found that 87 percent of an organization’s IT spend was going to distributed systems! Apparently mainframes aren’t as expensive as people think. Or, to put it another way, the cost of owning and operating distributed systems with mainframe-caliber QoS amounts to a lot more than people are admitting.
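To see just how lopsided that is, run the survey’s two ratios through a quick back-of-envelope calculation. This is a rough Python sketch; the per-unit cost figures are illustrative, derived from nothing but those two percentages:

```python
# Back-of-envelope sketch using only the two ratios Arcati reports:
# a 55:45 workload split favoring distributed systems, but an 87:13
# split in IT spend. The per-unit figures below are illustrative.

distributed_work, mainframe_work = 0.55, 0.45    # share of workload
distributed_spend, mainframe_spend = 0.87, 0.13  # share of IT spend

distributed_cost = distributed_spend / distributed_work  # ~1.58
mainframe_cost = mainframe_spend / mainframe_work        # ~0.29

print(f"Distributed spend per unit of work: {distributed_cost:.2f}")
print(f"Mainframe spend per unit of work:   {mainframe_cost:.2f}")
print(f"Distributed costs {distributed_cost / mainframe_cost:.1f}x more per unit of work")
```

By that crude measure, distributed systems soak up roughly five and a half times the spend per unit of work.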

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC Mainframe Survey Confirms z System Is Here to Stay

November 11, 2016

No surprise there. BMC’s 11th annual mainframe survey, covering 1,200 mainframe executives and tech professionals, found 58% of respondents reporting that usage of the mainframe is increasing as they look to capitalize on every infrastructure advantage it provides and add more workloads. Another 23% consider the mainframe the best option for running critical work.


IBM z10

Driving the continuing interest in the mainframe are the new demands for data handling, scalable processing, analytics, and more. According to the BMC survey nearly 60% of companies are seeing increased data and transaction volumes. They opt to stay with the mainframe for its highly secure, superior data handling and transaction serving, particularly as digital business adds unpredictability and volatility to workloads.

Overall, respondents fell into three primary groups: 1) entrenched mainframe shops (58%) that are on board for the long haul; 2) shops (23%) that intend to maintain a steady amount of work on the mainframe; and 3) shops (19%) that are moving away from the mainframe. The first two groups, the committed mainframe shops, amount to just over 80% of the respondents.

Many companies surveyed are focused on addressing the increased workload demands, especially the rapidly growing demand for new applications. But surprisingly, the survey does not directly touch on hybrid cloud, cognitive computing, or any of the latest technologies IBM has been promoting, not even DevOps, which can streamline mainframe application development and deployment. “We are not hearing much about hybrid cloud environments or blockchain yet. Most companies seem to be in the early tire-kicking stage,” observed John McKenny, BMC Vice President, Strategy and Operations.

Eighty-eight percent of companies in the first group, the entrenched mainframe shops, for example, are looking to increase the Java workloads they run on the mainframe, primarily to address new application demands. It doesn’t hurt that Java on the mainframe also can help lower data center costs by directing workloads to lower cost assist processors.

Other interesting BMC survey findings:

  • Half of the respondents report keeping 50% of their data on the mainframe and continue to invest in the platform for reasons you already know—security, availability, data serving capability
  • Continued steady growth of Linux in production on the z: 41% in 2014, 48% in 2015, 52% in 2016
  • Increased use of Java on the mainframe, with 67% of respondents citing the need to meet growing application demand

Those looking to reduce mainframe presence cited three reasons: 1) perception of high cost, 2) outdated management understanding, and 3) looking for ways to reduce workloads over time.  DancingDinosaur has spoken with mainframe shops intending to migrate off the z and they cite the usual reasons, especially #1 above.

Top mainframe priorities for 2016 according to the BMC survey: cost reduction/optimization (65%); data privacy, compliance, security (50%); application availability (49%); application modernization (41%). Responses indicated the priorities for next year haven’t changed at all.

Surprisingly, many of the latest technologies for the z that IBM has touted recently have not yet shown up in the BMC survey responses, except maybe Java and Linux. This would include hybrid clouds, blockchain, IoT, and cognitive computing. IDC, for example, already is projecting cognitive computing to grow at a CAGR of 55.1% from 2016 to 2020. For z shops, however, cognitive computing appears almost invisible.

In some cases with surveys like this you need to read between the lines. Where respondents report changes in activity levels driving application growth, growing interest in Java, more frequent application changes, or references to operational analytics, they’re making oblique references to mobile, big data, or even cognitive computing and other recent technologies for the z.

At its best, the BMC survey notes that digital technologies are transforming the ways in which mainframe shops conduct business and interact with their customers. Adds BMC mainframe customer Credit Suisse: “IT departments are moving toward centralized, virtualized, and highly automated environments. This is being pursued to drive cost and processing efficiencies. Many companies realize that the Mainframe has provided these benefits for many years and is a mature and stable environment,” said Frank Cortell, Credit Suisse Director of Information Technology.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

 

IBM 3Q16 Results Telegraph a New z System in 2017

October 27, 2016

DancingDinosaur usually doesn’t like to read too much into the statements of IBM suits at financial briefings. This has been especially true since IBM introduced a new presentation format this year to downplay its platform business and emphasize its strategic imperatives. (Disclaimer: DancingDinosaur is NOT a financial analyst but a technology analyst.)

But this quarter the CFO said flat out: “Our z Systems results reflect a product cycle dynamic, seven quarters into the z13 cycle; revenue was down while margins continue to expand. We continue to add new clients to the platform and we are introducing new technologies like block chain. We announced new services to make it easier to build and test block chain networks in a secure environment as we build our block chain platform it’s been engineered to run on multiple platforms but is optimized for scale, security and resilience on both the IBM mainframe and the IBM cloud.”

LinuxONE Emperor

If you parse the first sentence (“reflect a product cycle dynamic”), he is not too subtly hinting that IBM needs a z System refresh if it wants to stop the financial losses with z. You don’t have to be a genius to expect a new z, probably the z14, in 2017. Pictured above is the LinuxONE Emperor, a z optimized to run Linux. The same suit said, “We’ve been shifting our platform to address Linux, and in the third quarter Linux grew at a double digit rate, faster than the market.” So based on that we can probably guess that the z14 (or whatever it will be called) will run z/OS, followed shortly by a LinuxONE version to further expand the z System’s Linux footprint.

Timothy Prickett Morgan picked that up too and more. He expects a z14 processor complex will be announced next year around the same time that the Power9 chip ships. In both cases, Power and z customers who can wait will wait, or, if they are smart, will demand very steep discounts on current Power8 hardware to make up for the price/performance improvements that are sure to accompany the upcoming Power9 and z machines.

When it comes to revenue, 3Q16 was at best flat, but actually was down again overall. The bright spot again was IBM’s strategic imperatives. As the suit stated: “In total, we continue to deliver double-digit revenue growth in our strategic imperatives, led by our cloud business.” Specifically, cognitive solutions were up 5% and, within that, solution software was up 8%.

Overall, growth in IBM’s strategic imperatives rose 15%. Over the last 12 months, strategic imperatives delivered nearly $32 billion in revenue and now represent 40% of IBM. The suit also emphasized strong performance in IBM’s cloud offerings which increased over 40%, led by the company’s as-a-service offerings. IBM ended the third quarter with an as-a-service run rate of $7.5 billion, up from $6.7 billion last quarter. Most of that was attributed to organic growth, not acquisitions. Also strong was IBM’s revenue performance in security and mobile. In addition, the company experienced growth in its analytic offerings, up 14% this quarter with contributions from the core analytics platform, especially the Watson platform, Watson Health, and Watson IoT.

IBM apparently is convinced that cognitive computing, defined as using data and adding intelligence into products and services to help companies make better decisions, is the wave of the future. As the company sees it, real value lies in providing cognitive capabilities via the IBM cloud. A critical element of its strategy is IBM’s industry focus. Initially industry platforms will address two substantial opportunity areas, financial services and block chain solutions. You can probably add healthcare too.

Blockchain may emerge as the sleeper, although DancingDinosaur has long been convinced that blockchain is ideal for z shops—the z already handles the transactions and delivers the reliability, scalability, availability, and security to do it right. As IBM puts it, “we believe block chain has the potential to do for trusted transactions what the Internet did for information.” Specifically, IBM is building a complete block chain platform and is now working with over 300 clients to pioneer block chain for business. These include CLS, which settles $5 trillion per day in the currency markets and is implementing a distributed ledger in support of its payment netting service, and Bank of Tokyo-Mitsubishi, which is using smart contracts to manage service level agreements and automate multi-party transactions.

Says Morgan: “IBM is very enthusiastic about using Blockchain in commercial transaction processing settings, and has 40 clients testing it out on mainframes, but this workload will take a long time to grow. Presumably, IBM will also push Blockchain on Power as well.”  Morgan may be right about blockchain coming to Power, but it is a natural for the z right now, whether as a new z14 or a new z-based LinuxONE machine.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

OpenCAPI, Gen-Z, CCIX Initiate a New Computing Era

October 20, 2016

The next generation data center will be a more open, cooperative, and faster place judging from the remarkably similar makeup of three open consortia: OpenCAPI, Gen-Z, and CCIX. CCIX allows processors based on different instruction set architectures to extend their cache coherency to accelerators, interconnect, and I/O.

OpenCAPI provides a way to attach accelerators and I/O devices with coherence and virtual addressing to eliminate software inefficiency associated with the traditional I/O subsystem, and to attach advanced memory technologies.  The focus of OpenCAPI is on attached devices primarily within a server. Gen-Z, announced around the same time, is a new data access technology that primarily enables read and write operations among disaggregated memory and storage.


Rethink the Datacenter

It’s quite likely that your next data center will use all three. The OpenCAPI group includes AMD, Dell EMC, Google, Hewlett Packard Enterprise, IBM, Mellanox Technologies, Micron, NVIDIA and Xilinx. Their new specification promises to enable up to 10X faster server performance with the first products expected in the second half of 2017.

The Gen-Z consortium consists of Advanced Micro Devices, Broadcom, Huawei Technologies, Red Hat, Micron, Xilinx, Samsung, IBM, and Cray. Other founding members are Cavium, IDT, Mellanox Technologies, Microsemi, Seagate, SK Hynix, and Western Digital. They plan to develop a scalable computing interconnect and protocol that will enable systems to keep up with the rapidly rising tide of data being generated and needing analysis. This will require the rapid movement of high volumes of data between memory and storage.

The CCIX initial members include Amphenol Corp., Arteris Inc., Avery Design Systems, Atos, Cadence Design Systems, Inc., Cavium, Inc., Integrated Device Technology, Inc., Keysight Technologies, Inc., Micron Technology, Inc., NetSpeed Systems, Red Hat Inc., Synopsys, Inc., Teledyne LeCroy, Texas Instruments, and TSMC.

The basic problem all three address is how to get the growing volume and variety of new hardware to communicate quickly and work together. In effect each group, from its own particular perspective, aims to boost the performance and interoperability of data center servers, devices, and components engaged in generating and handling myriad data and tasked with analyzing large amounts of it. This will only be compounded as IoT, blockchain, and cognitive computing ramp up.

To a large extent, this results from the inability of Moore’s Law to continue doubling the number of transistors indefinitely. Future advances must rely on different sorts of hardware tweaks and designs to deliver greater price/performance.

Then in Aug. 2016 IBM announced a related chip breakthrough.  It unveiled the industry’s first 7 nm chip that could hold more than 20 billion tiny switches or transistors for improved computing power. The new chips could help meet demands of future cloud computing and Big Data systems, cognitive computing, mobile products and other emerging technologies, according to IBM.

Most chips today in servers and other devices are built with processes between 14 and 22 nanometers (nm). The 7nm technology represents at least a 50 percent power improvement. IBM intends to apply the new chips to analyze DNA, viruses, and exosomes, and expects to test this lab-on-a-chip technology starting with prostate cancer.

The point of this digression into chips and Moore’s Law is to suggest the need for tools and interfaces like OpenCAPI, Gen-Z, and CCIX. As the use cases for ultra-fast data analytics expand along with the expected proliferation of devices, speed becomes critical. How long do you want to wait for an analysis of your prostate or breast cells? If the cells are dear to you, every nanosecond matters.

For instance, OpenCAPI provides an open, high-speed pathway for different types of technology – advanced memory, accelerators, networking and storage – to more tightly integrate their functions within servers. This data-centric approach to server design puts the compute power closer to the data and removes inefficiencies in traditional system architectures, helping eliminate system bottlenecks and significantly improving server performance. In some cases OpenCAPI enables system designers to access memory with sub-500 nanosecond latency.
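A rough bit of arithmetic shows why that latency figure matters. The conventional I/O baseline below is an assumption chosen for illustration, not a consortium number:

```python
# Rough latency arithmetic. The traditional I/O figure is an assumed
# baseline for illustration only, not an OpenCAPI consortium number.
TRADITIONAL_IO_NS = 5_000  # assumed ~5 microseconds through a
                           # conventional driver/interrupt I/O stack
OPENCAPI_NS = 500          # the sub-500ns latency cited above

round_trips_traditional = 1e9 / TRADITIONAL_IO_NS  # per second
round_trips_opencapi = 1e9 / OPENCAPI_NS           # per second

print(f"Traditional: {round_trips_traditional:,.0f} serialized accesses/sec")
print(f"OpenCAPI:    {round_trips_opencapi:,.0f} serialized accesses/sec")
print(f"Speedup:     {round_trips_opencapi / round_trips_traditional:.0f}x")
```

Under that assumed baseline, the arithmetic lands right around the 10X server performance figure the OpenCAPI group promises.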

IBM plans to introduce POWER9-based servers that leverage the OpenCAPI specification in the second half of 2017. Similarly, expect other members of the OpenPOWER Foundation to introduce OpenCAPI-enabled products in the same time frame. In addition, Google and Rackspace’s new server under development, codenamed Zaius and announced at the OpenPOWER Summit in San Jose, will leverage POWER9 processor technology and plans to provide the OpenCAPI interface in its design. Also, Mellanox plans to enable the new specification capabilities in its future products, and Xilinx plans to support OpenCAPI-enabled FPGAs.

As reported at the Gen-Z announcement, “The formation of these new consortia (CCIX, OpenCAPI, and Gen-Z), backed by more than 30 industry-leading global companies, supports the premise that the datacenter of the future will require open standards. We look forward to collaborating with CCIX and OpenCAPI as this new ecosystem takes shape,” said Kurtis Bowman, Gen-Z Consortium president. Welcome to the 7nm computing era.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Put your z System at the Center of Blockchain

October 6, 2016

The z System has been a leading platform for the world’s top banks for decades, and with blockchain the z could capture even more banking and financial services data centers. Two recent IBM Institute for Business Value (IBV) studies show commercial blockchain solutions are being adopted throughout banking and financial markets dramatically faster than initially expected, according to an IBM announcement late in September. Of course, not every blockchain deployment runs on z, but more should be.


Copyright William Mougayer

According to one IBV study, more than 70 percent of early adopters are prioritizing blockchain efforts in order to break down current barriers to creating new business models and reaching new markets. IBV analysts report these respondents are better positioned to defend themselves against competitors, including untraditional disruptors like non-bank startups. The majority of respondents are focusing their blockchain efforts on four areas: clearing and settlement, wholesale payments, equity and debt issuance, and reference data.

But blockchain isn’t just a financial services story. Mougayer identifies government services, healthcare, energy, supply chains, and world trade as blockchain candidates. IoT will also be an important area for blockchain, according to a new book on IoT by Maciej Kranz, an IoT pioneer.

As Kranz explains: blockchain has emerged as a technology that allows a secure exchange of value between entities in a distributed fashion. The technology first appeared on most IT radar screens a few years ago in the form of Bitcoin, a virtual currency that relies on blockchain technology to ensure its security and integrity. Although Bitcoin’s future is still uncertain, blockchain is a completely different story.

Blockchain is attracting considerable attention for its ability to ensure the integrity of transactions over the network between any entities. Automobile companies are considering the technology to authenticate connected vehicles in the vehicle-to-vehicle (V2V) environment, notes Kranz. Still others are looking at blockchain to trace the sources of goods, increase food safety, create smart contracts, perform audits, and do much more. Blockchain also provides a natural complement to IoT security in a wide variety of use cases.

The z, and especially the newest generation of z Systems, is ideal for blockchain work. Zero downtime, industry-leading security, massive I/O, flexibility, high performance at scale, and competitive price/performance, along with the platform’s current presence in the middle of most transactions, especially financial transactions, make the z a natural for blockchain.

A key driver for blockchain, especially in the banking and financial services segment, is the Linux Foundation’s Hyperledger Project. This entails a collaborative, open source effort to establish an open blockchain platform that will satisfy a variety of use cases across multiple industries to streamline business processes. Through a cross-industry, open standard for distributed ledgers, virtually any digital exchange of value, such as real estate contracts, energy trades, even marriage licenses, can securely and cost-effectively be tracked and traded.

According to Linux Foundation documents, “the Hyperledger Project has ramped up incredibly fast, a testament to how much pent-up interest, potential, and enterprise demand there is for a cross-industry open standard for distributed ledgers.” Linux Foundation members of the Hyperledger Project are moving blockchain technology forward at remarkable speed. IBM has been an early and sizeable contributor of code to the project. It contributed 44,000 lines of code as a founding member.

That it is catching on so quickly in the banking and financial services sector shouldn’t be a surprise either. What blockchain enables is highly secure and unalterable distributed transaction tracking at every stage of the transaction. Said Likhit Wagle, Global Industry General Manager, IBM Banking and Financial Markets, when ticking off blockchain advantages: “To start, first movers are setting business standards and creating new models that will be used by future adopters of blockchain technology. We’re also finding that these early adopters are better able to anticipate disruption and fight off new competitors along the way.”

It is the larger banks leading the charge to embrace blockchain technology, with early adopters twice as likely to be large institutions with more than a hundred thousand employees. Additionally, 77 percent of these larger banks are retail banking organizations.

As the IBV surveys found, trailblazers expect the benefits from blockchain technology to impact several business areas, including reference data (83 percent), retail payments (80 percent) and consumer lending (79 percent). When asked which blockchain-based new business models could emerge, 80 percent of banks surveyed identified trade finance, corporate lending, and reference data as having the greatest potential.

IBM is making it easy to tap blockchain by making it available through Docker containers, as a signed and certified distribution of IBM’s code submission to Hyperledger, and through Bluemix services. As noted above, blockchain is a natural fit for the z and LinuxONE. To that end, Bluemix Blockchain Services and a fully integrated DevOps tool are System z- and IoT-enabled.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads. In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ, and WebSphere Application Server (WAS). An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service. Public cloud workloads are defined as transactions processed by named public cloud applications, identified as originating from a recognized Public Cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, and more.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? zWPC reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice, once all the calculations and fine print are considered, amounts to a guess at this point. But at least you’ll save something. The first eligible billing under this program starts Dec. 1, 2016.
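In simplified terms, the hourly mechanics look something like the sketch below. The MSU numbers are invented, and real sub-capacity billing folds in rolling four-hour averages and the AWLC/AEWLC price tables, so treat this only as an illustration of the 60 percent reduction step:

```python
# Simplified zWPC sketch with invented MSU numbers. Real sub-capacity
# billing uses rolling 4-hour averages and IBM's AWLC/AEWLC tables;
# this shows only the 60% reduction applied to cloud-eligible work.

DISCOUNT = 0.60  # zWPC reduces eligible hourly values by 60 percent

# (total MSUs, cloud-eligible MSUs) per reporting hour -- hypothetical
hours = [(900, 200), (1000, 350), (950, 100)]

adjusted = [total - DISCOUNT * cloud for total, cloud in hours]
for (total, cloud), adj in zip(hours, adjusted):
    print(f"total={total} MSU, cloud={cloud} MSU -> adjusted={adj:.0f} MSU")

# The month's charge is then driven by the peak adjusted value
print(f"Peak adjusted value: {max(adjusted):.0f} MSU")
```

Note how the busiest raw hour (1000 MSUs) is no longer the peak once its large cloud-eligible share is discounted.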

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions, and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode. The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, including variable WLC (VWLC), Advanced WLC (AWLC), and Entry WLC (EWLC), align with most of the z machines introduced over the past couple of years. The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and the associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques. Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSUs such that the LPAR will not exceed the designated MSU amount. Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly. BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini-industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
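That rogue-LPAR scenario is easy to picture with a toy model. In reality WLM apportions group capacity by weights and service goals; the deliberately crude greedy allocation below, with invented numbers, just shows how one runaway workload can starve its groupmates under a shared cap:

```python
# Toy illustration of the GCL downside noted above. WLM actually
# apportions group capacity by weights and goals; this crude greedy
# model, with invented numbers, just shows the starvation risk.

GROUP_CAP = 600  # Group Capacity Limit for the LPAR group, in MSUs

demand = {"PROD1": 150, "PROD2": 150, "ROGUE": 900}  # ROGUE runs away

remaining = GROUP_CAP
for lpar, want in sorted(demand.items(), key=lambda kv: -kv[1]):
    got = min(want, remaining)
    remaining -= got
    print(f"{lpar}: wants {want} MSU, gets {got} MSU")
# ROGUE grabs all 600 MSUs; PROD1 and PROD2 get nothing.
```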

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM 2Q 2016 Report—Where’s z System and POWER?

July 22, 2016

“IBM continues to establish itself as the leading cognitive solutions and cloud platform company,” said Ginni Rometty, IBM chairman, president and chief executive officer, in a statement accompanying the latest IBM 2Q financial report. The strategic imperatives grew: second-quarter revenues from its cloud, analytics, and engagement units increased 12 percent year to year.


IBM Quantum Experience delivered via Cloud (Jon Simon/Feature Photo Service for IBM)

Where’s z and POWER? The z and POWER platforms continued to flounder, with revenues of $2.0 billion, down 23.2 percent. Revenue reflects z Systems product cycle dynamics; gross profit margin improved in both z Systems and Power. “Product cycle dynamics” refers to the lack of a new z. In the past year IBM introduced the new LinuxONE and, more recently, a new z13s, essentially what used to be known as a Business Class mainframe.

There is no hint, however, of a new z, a z14 that will drive product dynamics upward. IBM showed a POWER roadmap going all the way out to the POWER10 in 2020 but nothing comparable for the z.

DancingDinosaur, a longtime big iron bigot, remains encouraged by IBM’s focus on its strategic initiatives and statements like this: “And we continue to invest for growth with recent breakthroughs in quantum computing, Internet of Things and blockchain solutions for the IBM Cloud.” IBM strategic initiatives in cloud, mobile, IoT, and blockchain will drive new use of the mainframe, especially as the projected volumes of things, transactions, users, and devices skyrocket.

Second-quarter revenues from the company’s strategic imperatives — cloud, analytics and engagement — increased 12 percent year to year.  Cloud revenues (public, private and hybrid) for the quarter increased 30 percent.  Cloud revenue over the trailing 12 months was $11.6 billion.  The annual run rate for cloud as-a-service revenue — a subset of total cloud revenue — increased to $6.7 billion from $4.5 billion in the second quarter of 2015.  Revenues from analytics increased 5 percent.  Revenues from mobile increased 43 percent and from security increased 18 percent.

IBM indirectly is trying to boost the z and the cloud. CSC announced an alliance with IBM under which IBM will provide CSC Cloud Managed Services for z Systems. CSC already includes IBM SoftLayer as part of its “Service-enabled Enterprise” strategy. “Cloud for z” extends that offering and will be of interest to current and potential mainframe customers in healthcare, insurance, and finance. CSC still sees life in the managed mainframe market, and IBM Global Technology Services, a competitor to CSC, apparently is happy to let them sell managed cloud services for mainframes. All this is taking place as IBM scrambles to secure a leadership share of cloud revenue, and any cloud billing CSC brings will help.

Microsoft, like IBM, claimed big cloud momentum on its fourth quarter conference call, according to a report in Fortune Magazine. It was enough to send Microsoft share price up 4% at one point in after hours trading.

As Fortune notes, for Microsoft as for IBM and other legacy IT providers like Oracle, putting up big cloud numbers is mandatory as more companies change the way they buy IT products. Instead of purchasing hundreds or thousands of new servers or storage boxes every few years, more companies are running their software and storing their data on shared public cloud infrastructure, like Microsoft Azure, Amazon Web Services, the Google Compute Platform, or the IBM Cloud.

For reporting purposes, Microsoft combines Azure with other products in its intelligent cloud product segment. Overall, that segment’s revenue grew about 7% year over year to $6.7 billion from about $6.3 billion.

Oracle, too, is facing the same scramble to establish an enterprise cloud presence. Cloud software as a service (SaaS) and platform as a service (PaaS) revenues were $690 million, up 66% in U.S. dollars. Total Cloud revenues, including infrastructure as a service (IaaS), were $859 million, up 49% in U.S. dollars. At the same time, Oracle’s hardware revenue fell by 7% to $1.3 billion, and its software license revenue fell by 2% to $7.6 billion.

“We added more than 1,600 new SaaS customers and more than 2,000 new PaaS customers in Q4” (which ended in June), said Oracle CEO, Mark Hurd. “In Fusion ERP alone, we added more than 800 new cloud customers. Today, Oracle has nearly 2,600 Fusion ERP customers in the Oracle Public Cloud — that’s ten-times more cloud ERP customers than Workday.”

Hewlett Packard Enterprise (HPE) is the last of the big public enterprise platform vendors, along with IBM and Oracle (Dell is private and acquired EMC). HPE recently reported its best quarter in years: second quarter net revenue of $12.7 billion, up 1% from the prior-year period. “Today’s results represent our best performance since I joined in 2011,” said Meg Whitman, president and chief executive officer, Hewlett Packard Enterprise. The businesses comprising HPE grew revenue over the prior-year period on an as-reported basis for the first time in five years.

IBM needs to put up some positive numbers. Seventeen consecutive losing quarters is boring. Wouldn’t it be exciting if a turnaround started with a new enterprise z14?

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Racks Up Blockchain Success

July 15, 2016

It hasn’t even been a year (Dec. 17, 2015) since IBM first publicly introduced its participation in the Linux Foundation’s newest collaborative project, Open Ledger Project, a broad-based Blockchain initiative.  And only this past April did IBM make serious noise publicly about Blockchain on the z, here. But since then IBM has been ramping up Blockchain initiatives fast.


Courtesy of IBM: LinuxONE Rockhopper

Just this week IBM made public its blockchain security framework, first announced in April, by releasing the beta of the IBM Blockchain Enterprise Test Network. This enables organizations to easily access a secure, partitioned blockchain network in the cloud to deploy, test, and run blockchain projects.

The IBM Blockchain Enterprise Test Network is a cloud platform built on a LinuxONE system.  Developers can now test four-node networks for transactions and validations with up to four parties.  The Network provides the next level of service for developers ready to go beyond the two-node blockchain service currently available in Bluemix for testing and simulating transactions between two parties. The Enterprise Test Network runs on LinuxONE, which IBM touts as the industry’s most secure Linux server due to the z mainframe’s Evaluation Assurance Level 5+ (EAL5+) security rating.

Also this week, Everledger, which runs a big-data-based fraud detection system, announced it is building a business network using IBM Blockchain for its global certification system designed to track valuable items, such as diamonds, fine art, and luxury goods, through the supply chain.

Things continued to crank up around blockchain with IBM announcing a collaboration with the Singapore Economic Development Board (EDB) and the Monetary Authority of Singapore (MAS). With this arrangement IBM researchers will work with government, industries, and academia to develop applications and solutions based on enterprise blockchain, cyber-security, and cognitive computing technologies. The effort will draw on the expertise in the Singapore talent pool as well as that of the IBM Research network.  The Center also is expected to engage with small- and medium-sized enterprises to create new applications and grow new markets in finance and trade.

Facilitating this is the cloud. IBM expects new cloud services around blockchain will make these technologies more accessible and enable leaders from all industries to address what is already being recognized as profound and disruptive implications in finance, banking, IoT, healthcare, supply chains, manufacturing, technology, government, the legal system, and more. The hope, according to IBM, is that collaboration with the private sector and multiple government agencies within the same country will advance the use of Blockchain and cognitive technologies to improve business transactions across several different industries.

That exactly is the goal of blockchain. In a white paper from the IBM Institute for Business Value on blockchain, here, the role of blockchain is as a distributed, shared, secure ledger. These shared ledgers write business transactions as an unbreakable chain that forms a permanent record viewable by the parties in a transaction. In effect, blockchain shifts the focus from information held by an individual party to the transaction as a whole, a cross-entity history of an asset or transaction. This alone promises to reduce or even eliminate friction in the transaction while removing the need for most middlemen.
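The “unbreakable chain” idea reduces to a simple structure: every block carries a hash of its predecessor, so altering any past transaction invalidates every later link. Here is a minimal sketch; it is illustrative only, since Hyperledger layers consensus, permissioning, and smart contracts on top of this basic structure:

```python
# Minimal hash-chain sketch of a shared ledger. Illustrative only:
# Hyperledger adds consensus, permissioning, and smart contracts on
# top of this basic linked structure.
import hashlib, json

def make_block(transaction: dict, prev_hash: str) -> dict:
    body = {"tx": transaction, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

chain = [make_block({"asset": "diamond-001", "owner": "Alice"}, "0" * 64)]
chain.append(make_block({"asset": "diamond-001", "owner": "Bob"}, chain[-1]["hash"]))

# Verify every block's own hash and its link to the previous block;
# tampering with any earlier transaction makes the checks fail.
for i, block in enumerate(chain):
    body = {"tx": block["tx"], "prev": block["prev"]}
    self_ok = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() == block["hash"]
    link_ok = i == 0 or block["prev"] == chain[i - 1]["hash"]
    print(f"block {i} intact: {self_ok and link_ok}")
```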

In that way, the researchers report, an enterprise, once constrained by complexity, can scale without unnecessary friction. It can integrate vertically or laterally across a network or ecosystem, or both. It can be small and transact with super efficiency. Or it can be a coalition of individuals that come together briefly. Moreover, it can operate autonomously, as part of a self-governing, cognitive network. In effect, distributed ledgers can become the foundation of a secure distributed system of trust, a decentralized platform for massive collaboration. And through the Linux Foundation’s Open Ledger Project, blockchain remains open.

Even at this very early stage there is no shortage of takers ready to push the boundaries of this technology. For example, Crédit Mutuel Arkéa recently announced the completion of its first blockchain project to improve the bank’s ability to verify customer identity. The result is an operational permissioned blockchain network that provides a view of customer identity to enable compliance with Know Your Customer (KYC) requirements. The bank’s success demonstrated the disruptive capabilities of blockchain technology beyond common transaction-oriented use cases.

Similarly, Mizuho Financial Group and IBM announced in June a test of the potential of blockchain for use in settlements with virtual currency. Blockchain, by the way, first gained global attention with Bitcoin, an early virtual currency. By incorporating blockchain technology into settlements with virtual currency, Mizuho plans to explore how payments can be instantaneously swapped, potentially leading to new financial services based on this rapidly evolving technology. The pilot project uses the open source code IBM contributed to the Linux Foundation’s Hyperledger Project.

Cloud-based blockchain running on large LinuxONE clusters may turn out to play a big role in ensuring the success of IoT by monitoring and tracking the activity between millions of things participating in a wide range of activities. Don’t let your z data center get left out; at least make sure it can handle Linux at scale.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Compuware Continues Mainframe GUI Tool Enhancements

July 1, 2016

Early in 2015 Compuware announced the first in what it promised would be a continuing stream of new mainframe tools and tool enhancements. Did anyone really believe them? Mainframe ISVs are not widely regarded for their fast release cycles. DancingDinosaur reported on it then here and has continued to follow up and report its progress through a handful of new releases. This past week, DancingDinosaur received new Compuware mainframe tool announcements. For a mainframe ISV this is almost unheard of. IBM sometimes releases new mainframe products in intense spurts but then quickly resumes its typical languid release pace.


Screen from Compuware’s ISPW for Continuous Delivery to the Mainframe

Let’s take a look at each of these new releases. First, ISPW Deploy, an advanced mainframe release automation solution that enables large enterprises to bring continuous delivery best practices to their IBM z/OS environments. ISPW Deploy, built on the ISPW technology Compuware acquired in January 2016, facilitates faster and more reliable mainframe software deployment. Specifically, according to Compuware, it helps in three ways:

  1. Automation that rapidly moves code through the deployment process, including test staging and approvals, while also providing greatly simplified full or partial rollbacks.
  2. Visualization that enables DevOps managers to quickly pinpoint deployment issues in order to both solve immediate rollout problems and address persistent bottlenecks in code promotion.
  3. Integrations with both third-party solutions and Compuware’s own industry-leading mainframe toolkit that allow IT to build complete SCM-to-production DevOps pipelines and to quickly launch associated remediation support tools if and when deployment issues occur.

Compuware is further empowering enterprises to achieve mainframe agility through integrations. For instance, integrating ISPW with XebiaLabs’ cross-platform continuous delivery solutions enables IT organizations to orchestrate and visualize their mainframe DevOps processes in a common manner with their broader cross-platform DevOps automation.

The second announcement focused on XebiaLabs, as noted above. The idea here is to deliver cross-platform continuous releases for the mainframe. As Compuware explained, enterprises using XebiaLabs’ solution suite and Compuware ISPW can now automate and monitor all phases of mainframe DevOps within the same continuous delivery management environment they use for their distributed, web, and cloud platforms. This automation and monitoring includes test/QA, pre-copy staging, and code promotion. The goal, as with all DevOps, is to speed digital agility for mainframe or distributed systems or both.

The third announcement concerned a partnership between Compuware and ConicIT that aims to help a new generation of IT ops staff proactively resolve emerging mainframe issues before they impact application service levels. It does so by integrating ConicIT’s predictive mainframe analytics with Compuware’s Strobe, which provides visually intuitive troubleshooting intelligence. Together, the two companies promise to enable even IT staff with relatively little hands-on mainframe experience to quickly identify and resolve a wide range of application performance problems.

The key to doing this is a reliance on the adoption of intuitive GUI interfaces. Compuware started this with its Topaz tools and has been continuing along this path for two years. Compuware’s CEO, Chris O’Malley, has been harping on these themes almost since he first arrived there.

Compuware customers apparently have gotten the message. As reported: “Market pressures are making it essential for us to deliver quality products and services to our clients more frequently, and the mainframe plays a critical role in that delivery,” according to Craig Danielson, Assistant Vice President for Commerce Bank. “We leverage ISPW to help in this capacity and its new capabilities will provide us the automation and visibility of our software deployment process to help us continuously improve our internal operations and services.” (note: DancingDinosaur did not validate this customer statement.)

Companies will need all the help modern mainframe tools can deliver. Mainframe data centers are facing unprecedented challenges that require unusual speed and agility. In short, they need DevOps fast. And they will have to respond with an increasingly aging core of experienced mainframe staff supplemented by millennials who have to be coaxed and cajoled onto the mainframe with easy graphical tools. If mainframe data centers can’t respond to these challenges—not just cloud, mobile, Linux, and analytics, but also IoT, blockchain, cognitive computing, and whatever else comes along next—how are they going to cope? Already their users, the line of business managers, are turning to shadow IT out of frustration with the slow response from mainframe data centers. And you know what comes next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Medical Mutual Gains Fast Access to z/OS Log Data via Splunk and Ironstream

June 3, 2016

Running Syncsort’s Ironstream and leveraging Splunk Enterprise, Medical Mutual of Ohio has now implemented mainframe security monitoring in real time through the Splunk® Enterprise platform. One goal is to help protect customer information stored in DB2 from unauthorized access. Syncsort’s Ironstream, a utility, collects and forwards z/OS log data, including security data, to Splunk Enterprise and Splunk Enterprise Security.


z/OS security data, courtesy of Syncsort

“We’ve always had visibility. Now we can get it faster, in real time directly from the mainframe,” said the insurer’s enterprise security supervisor. Previously, the company would do a conventional data transfer, which could take several hours. The new approach, sometimes referred to as a big iron-to-big data strategy, now delivers security log data in near real time. This enables the security team to correlate all the security data from across the enterprise to effectively and quickly gain visibility into user-authentication data and access attempts tracked on the mainframe. And they can do it without needing specialized expertise or different monitoring systems for z/OS.

Real-time analytics, including real-time predictive analytics, are increasingly attractive solutions for the growing security challenges organizations face. These challenges are due, in large part, to the explosion of transaction activity driven by mobile computing and, soon, IoT and blockchain, most of which eventually finds its way to the mainframe. All of these present immediate security concerns and require fast, nearly instant security decisions. Even cloud usage, which one would expect to be mainstream in enterprises by now, often is curtailed due to security fears.

With the Ironstream and Splunk combination, Medical Mutual can see previously slow-to-access mainframe data alongside other security information it was already analyzing in Splunk Enterprise. Splunk Enterprise enables a consolidated enterprise-wide view of machine data collected across the business, which makes it possible to correlate events that might not raise suspicion alone but could be indicative of a threat when seen together.
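A toy sketch makes the correlation idea concrete. The event data below is invented, and in practice Splunk Enterprise performs this kind of correlation across its indexed machine data; this only shows why events that look routine in isolation become an alert together:

```python
# Toy correlation sketch with invented events. In practice Splunk
# Enterprise correlates indexed machine data; this only shows why
# events that look routine alone become suspicious together.
from datetime import datetime, timedelta

events = [
    {"src": "z/OS RACF",  "user": "jdoe", "what": "auth_failure",
     "when": datetime(2016, 6, 1, 2, 14)},
    {"src": "Windows AD", "user": "jdoe", "what": "auth_failure",
     "when": datetime(2016, 6, 1, 2, 16)},
    {"src": "z/OS DB2",   "user": "jdoe", "what": "table_read",
     "when": datetime(2016, 6, 1, 2, 19)},
]

WINDOW = timedelta(minutes=10)

by_user: dict = {}
for e in sorted(events, key=lambda e: e["when"]):
    by_user.setdefault(e["user"], []).append(e)

# Flag a user with auth failures on two platforms followed, within
# the window, by a data access on the mainframe.
for user, evs in by_user.items():
    platforms = {e["src"] for e in evs if e["what"] == "auth_failure"}
    reads = [e for e in evs if e["what"] == "table_read"]
    if len(platforms) >= 2 and reads and reads[0]["when"] - evs[0]["when"] <= WINDOW:
        print(f"ALERT: correlated suspicious activity for {user} across {sorted(platforms)}")
```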

The deployment proved to be straightforward. Medical Mutual’s in-house IT team set it up in a week with Syncsort answering deployment questions to assist. Although there are numerous tools to capture log data from the mainframe, the insurer chose to go with the Splunk-Ironstream combination because it already was using Splunk in house for centralized logging. Adding mainframe security logs was an easy step. “This was affordable and it saved us from having to learn another product,” the security supervisor added. Medical Mutual runs a z13, model 409 with Ironstream.

According to the announcement, having Ironstream deliver z/OS log data to Splunk Enterprise enables Medical Mutual to:

  • Track security events and data from multiple platforms, including IBM z/OS mainframes, Windows, and distributed servers, and correlate the information in Splunk Enterprise for better security.
  • Diagnose and respond to high severity security issues more quickly since data from across the entire enterprise is being monitored in real time.
  • Provide monthly and daily reporting with an up-to-the-minute account of unusual user activity.
  • Detect security anomalies and analyze their trends – the cornerstone of Security Information and Event Management (SIEM) strategies.

Real-time monitoring with analytics has proven crucial for security. You can actually detect fraud while it is taking place and before serious damage is done. It is much harder to recoup losses hours, days, or, as is often the case, months later.

The Splunk platform can handle massive amounts of data from different formats and indexes and decipher and correlate security events through analytics. Ironstream brings the ability to stream mainframe security data for even greater insights, and Ironstream’s low overhead keeps mainframe processing costs low.

To try the big iron-to-big data strategy organizations can download a free Ironstream Starter Edition and begin streaming z/OS Syslog data into Splunk solutions. Unlike typical technology trials, the Starter Edition is not time-limited and may be used in production at no charge. This includes access to the Ironstream applications available for download on Splunkbase.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

