Posts Tagged ‘API’

IBM Drives Platforms to the Cloud

April 29, 2016

IBM hasn’t been shy about its shift of focus from platforms and systems to cloud, mobile, analytics, and cognitive computing. But it didn’t hit home until last week’s release of 1Q2016 financials, which mentioned the z System just once. For the quarter, IBM systems hardware and operating systems software revenues (lumped into one category, almost an afterthought) rang up $1.7 billion, down 21.8 percent.

This is ugly, and DancingDinosaur isn’t even a financial analyst. After the z System showed attractive revenue growth through all of 2015, suddenly it’s part of a loss. You can’t even find the actual numbers for z or Power in the new report format. As IBM explains it, the company has revised its financial reporting structure to reflect the transformation of the business and provide investors with increased visibility into the company’s operating model by disclosing additional information on its strategic imperatives revenue by segment. BTW, IBM did introduce new advanced storage this week, which was part of the Systems Hardware loss too. DancingDinosaur will take up the storage story here next week.


But the 1Q2016 report was last week. To further emphasize its shift, IBM this week announced that it was boosting support of OpenStack’s RefStack project, which is intended to advance a common language between clouds and facilitate interoperability across clouds. DancingDinosaur applauds that, but if you are a z data center manager you had better take note that the z, along with all the IBM platforms, mainly Power and storage, is being pushed to the back of the bus behind IBM’s strategic imperatives.

DancingDinosaur supports the strategic imperatives, and you can throw blockchain and IoT in with them too. These initiatives will ultimately save the mainframe data center. All the transactions and data swirling around and through these initiatives eventually need to land in a safe, secure, utterly reliable place where they can be processed in massive volume, kept accessible, highly available, and protected for subsequent use, for compliance, and for a variety of other purposes. That place most likely will be the z data center. It might be on premises or in the cloud, but if organizations need rock-solid transaction performance, security, availability, scalability, and such, they will want the z, which will do it better and be highly price competitive. In short, the z data center provides the ideal back end for all the various activities going on through IBM’s strategic imperatives.

The z also has a clear connection to OpenStack. Two years ago IBM announced it was expanding its support of open technologies by providing advanced OpenStack integration and cloud virtualization and management capabilities across IBM’s entire server portfolio through IBM Cloud Manager with OpenStack. According to IBM, Cloud Manager with OpenStack would provide support for the latest OpenStack release, dubbed Icehouse at that time, and full access to the complete core OpenStack API set to help organizations ensure application portability and avoid vendor lock-in. It also extended cloud management support to the z, in addition to Power Systems, PureFlex/Flex Systems, System x (which was still around then), or any other x86 environment. It would also provide support for IBM z/VM on the z, and PowerVC for PowerVM on Power Systems, to add more scalability and security to its Linux environments.

At the same time IBM also announced it was beta testing a dynamic, hybrid cloud solution on the IBM Cloud Manager with OpenStack platform. That would allow workloads requiring additional infrastructure resources to expand from an on-premises cloud to remote cloud infrastructure. Since that announcement, IBM has only grown more enamored with hybrid clouds. Again, the z data center should have a big role as the on-premises anchor for hybrid clouds.

With the more recent announcement, RefStack, officially launched last year and to which IBM is the lead contributor, becomes a critical pillar of IBM’s commitment to ensuring an open cloud, helping to advance the company’s long-term vision of mitigating vendor lock-in and enabling developers to use the best combination of cloud services and APIs for their needs. The new functionality includes improved usability, stability, and other upgrades, ensuring better cohesion and integration of any cloud workloads running on OpenStack.

RefStack testing ensures core interoperability across the OpenStack ecosystem, and passing RefStack is a prerequisite for all OpenStack certified cloud platforms. By working on cloud platforms that are OpenStack certified, developers will know their workloads are portable across IBM Cloud and the OpenStack community. For now RefStack acts as the primary resource for cloud providers to test OpenStack compatibility. RefStack also maintains a central repository and API for test data, allowing community members visibility into interoperability across OpenStack platforms.
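To make the interoperability idea concrete, here is a small Python sketch that evaluates RefStack-style test results. The JSON shape below is an assumption for illustration only, not RefStack’s actual API schema:

```python
import json

# Hypothetical RefStack-style result payload; the real API's schema may differ.
sample = json.loads("""
{
  "results": [
    {"cloud": "cloud-a", "capability": "compute-servers-list", "passed": true},
    {"cloud": "cloud-a", "capability": "identity-v3-tokens", "passed": true},
    {"cloud": "cloud-b", "capability": "compute-servers-list", "passed": false}
  ]
}
""")

def interoperable_clouds(results):
    """Return clouds that passed every capability test they reported."""
    by_cloud = {}
    for r in results:
        by_cloud.setdefault(r["cloud"], []).append(r["passed"])
    return sorted(c for c, outcomes in by_cloud.items() if all(outcomes))

print(interoperable_clouds(sample["results"]))  # ['cloud-a']
```

The point is the workflow, not the schema: a central repository of pass/fail results per capability is what lets a developer judge whether a workload will port cleanly across OpenStack clouds.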

One way or another, your z data center will have to coexist with hybrid clouds and the rest of IBM’s strategic imperatives or face being displaced. With RefStack and the other OpenStack tools this should not be too hard. In the meantime, prepare your z data center for new incoming traffic from the strategic imperatives, Blockchain, IoT, Cognitive Computing, and whatever else IBM deems strategic next.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Puts Blockchain on the z System for a Disruptive Edge

April 22, 2016

Get ready for Blockchain to alter your z-based transaction environment. Blockchain brings a new class of distributed ledger applications. Bitcoin, the first Blockchain system to grab mainstream data center attention, is rudimentary compared to what the Linux Foundation’s open Hyperledger Project will deliver.


As reported in CIO Magazine, Blockchain enables a distributed ledger technology with the ability to settle transactions in seconds or minutes automatically via computers. This is a faster, potentially more secure settlement process than is used today among financial institutions, where clearing houses and other third-party intermediaries validate accounts and identities over a few days. Financial services, as well as other industries, are exploring blockchain for conducting transactions as diverse as trading stock, buying diamonds, and streaming music.
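To see why this settles faster and resists tampering, it helps to recall what a distributed ledger is at its core: blocks chained together by cryptographic hashes, so altering any settled transaction breaks the chain. A minimal Python sketch (illustrative only; a real Hyperledger network is vastly richer):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, payload):
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "payload": payload})
    return chain

def verify(chain):
    """The ledger is valid only if every stored prev_hash still matches."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_block(ledger, {"from": "alice", "to": "bob", "amount": 10})
append_block(ledger, {"from": "bob", "to": "carol", "amount": 4})
print(verify(ledger))                  # True
ledger[0]["payload"]["amount"] = 9999  # tamper with a settled transaction
print(verify(ledger))                  # False: the hash chain exposes the change
```

Because every participant can recompute the hashes independently, no clearing house is needed to vouch for the ledger, which is where the days-to-minutes settlement speedup comes from.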

IBM, in conjunction with the Linux Foundation’s Hyperledger Project, expects the creation and management of Blockchain network services to power a new class of distributed ledger applications. With Hyperledger and Blockchain, developers can create digital assets and accompanying business logic to more securely and privately transfer assets among members of a permissioned Blockchain network running on IBM LinuxONE or Linux on z.

In addition, IBM will introduce fully integrated DevOps tools for creating, deploying, running, and monitoring Blockchain applications on the IBM Cloud and enable applications to be deployed on IBM z Systems. Furthermore, by using Watson as part of an IoT platform, IBM intends to make it possible for information from devices such as RFID-based locations, barcode-scan events, or device-recorded data to be used with IBM Blockchain apps. Clearly, IBM is looking at Blockchain for more than just electronic currency. In fact, Blockchain will enable a wide range of secure transactions between parties without the use of intermediaries, which should speed transaction flow. For starters, the company brought 44,000 lines of code to the effort as a founding member of the Linux Foundation’s Hyperledger Project.

The z, with its rock-solid reputation for no-fail, extremely high-volume, high-performance, and secure processing, is a natural for Blockchain applications and systems. In the process it brings the advanced cryptography, security, and reliability of the z platform. No longer content just to handle traditional backend systems-of-record processing, IBM is pushing to bring the z into new areas that leverage the strength and flexibility of today’s mainframe. As IoT ramps up, expect the z to handle escalating volumes of IoT traffic, mobile traffic, and now blockchain distributed ledger traffic. Says IBM: “We intend to support clients looking to deploy this disruptive technology at scale, with performance, availability and security.” That statement has z written all over it.

Further advancing the z into new areas, IBM reemphasized its advantages through built-in hardware accelerators for hashing and digital signatures, tamper-proof security cards, unlimited random keys to encode transactions, and integration to existing business data with Smart Contract APIs. IBM believes the z could take blockchain performance to new levels with the world’s fastest commercial processor, which is further optimized through the use of hundreds of internal processors. The highly scalable I/O system can handle massive amounts of transactions and the optimized network between virtual systems in a z Systems cloud can speed up blockchain peer communications.
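What the z’s crypto hardware accelerates is the sign-then-verify pattern at the heart of every ledger entry. As a software-only stand-in, the sketch below uses Python’s stdlib HMAC, which is a shared-key message authentication code rather than a true public-key digital signature, to show the pattern; the key and messages are illustrative:

```python
import hashlib
import hmac

SECRET = b"demo-key"  # illustrative only; real systems keep keys in hardware

def sign(message: bytes) -> str:
    """Produce a keyed SHA-256 tag over the message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify_tag(message: bytes, tag: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"transfer 100 to account 42")
print(verify_tag(b"transfer 100 to account 42", tag))  # True
print(verify_tag(b"transfer 900 to account 42", tag))  # False: any edit fails
```

On the z this hashing and verification runs in dedicated hardware rather than general-purpose code, which is the performance edge the post describes.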

An IBM Blockchain DevOps service will also enable blockchain applications to be deployed on the z, ensuring an additional level of security, availability and performance for handling sensitive and regulated data. Blockchain applications can access existing transactions on distributed servers and z through APIs to support new payment, settlement, supply chain, and business processes.

Use Blockchain on the z to create and manage Blockchain networks to power the emerging new classes of distributed ledger applications.  According to IBM, developers can create digital assets and the accompanying business logic to more securely and privately transfer assets among members of a permissioned Blockchain network. Using fully integrated DevOps tools for creating, deploying, running, and monitoring Blockchain applications on IBM Cloud, data centers can enable applications to be deployed on the z. Through the Watson IoT Platform, IBM will make it possible for information from devices such as RFID-based locations, barcode scans, or device-recorded data to be used with IBM Blockchain.

However, Blockchain remains a nascent technology. Although the main use cases already are being developed and deployed, many more ideas for blockchain systems and applications are only just being articulated. Nobody, not even the Linux Foundation, knows what ultimately will shake out. Blockchain enables developers to easily build secure distributed ledgers that can be used to exchange almost anything of value quickly and securely. Now is the time for data center managers at z shops to think about what they might want to do with such extremely secure transactions on their z.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM Continues Open Source Commitment with Apache Spark

June 18, 2015

If anyone believes IBM’s commitment to open source is a passing fad, forget it. IBM has invested billions in Linux, open Power through the OpenPOWER Foundation, and more. Its latest is the announcement of a major commitment to Apache Spark, a fast, general-purpose open source cluster computing system for big data.


Courtesy of IBM: developers work with Spark at Galvanize Hackathon

As IBM sees it, Spark brings essential advances to large-scale data processing. Specifically, it dramatically improves the performance of data-dependent apps and is expected to play a big role in the Internet of Things (IoT). In addition, it radically simplifies the process of developing intelligent apps, which are fueled by data. It does so by providing high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
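Those high-level APIs express a job as a chain of transformations over a dataset. As a cluster-free stand-in (plain Python rather than PySpark, so no Spark installation is needed), the same filter/map/reduce shape looks like this; the sensor data is invented for illustration:

```python
from functools import reduce

# Invented sample data: (sensor id, temperature reading)
readings = [("sensor-1", 21.5), ("sensor-2", 40.2), ("sensor-1", 22.1),
            ("sensor-3", 19.8), ("sensor-2", 41.0)]

# filter -> map -> reduce: the dataflow shape a Spark job takes,
# which Spark would distribute across a cluster instead of one process
hot = filter(lambda r: r[1] > 20.0, readings)        # keep warm readings
temps = map(lambda r: r[1], hot)                     # project the value
total = reduce(lambda a, b: a + b, temps, 0.0)       # aggregate

print(round(total, 1))  # 124.8
```

In real PySpark the same pipeline would run over an RDD or DataFrame partitioned across nodes, which is where the large-scale performance gains come from.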

IBM is contributing its breakthrough IBM SystemML machine learning technology to the Spark open source ecosystem. Beyond the performance and development advantages already noted, maybe Spark’s biggest advantage is that it can handle data coming from multiple, disparate sources.

What IBM likes in Spark is that it’s agile, fast, and easy to use. It also likes that Spark is open source, which ensures it is improved continuously by a worldwide community. Those are also some of the main reasons mainframe and Power Systems data centers should pay attention to Spark. Spark will make it easier to connect applications to data residing in your data center. If you haven’t yet noticed an uptick in mobile transactions coming into your data center, they will be coming. These benefit from Spark. And if you look out just a year or two, expect to see IoT applications adding to and needing to combine all sorts of data, much of it ending up on the mainframe or Power Systems in one form or another. So make sure Spark is on your radar screen.

Over the course of the next few months, IBM scientists and engineers will work with the Apache Spark open community to accelerate access to advanced machine learning capabilities and help drive speed-to-innovation in the development of smart business apps. By contributing SystemML, IBM hopes data scientists will iterate faster to address the changing needs of business and to enable a growing ecosystem of app developers who will apply deep intelligence to everything.

To ensure that happens, IBM will commit more than 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide, and open a Spark Technology Center in San Francisco for the Data Science and Developer community to foster design-led innovation in intelligent applications. IBM also aims to educate more than 1 million data scientists and data engineers on Spark through extensive partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and Big Data University MOOC (Massive Open Online Course).

Of course, Spark isn’t going to be the end of tools to expedite the latest app dev. With IoT just beginning to gain widespread interest, expect a flood of tools to expedite developing IoT data-intensive applications and more tools to facilitate connecting all the coming connected devices, estimated to number in the tens of billions within a few years.

DancingDinosaur applauds IBM’s decade-plus commitment to open source and its willingness to put real money and real code behind it. That means the IBM z System mainframe, the POWER platform, Linux, and the rest will be around for some time. That’s good; DancingDinosaur is not quite ready to retire.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

API Economy Comes to the IBM z System

June 11, 2015

What comes to mind when you hear (or read) about a RESTful IBM z System? Hint: it is not a mainframe that is loafing. To the contrary, a RESTful mainframe probably is busier than it has ever been, now running a slew of new apps, most likely mobile or social apps with REST APIs connecting to z/OS-based web services plus its usual workloads. Remember web services when SOA first came to the mainframe? They continue today behind the new mobile, cloud, social, and analytical workloads that are putting the spotlight on the mainframe.


Courtesy of IBM: travel fuels mobile activity (click to enlarge)

A variety of Edge2015 sessions, given by Asit Dan, chief architect, z Service API Management, and Glenn Anderson, IBM Lab Services and Training, put what the industry refers to as the emerging API economy in perspective. The z, it should come as no surprise, lies at the heart of this burgeoning API economy, not only handling transactions but also providing governance and management to the API phenomenon that is exploding. Check out IBM’s APIs for Dummies.

The difference between first generation SOA and today’s API economy lies in the new workloads—especially mobile and cloud—fueling the surging interest. The mobile device certainly is the fastest growing platform and will likely become the largest platform soon if it is not already, surpassing desktop and laptop systems.

SOA efforts initially focused on the capabilities of the providers of services, noted Dan, particularly the development, run-time invocation, and management of services. The API economy, on the other hand, focuses on the consumption of these services. It really aims to facilitate the efforts of application developers (internal developers and external business partners) who must code their apps for access to existing and new API-enabled services.

One goal of an enterprise API effort is to access already deployed services, such as z-based CICS services or those of a partner. Maybe a more important goal, especially where the z is involved, is to drive use of mainframe software assets by customers, particularly mobile customers. The API effort not only improves customer service and satisfaction but could also drive added revenue. (Have you ever fantasized about the z as a direct revenue generator?)

This calls, however, for a new set of interfaces. As Dan notes in a recent piece, APIs for accessing these assets, defined using well-known standards such as web services and Representational State Transfer (REST) with JSON (JavaScript Object Notation), and published via an easily accessible catalog, make it efficient to subscribe to APIs for obtaining permissions and building new applications. Access to the APIs now can be controlled and tracked during run-time invocations (and even metered where revenue generation is the goal).
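To make the REST-with-JSON pattern concrete, here is a Python sketch of the kind of exchange a mobile app might have with an API fronting a z-based service. The field names and the mock response are illustrative assumptions, not an actual z/OS Connect or CICS schema:

```python
import json
from collections import Counter

# Hypothetical request a mobile app would POST to a cataloged API
request_payload = json.dumps({
    "accountId": "00123",
    "operation": "balanceInquiry",
})

# Mock JSON response such as a z-hosted web service might return
raw_response = '{"accountId": "00123", "balance": 2500.75, "currency": "USD"}'
response = json.loads(raw_response)

# Run-time invocations can be tracked and metered per API, as the post notes
meter = Counter()
meter[json.loads(request_payload)["operation"]] += 1

print(response["balance"], meter["balanceInquiry"])  # 2500.75 1
```

The metering counter is the piece that turns an API catalog into a potential revenue stream: every subscribed invocation is an event that can be counted and billed.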

Now the API economy can morph into a commercial exchange of business functions, capabilities, and competencies as services using web APIs, noted Glenn Anderson at Edge2015. In-house business functions running on the z can evolve into an API as-a-service delivery vehicle, which amounts to another revenue stream for the mainframe data center.

The API economy often is associated with the concept of containers. Container technology provides a simplified way to make applications more mobile in a hybrid cloud, Anderson explained, and brings some distinct advantages. Specifically, containers are much smaller in size than virtual machines and provide more freedom in the placement of workloads in a cloud (private, public, hybrid) environment. Container technology is being integrated into OpenStack, which is supported on the z through IBM Cloud Manager. Docker is the best-known container technology, and it works with Linux on z.

With the combination of SOA, web services, REST, JSON, OpenStack, and Docker all z capable, a mainframe data center can fully participate in the mobile, apps, and cloud API economy. BTW, POWER servers can play the API, OpenStack, Docker game too. Even Watson can participate in the API economy through IBM’s early March acquisition of AlchemyAPI, a provider of scalable cognitive computing API services. The acquisition will drive the API economy into cognitive computing too. Welcome to the mainframe API economy.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

