Posts Tagged ‘microservices’

Montana Sidelines the Mainframe

January 21, 2020

Over the past 20+ years DancingDinosaur has written this story numerous times. It never ends exactly the way its tellers expect. Here is the one I encountered this past week.

IBM z15

But that doesn’t stop the PR writers from finding a cute way to write the story. This time the writers turned to references to the moon landings and trilby hats (huh?). Looks like a fedora to me, but what do I know; I only wear baseball caps. But they always have to come up with something that makes the mainframe sound completely outdated. In this case they wrote: “Mainframe computers, a technology that harkens back to an era of moon landings and men in trilby hats, are still widely used throughout government, but not in Montana for much longer.”

At least they didn’t write that the mainframe was dead and gone or forgotten. Usually, I follow up on stories like this months later and call whichever IT person is still there. I congratulate him or her and ask how it went. That’s when I usually start hearing ums and uhs. It turns out the mainframe is still there, handling those last few jobs they just can’t replace yet.

Depending on how playful I’m feeling that day, I ask him or her what happened to the justification presented at the start of the project. Or I might ask what happened to the previous IT person. 

Sometimes, I might even refer them to a recent DancingDinosaur piece that explains Linux on the mainframe or Java, or describes mainframes running the latest Docker container technology or microservices. I’m not doing this for spite; I’m just trying to build up my readership. DancingDinosaur hates losing any reader, even if it’s late in their game. So I always follow up with a link to DancingDinosaur.

In an interview published by StateScoop, Chief Information Officer Tim Bottenfield described how, over the last several years, the last remaining agencies using the state’s mainframe have migrated their data away from it and are now developing modern applications that can be moved to the state’s private, highly virtualized cloud environment. By spring 2021, Montana expects to be mainframe-free. I’ll make a note to call Bottenfield in spring 2021 and see how they are doing. Does anyone want to bet whether the mainframe actually is completely out of service and gone by then?

As you all know, mainframes can be expensive to maintain, particularly if it’s just to keep a handful of applications running, which usually turn out to be mission-critical applications. Of the three major applications Montana still runs on its mainframe, two are used by the Montana Department of Public Health and Human Services, which is in the process of recoding those programs to work on modern platforms, as if the z15 isn’t modern.

They haven’t told us whether these applications handle payments or deliver critical services to citizens. Either way, it will not be pretty if such applications go down. The third is the state’s vehicle titling and registration system, which is being rebuilt to run out of the state’s data center. Again, we don’t know much about the criticality of these systems. But think how you might feel if you can’t get accurate or timely information from one of these systems. I can bet you wouldn’t be a happy camper; neither would I.

Systems like these are difficult to get right the first time, if at all. This is especially true if you will be using the latest hybrid cloud and services technologies. Yes, skilled mainframe people are hard to find and retain but so are any technically skilled and experienced people. If I were a decade younger, I could be attracted to the wide open spaces of Montana as a relief from the congestion of Boston. But I’m not the kind of hire Montana needs or wants. Stay tuned for when I check back in Spring 2021.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at http://technologywriter.com/ 

IBM Takes Red Hat for $34 Billion

November 2, 2018

“The acquisition of Red Hat is a game-changer. It changes everything about the cloud market,” declared Ginni Rometty, IBM Chairman. At a cost of $34 billion, 10x Red Hat’s gross revenue, it had better be a game changer. See IBM’s announcement earlier this week here.

IBM Multicloud Manager Dashboard

IBM has been hot on the tail of the top three cloud hyperscalers—AWS, Google, and Microsoft Azure. Will this change the game? Your guess is as good as anyone’s.

The hybrid cloud market appears to be IBM’s primary target. As the company put it: “IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.” IBM projects the value of the hybrid cloud market at $1 trillion within a few years!

Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next chapter of the cloud, noted Rometty, requires shifting business applications to hybrid cloud, extracting more data, and optimizing every part of the business.

Nobody has a lock on this market yet. Not IBM, not Red Hat, not VMware. But one thing seems clear: whoever wins will be built on open source. Red Hat, with $3 billion in open source revenue, has proven that open source can pay. The only question is how quickly it can pay back IBM’s $34 billion bet.

What’s needed is something that promotes data and application portability across multiple clouds, data security in a multi-cloud environment, and consistent cloud management. This is the Red Hat and IBM party line. Both believe they will be well positioned to address these issues and accelerate hybrid multi-cloud adoption. To succeed at this, the new entity will have to tap its leadership in Linux, containers, Kubernetes, multi-cloud management, and automation.

IBM first brought Linux to the Z 20 years ago, making IBM an early advocate of open source, collaborating with Red Hat to help grow enterprise-class Linux.  More recently the two companies worked to bring enterprise Kubernetes and hybrid cloud solutions to the enterprise. These innovations have become core technologies within IBM’s $19 billion hybrid cloud business.

The initial announcement made the point that Red Hat will join IBM’s Hybrid Cloud team as a distinct unit, preserving, as IBM described it, the independence and neutrality of Red Hat’s open source development heritage and commitment, current product portfolio, go-to-market strategy, and unique development culture. Red Hat will also continue to be led by Jim Whitehurst and its current management team.

That camaraderie lasted until the Q&A following the announcement, when a couple of disagreements arose following different answers on relatively trivial points. Are you surprised? Let’s be clear, nobody spends $34 billion on a $3 billion asset and gives it a completely free hand. You can bet IBM will be calling the shots on everything it feels is important. Would you do less?

Dharmesh Thakker, a contributor to Forbes, focused more on Red Hat’s OpenShift family of development software. These tools make software developers more productive and are helping transform how software is created and implemented across most enterprises today. So “OpenShift is likely the focus of IBM’s interest in Red Hat,” he observes.

A few years ago, he continued, the pendulum seemed to shift from companies deploying more-traditional, on-premises datacenter infrastructure to using public cloud vendors, mostly Amazon. In the last few years, we’ve seen most mission-critical apps inside companies continue to run on a private cloud but modernized by agile tools and microservices to speed innovation. Private cloud represents 15-20% of datacenter spend, Thakker reports, but the combo of private plus one or more public clouds (hybrid cloud) is here to stay, especially for enterprises. Red Hat’s OpenShift technology enables on-premises, private cloud deployments, giving IBM the ability to play in the hybrid cloud.

IBM isn’t closing this deal until well into 2019; expect to hear more about this in the coming months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.


Secure Containers for the Z

October 11, 2018

What’s all this talk about secure containers? Mainframe data center managers have long used secure containers, only they call them logical partitions (LPARs). Secure service containers must be some x86 thing.

Courtesy: Mainframe Watch Belgium

Writing the first week in October, Ross Mauri, General Manager, IBM Z, observes: “Today’s executives in a digitally empowered world want IT to innovate and deliver outstanding user experiences. But, as you know, this same landscape increases exposure and scrutiny around the protection of valuable and sensitive data.” IBM’s answer: new capabilities for the IBM z14 and LinuxONE platforms that handle digital transformation while responding to immediate market needs and delivering effective solutions.

The Secure Service Container hosts container-based applications for hybrid and private cloud workloads on IBM LinuxONE and Z servers as an IBM Cloud Private software solution. This secure computing environment for microservices-based applications can be deployed without code changes to exploit its inherent security capabilities. In the process, it provides:

  • Tamper protection during installation time
  • Restricted administrator access to help prevent the misuse of privileged user credentials
  • Automatic encryption of data both in flight and at rest

This differs from an LPAR. According to IBM, LPARs (logical partitions) are, in practice, equivalent to separate mainframes. This is not trivial power. Each LPAR runs its own operating system, which can be any mainframe operating system; there is no need to run z/OS, for example, in each LPAR. The installation planners also may elect to share I/O devices across several LPARs, but this is a local decision.

The system administrator can assign one or more system processors for the exclusive use of an LPAR. Alternatively, the administrator can allow all processors to be used on some or all LPARs. Here, the system control functions (often known as microcode or firmware) provide a dispatcher to share the processors among the selected LPARs. The administrator can specify a maximum number of concurrent processors executing in each LPAR. The administrator can also provide weightings for different LPARs; for example, specifying that LPAR1 should receive twice as much processor time as LPAR2. If the code in one LPAR crashes, it has no effect on the other LPARs. Not sure this is the case with the new microservices containers.
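The weighting scheme above can be pictured with a toy dispatcher. To be clear, this is a simplified sketch of the idea, not the PR/SM firmware: each time slice is awarded in proportion to the configured LPAR weights, so a weight of 2 versus 1 yields roughly a 2:1 split of processor time.

```python
import random

def dispatch(weights, slices, seed=42):
    """Award processor time slices in proportion to each LPAR's weight."""
    rng = random.Random(seed)            # fixed seed keeps the demo repeatable
    names = list(weights)
    totals = {name: 0 for name in names}
    for _ in range(slices):
        # pick the next LPAR to run, biased by its configured weight
        winner = rng.choices(names, weights=[weights[n] for n in names])[0]
        totals[winner] += 1
    return totals

# LPAR1 weighted to receive twice as much processor time as LPAR2
shares = dispatch({"LPAR1": 2, "LPAR2": 1}, slices=30000)
```

With 30,000 slices the split lands close to 20,000/10,000; a real dispatcher, of course, also honors the per-LPAR cap on concurrent processors.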

Mauri tries to make the case for the new containers. These containers allow applications and data to inherit a layer of security with Secure Service Containers that, in turn, inherit the embedded capabilities at the core of IBM Z and LinuxONE to help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives. DancingDinosaur does not know what “hyper protect” means in this context. Sounds like marketing-speak.

Also Mauri explains that IBM Secure Service Containers help protect the privacy of sensitive company data and customer data from administrators with elevated credentials. At the same time they allow development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications.

In fact, IBM continues the explanation by saying it selected this unique and class-leading data privacy assurance technology to allow applications and data to inherit yet another layer of security through Secure Service Containers. “We’ve embedded capabilities at the core of IBM Z and LinuxONE that help hyper protect your data, guard against internal and external threats, and simplify your data compliance initiatives.” IBM does like the hyper protect phrase; wish DancingDinosaur knew what it meant. A Google search comes up with Hyper Protect Crypto Services, which IBM concedes is still in an experimental phase, so, in fact, it doesn’t mean anything yet. Maybe in the future.

IBM Secure Service Containers help protect the privacy of sensitive company and customer data from administrators with elevated credentials—a serious risk—while, at the same time, allowing development teams to use cutting-edge container-based technologies to deploy new or existing containerized applications. OK, DancingDinosaur can accept this but it seems only marginally different from what you can do with good ole LPARs. Maybe the difference only becomes apparent when you attempt to build the latest generation microservices-based apps.

If your choice comes down to secure service containers or LPARs, guess you need to look at what kind of apps you want to deploy. All DancingDinosaur can add is LPARs are powerful, known, and proven technology.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog, and see more of his work at technologywriter.com.

Hybrid Cloud to Streamline IBM Z

June 27, 2018

2020 is the year, according to IDC, when combined IT infrastructure spending on private and public clouds will eclipse spending on traditional data centers. The researcher predicts the public cloud will account for 31.68 percent of IT infrastructure spending in 2020, while private clouds will take a 19.82 percent slice of the spending pie, totaling more than half (51.5 percent) of all infrastructure spending for the first time, with the rest going to traditional data centers.

Source: courtesy of IBM

There is no going back. By 2021 IDC expects the balance to continue tilting further toward the cloud, with combined public and private cloud dollars making up 53.15 percent of infrastructure spending. Enterprise spending on cloud, according to IDC, will grow to over $530 billion as more than 90 percent of enterprises come to use a mix of multiple cloud services and platforms, both on and off premises.

Technology customers want choices. They want to choose their access device, interface, deployment options, cost and even their speed of change. Luckily, today’s hybrid age enables choices. Hybrid clouds and multi-cloud IT offer the most efficient way of delivering the widest range of customer choices.

For Z shops, this shouldn’t come as a complete surprise. IBM has been preaching the hybrid gospel for years, at least since x86 machines began making significant inroads into its platform business. The basic message has always been the same: Center the core of your business on the mainframe and then build around it— using x86 if you must but now try LinuxONE and hybrid clouds, both public and on-premises.

For many organizations a multi-cloud strategy using two or more different clouds, public or on-premises, offers the fastest and most efficient way of delivering the maximum in choice, regardless of your particular strategy. For example, one cloud might handle compute while another handles storage. Or an organization might use different clouds for different purposes: one for finance, another for R&D, and yet another for DevOps.

The reasoning behind a multi-cloud strategy can also vary. Reasons can range from risk mitigation, to the need for specialized functionality, to cost management, analytics, security, flexible access, and more.

Another reason for a hybrid cloud strategy, which should resonate with DancingDinosaur readers, is modernizing legacy systems. According to Gartner, by 2020, every dollar invested in digital business innovation will require enterprises to spend at least three times that to continuously modernize the legacy application portfolio. In the past, such legacy application portfolios have often been viewed as a problem subjected to large-scale rip-and-replace efforts in desperate, often unsuccessful attempts to salvage them.

With the growth of hybrid clouds, data center managers instead can manage their legacy portfolio as an asset by mixing and matching capabilities from various cloud offerings to execute business-driven modernization. This will typically include microservices, containers, and APIs to leverage maximum value from the legacy apps, which will no longer be an albatross but a valuable asset.

While the advent of multi-clouds or hybrid clouds may appear to complicate an already muddled situation, they actually provide more options and choices as organizations seek the best solution for their needs at their price and terms.

With the Z this may be easier done than it initially sounds. “Companies have lots of records on Z, and the way to get to these records is through APIs, particularly REST APIs,” explains Juliet Candee, IBM Systems Business Continuity Architecture. Start with the IBM Z Hybrid Cloud Architecture. Then, begin assembling catalogs of APIs and leverage z/OS Connect to access popular IBM middleware like CICS. By using z/OS Connect and APIs through microservices, you can break monolithic systems into smaller, more composable and flexible pieces that contain business functions.
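The REST route Candee describes can be sketched from the consumer side. The host, path, and token below are made-up placeholders for illustration, not an actual z/OS Connect URL; a real service publishes its own REST paths fronting CICS transactions.

```python
import urllib.request

# Hypothetical z/OS Connect endpoint fronting a CICS inquiry -- the host,
# path, and bearer token are illustrative placeholders only.
url = "https://zosconnect.example.com:9443/accounts/v1/12345"
req = urllib.request.Request(
    url,
    headers={"Accept": "application/json",
             "Authorization": "Bearer <token>"},
    method="GET",
)
# A real client would now call urllib.request.urlopen(req) and parse the
# JSON reply; here we only show how such a request is shaped.
```

The point of the pattern is that the caller needs nothing mainframe-specific: any language with an HTTP client and a JSON parser can reach the records behind the API.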

Don’t forget LinuxONE, another Z but optimized for Linux and available at a lower cost. With the LinuxONE Rockhopper II, the latest slimmed down model, you can run 240 concurrent MongoDB databases executing a total of 58 billion database transactions per day on a single server. Accelerate delivery of your new applications through containers and cloud-native development tools, with up to 330,000 Docker containers on a single Rockhopper II server. Similarly, lower TCO and achieve a faster ROI with up to 65 percent cost savings over x86. And the new Rockhopper II’s industry-standard 19-inch rack uses 40 percent less space than the previous Rockhopper while delivering up to 60 percent more Linux capacity.

This results in what Candee describes as a new style of building IT that involves much smaller components, which are easier to monitor and debug. Then, connect it all to IBM Cloud on Z using secure Linux containers. This could be a hybrid cloud combining IBM Cloud Private and an assortment of public clouds along with secure zLinux containers as desired.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Please note: DancingDinosaur will be away for the first 2 weeks of July. The next piece should appear the week of July 16 unless the weather is unusually bad.

New IBM z13s Brings Built-in Encrypted Security to Entry Level

February 19, 2016

Earlier this week IBM introduced the z13s, what it calls the world’s most secure server, built for hybrid cloud and sized for mid-sized organizations. The z13s promises better business outcomes, faster decision making, less regulatory exposure, greater scale, and better fraud protection. And at the low end it is accessible to smaller enterprises, maybe those who have never tried a z before.

z13s features embedded cryptography that brings the benefits of the mainframe to mid-sized organizations. Courtesy IBM

A machine like the low end z13s used to be referred to as a business class (BC) mainframe.  IBM declined to quote a price, except to say z13s will go “for about the same price as previous generations for the equivalent capacity.”  OK, back in July 2013 IBM published the base price of the zEC12 BC machine at $75,000. IBM made a big deal of that pricing at the time.

The key weasel phrase in IBM’s statement is: “for the equivalent capacity.”  Two and a half years ago the $75k zEC12 BC offered significantly more power than its predecessor. Figuring out equivalent capacity today given all the goodies IBM is packing into the new machine, like built-in chip-based cryptography and more, is anybody’s guess. However, given the plummeting costs of IT components over the past two years, you should get it at a base price of $100k or less. If not, call Intel. Adds IBM: The infrastructure costs of z13s are comparable to the Public Cloud infrastructure costs with enterprise support; significant software savings result from core consolidation on the z13s.

But the z13s is not just about price. As digital business becomes a standard practice and transaction volumes increase, especially mobile transaction volumes, the need for increased security becomes paramount. Cybercrime today has shifted. Rather than stealing data, criminals are compromising data accuracy and reliability. This is where the z13s’ bolstered built-in security and access to APIs and microservices in a hybrid cloud setting can pay off by keeping data integrity intact.

IBM’s z13s, described as the new entry point to the z Systems portfolio for enterprises of all sizes, is packed with a number of security innovations. (DancingDinosaur considered the IBM LinuxONE Rockhopper as the current z entry point but it is a Linux-only machine.) For z/OS, the z13s will be the entry point. The security innovations include:

  • Ability to encrypt sensitive data without compromising transactional throughput and response time through its updated cryptographic and tamper-resistant hardware-accelerated cryptographic coprocessor cards with faster processors and more memory. In short: encryption at twice the speed equates to processing twice as many online or mobile device purchases in the same time, effectively helping to lower the cost per transaction.
  • Leverage the z Systems Cyber Security Analytics offering, which delivers an advanced level of threat monitoring based on behavior analytics. Also part of the package, IBM® Security QRadar® security software correlates data from more than 500 sources to help organizations determine whether security-related events are simply anomalies or potential threats. This z Systems Cyber Security Analytics service will be available at no charge as a beta offering for z13 and z13s customers.
  • IBM Multi-factor Authentication for z/OS (MFA) is now available on z/OS. The solution adds another layer of security by requiring privileged users to enter a second form of identification, such as a PIN or randomly generated token, to gain access to the system. This is the first time MFA has been tightly integrated in the operating system, rather than through an add-on software solution. This level of integration is expected to deliver more streamlined configuration and better stability and performance.
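Randomly generated tokens of the kind MFA schemes accept generally follow the HOTP/TOTP pattern standardized in RFC 4226 and RFC 6238. The sketch below is a generic illustration of that pattern, not IBM’s implementation: an HMAC-SHA1 over a moving counter is truncated to a short one-time code.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: counter 0 yields "755224"
print(hotp(b"12345678901234567890", 0))
```

For a time-based token (TOTP), the counter is simply the current Unix time divided into 30-second steps, which is why the displayed code changes every half minute.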

Hybrid computing and hybrid cloud also play a big part in IBM’s latest thinking around z Systems. As IBM explains, hybrid cloud infrastructure offers advantages in flexibility but can also present new vulnerabilities. When paired with z Systems, IBM’s new security solutions can allow clients to establish end-to-end security in their hybrid cloud environment.

Specifically, IBM Security Identity Governance and Intelligence can help prevent inadvertent or malicious internal data loss by governing and auditing access based on known policies while granting access to those who have been cleared as need-to-know users. IBM Security Guardium uses analytics to help ensure data integrity by providing intelligent data monitoring, which tracks users as they access specific data and help to identify threat sources quickly in the event of a breach. IBM Security zSecure and QRadar use real-time alerts to focus on the identified critical security threats that matter the most.

Conventional z System data centers should have no difficulty migrating to the z13 or even the z13s.  IBM told DancingDinosaur it will continue to protect a client’s investment in technology with serial number preservation on the IBM z13s.  The company also is offering upgrades from the zEnterprise BC12 (zBC12) and from the zEnterprise 114 (z114) to the z13s.   Of course, it supports upgradeability within the IBM z13 family; a z13s N20 model can be upgraded to the z13 N30 model. And once the z13s is installed it allows on demand offerings to access temporary or permanent capacity as needed.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Docker on IBM z System

January 7, 2016

“If you want Docker on z, you can do it in next to 30 seconds,” says Dale Hoffman, Program Director, Linux SW Ecosystem & Innovation Lab. At least if you’re running Linux on z, and preferably on a LinuxONE machine. With all the work Hoffman’s team has done laying the groundwork for Docker on the z, you barely have to do anything yourself.


Containers are ideal for cloud computing or, more importantly, for hybrid clouds, defined as the connection of one or more clouds to other clouds. Hybrid clouds are where IBM sees the industry and the z going, and containers, particularly Docker containers, have emerged as the vehicle to get enterprises there. Click here for an FAQ on Docker with z.

z System shops can get there fast using tools Hoffman’s group has already built for the z. To get started, just click here. Or, simply go to IBM Bluemix, from which you can build and deploy Docker containers for the z and other platforms. Back in June IBM introduced enterprise class containers that make it easier for developers to deliver production applications across their hybrid environments.

IBM also offers its own IBM branded containers that allow organizations to deploy, manage, and run application components on the IBM Bluemix development platform by leveraging the open-source Docker container technology. IBM Bluemix now offers three infrastructure compute technology choices to deploy applications – Docker containers, OpenStack virtual machines, or Cloud Foundry apps. Designed for enterprise production workloads, IBM Containers can be securely deployed with integrated scalability and reliability, which enterprise customers rely upon.

In keeping with IBM’s policy of not going it alone, the company also has become a founding member of a coalition of partners and users to create the Open Container Platform (OCP) that aims to ensure containers are interoperable. Features of the IBM Containers include integrated tools such as log analytics, performance monitoring and delivery pipeline, elastic scaling, zero downtime deployments, automated image security/vulnerability scanning, and access to Bluemix’s catalog of over 100 cloud services including Watson, Analytics, IoT and Mobile.

Enterprise z shops want containers because they need to be as fast and agile as the born-in-the-cloud upstarts challenging them. Think survival. Containers like Docker really provide ease of use, portability, and fast deployment almost anywhere to get new applications into production fast. Docker basically puts its engine/runtime on top of the OS and provides the virtual containers into which you deploy your software. The appeal of this is easy portability of the application to any Docker host anywhere and fast deployment.

Specifically, the Docker technology provides application portability by utilizing open-source, standardized, lightweight, and self-sufficient container capabilities. IBM’s implementation of the Docker technology with enterprise capabilities further strengthens IBM’s support for hybrid cloud environments. Of course, not every application at every stage of its lifecycle will run in the public cloud (many, if not most, never will), but IBM Containers enables developers to determine when to run containers on premises and when to deploy to the public cloud on IBM Bluemix with full Internet connectivity. Image files created within IBM Containers support portability and can be instantiated as containers on any infrastructure that runs Docker.

Through the use of containers on z you can shape your environment using system virtualization and container elements according to your landscape and your requirements with hardly any constraints in performance.  In addition, Docker on z provides greater business agility to go to market quicker and solve business problems effectively through DevOps agility via Docker containers and microservices. Then add hybrid cloud and portability by which you move the same application across multiple clouds.   In short, you can define your IT structures according to your needs, not your system constraints.

Finally, there is nothing threatening about Docker containers on z. Docker is Docker is Docker, even on z, says Hoffman; it relies on the same container technology of Linux, which has been available on z for many years. So get started with containers on z and let DancingDinosaur know when you have success deploying your z containers.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

