Posts Tagged ‘Windows’

AI and IBM Watson Fuel Interest in App Dev among Mainframe Shops

December 1, 2016

BMC’s 2016 mainframe survey, covered by DancingDinosaur here, pointed both directly and indirectly to increased activity around data center applications. Mainly this took the form of increased interest in Java on the z as a platform for new applications. Specifically, 72% of overall respondents reported using Java today, while 88% reported plans to increase their use of Java. At the same time, the use of Linux on the z has been growing steadily year over year: 41% in 2014, 48% in 2015, and 52% in 2016. The growth of both points to a heightened interest in application development, management, and change.


IBM’s Project DataWorks uses Watson Analytics to create complex visualizations with one line of code

IBM has been feeding this kind of AppDev interest with its continued enhancement of Bluemix and the rollout of the Bluemix Garage method. More recently, it announced a partnership with Topcoder, a global software development community of more than one million designers, developers, data scientists, and competitive programmers, with the aim of stimulating developers looking to harness the power of Watson to create the next generation of AI apps, APIs, and solutions.

According to Forrester VP and Principal Analyst JP Gownder, quoted in the IBM announcement, automation will change every job category by at least 25% by 2019. Additionally, IDC predicts that 75% of developer teams will include cognitive/AI functionality in one or more applications by 2018. The industry is driving toward a new level of computing potential not witnessed since the introduction of big data.

To further drive the cultivation of this new style of developer, IBM is encouraging participation in Topcoder-run hackathons and coding competitions. Here developers can easily access a range of Watson services – such as Conversation, Sentiment Analysis, or speech APIs – to build powerful new tools with the help of cognitive computing and artificial intelligence. Topcoder hosts 7,000 code challenges a year and has awarded $80 million to its community. In addition, now developers will have the opportunity to showcase and monetize their solutions on the IBM Marketplace, while businesses will be able to access a new pipeline of talent experienced with Watson and AI.

In addition to a variety of academic partnerships, IBM recently announced an AI Nanodegree program with Udacity to help developers establish a foundational understanding of artificial intelligence. Plus, IBM offers the IBM Learning Lab, which features more than 100 curated online courses and cognitive use cases from providers like Codecademy, Coursera, Big Data University, and Udacity. Don’t forget IBM developerWorks, which offers how-to tutorials and courses on IBM tools and open standard technologies for all phases of the app dev lifecycle.

To keep the AI development push going, recently IBM unveiled the experimental release of Project Intu, a new system-agnostic platform designed to enable embodied cognition. The new platform allows developers to embed Watson functions into various end-user devices, offering a next generation architecture for building cognitive-enabled experiences.

Project Intu is accessible via the Watson Developer Cloud and also available on Intu Gateway and GitHub. The initiative simplifies the process for developers wanting to create cognitive experiences in various form factors such as spaces, avatars, robots, or IoT devices. In effect, it extends cognitive technology into the physical world. The platform enables devices to interact more naturally with users, triggering different emotions and behaviors and creating more meaningful and immersive experiences for users.

Developers can simplify and integrate Watson services, such as Conversation, Language, and Visual Recognition with the capabilities of the device to act out the interaction with the user. Instead of a developer needing to program each individual movement of a device or avatar, Project Intu makes it easy to combine movements that are appropriate for performing specific tasks like assisting a customer in a retail setting or greeting a visitor in a hotel in a way that is natural for the visitor.
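
The contrast is easy to sketch. The toy Python below is not Intu’s actual API; it simply illustrates composing primitive movements into a named, reusable behavior instead of scripting each movement by hand:

```python
# Conceptual sketch only (Project Intu's real API differs): compose
# primitive device movements into named behaviors that a developer can
# trigger as a unit, rather than programming each movement individually.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Device:
    log: List[str] = field(default_factory=list)

    def act(self, movement: str) -> None:
        # A real device would drive a motor, screen, or speaker here.
        self.log.append(movement)

class BehaviorRegistry:
    """Maps a high-level behavior name to a sequence of primitive movements."""
    def __init__(self) -> None:
        self._behaviors: Dict[str, List[str]] = {}

    def define(self, name: str, movements: List[str]) -> None:
        self._behaviors[name] = movements

    def perform(self, name: str, device: Device) -> None:
        for movement in self._behaviors[name]:
            device.act(movement)

registry = BehaviorRegistry()
registry.define("greet_visitor", ["turn_to_face", "wave", "speak:welcome"])

robot = Device()
registry.perform("greet_visitor", robot)
print(robot.log)  # ['turn_to_face', 'wave', 'speak:welcome']
```

Once “greet_visitor” is defined, any device that implements `act` can perform it, which is the gist of reusing one behavior across avatars, robots, and IoT form factors.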

Project Intu is changing how developers make architectural decisions about integrating different cognitive services into an end-user experience – such as what actions the systems will take and what will trigger a device’s particular functionality. Project Intu offers developers a ready-made environment on which to build cognitive experiences running on a wide variety of operating systems – from Raspberry PI to MacOS, Windows to Linux machines.

With initiatives like these, the growth of cognitive-enabled applications will likely accelerate. As IBM reports, IDC estimates that “by 2018, 75% of developer teams will include Cognitive/AI functionality in one or more applications/services.” This is a noticeable jump from last year’s prediction that 50% of developers would leverage cognitive/AI functionality by 2018.

For those z data centers surveyed by BMC that worried about keeping up with Java and big data, AI adds yet an entirely new level of complexity. Fortunately, the tools to work with it are rapidly falling into place.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghostwriter. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks your z System can do: real-time analytics, IoT, blockchain, and more. This is part of the digital transformation blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.


EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformation by unlocking core business logic and apps. Specifically, it pinpoints your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it enables understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of change overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization
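
Dead-code identification, for instance, boils down to reachability analysis over a call graph. The toy sketch below (not EZSource’s actual algorithm; the routine names are invented) illustrates the kind of interdependency analysis such tools automate:

```python
# Toy illustration: find "dead" routines by computing which routines are
# reachable from known entry points in a call graph. Anything never
# reached from an entry point is a candidate for removal.

from typing import Dict, List, Set

def reachable(call_graph: Dict[str, List[str]], entry_points: List[str]) -> Set[str]:
    """Depth-first walk of the call graph from the entry points."""
    seen: Set[str] = set()
    stack = list(entry_points)
    while stack:
        routine = stack.pop()
        if routine in seen:
            continue
        seen.add(routine)
        stack.extend(call_graph.get(routine, []))
    return seen

# Hypothetical mainframe application: routine -> routines it calls
calls = {
    "MAIN": ["VALIDATE", "POST"],
    "VALIDATE": ["FORMAT"],
    "POST": [],
    "FORMAT": [],
    "OLDRPT": ["FORMAT"],   # nothing reaches OLDRPT: dead code
}

live = reachable(calls, ["MAIN"])
dead = set(calls) - live
print(sorted(dead))  # ['OLDRPT']
```

The same reachability data also answers the risk-of-change question: before touching FORMAT, you can see exactly which routines depend on it.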

Managers at z data centers often shy away from modifying aging business-critical applications for fear of breaking something; if it ain’t broke, don’t fix it is often the mantra. They also are rationing the time of their few remaining experienced z veterans, the ones with the domain expertise and deep knowledge of what turns out to be quite esoteric code. This is further aggravated by poorly documented mainframe code. The way to mitigate the risk of change is through understanding code, data, and interdependencies. EZSource handles this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying your mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and microservices. This may require rewriting parts of them in platform-agnostic languages such as Java to work within the context of a hybrid cloud. As noted above, EZSource can help with much of this too.
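
To make that wrapping step concrete, here is a hypothetical sketch: a stand-in for a CICS routine fronted by a thin, platform-agnostic JSON interface. The names and the balance data are invented; a real implementation would reach CICS through a connector such as z/OS Connect:

```python
# Hypothetical sketch of exposing a legacy routine through a JSON-friendly
# service layer. legacy_get_balance stands in for a CICS program; the
# account IDs and amounts are made up for illustration.

import json

def legacy_get_balance(account_id: str) -> int:
    """Stand-in for a CICS routine; returns a balance in cents."""
    ledger = {"0001": 12550, "0002": 980}
    return ledger[account_id]

def balance_api(request_body: str) -> str:
    """Thin wrapper: JSON request in, JSON response out."""
    request = json.loads(request_body)
    cents = legacy_get_balance(request["account_id"])
    return json.dumps({
        "account_id": request["account_id"],
        "balance": f"{cents // 100}.{cents % 100:02d}",
    })

print(balance_api('{"account_id": "0001"}'))
# {"account_id": "0001", "balance": "125.50"}
```

The legacy routine itself is untouched; only the thin wrapper needs to change as new mobile or cloud consumers appear.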

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.


Syncsort’s 2015 State of the Mainframe: Little Has Changed

November 30, 2015

Syncsort’s annual survey of almost 200 mainframe shops found that 83 percent of respondents cited security and availability as key strengths of the mainframe. Are you surprised? You can view the detailed results here for yourself.


Courtesy: Syncsort

Security and availability have been hallmarks of the z for decades. Even Syncsort’s top mainframe executive, Harvey Tessler, could point to little unexpected in the latest results. “Nothing surprising. At least no big surprises. Expect the usual reliability, security,” he noted. BTW, in mid-November Clearlake Capital Group, L.P. (Clearlake) announced that it had completed the acquisition of Syncsort Incorporated. Apparently no immediate changes are planned.

The 2015 study also confirmed a few more recent trends that DancingDinosaur has long suspected. More than two-thirds (67 percent) of respondents cited integration with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe.

Similarly, the majority (79 percent) analyze real-time transactional data from the mainframe with a tool that resides directly on the mainframe. That, in fact, may be the most surprising response. Mainframe shops (or more likely the line-of-business managers they work with) are notorious for moving data off the mainframe for analytics, usually to distributed x86 platforms. The study showed respondents are also turning to platforms such as Splunk (11.8 percent), Hadoop (8.6 percent), and Spark (1.6 percent) to supplement their real-time data analysis.

Many of the respondents no doubt will continue to do so, but it makes little sense in 2015 with a modern z System running a current configuration. In truth, it makes little sense from either a performance or a cost standpoint to move data off the z to perform analytics elsewhere. The z runs Hadoop and Spark natively. With your data and key analytics apps already on the z, why bother incurring both the high overhead and the high latency entailed in moving data back and forth to run on what is probably a slower platform anyway?

The only possible reason might be that the mainframe shop doesn’t run Linux on the mainframe at all. That can be easily remedied, however, especially now with the introduction of Ubuntu Linux for the z. C’mon, it’s late 2015; modernize your z for the cloud-mobile-analytics world and stop wasting time and resources jumping back and forth to distributed systems that will run natively on the z today.

More encouraging is the interest of the respondents in big data and analytics. “The survey demonstrates that many big companies are using the mainframe as the back-end transaction hub for their Big Data strategies, grappling with the same data, cost, and management challenges they used it to tackle before, but applying it to more complex use cases with more and dauntingly large and diverse amounts of data,” said Denny Yost, associate publisher and editor-in-chief for Enterprise Systems Media, which partnered with Syncsort on the survey. The results show the respondents’ interest in mainframe’s ability to be a hub for emerging big data analytics platforms also is growing.

On other issues, almost one-quarter of respondents ranked as very important the ability of the mainframe to run other computing platforms such as Linux on an LPAR or z/VM virtual machines as a key strength of the mainframe at their company. Over one-third of respondents ranked as very important the ability of the mainframe to integrate with other standalone computing platforms such as Linux, UNIX, or Windows as a key strength of the mainframe at their company.

Maybe more surprising: only 70% of the respondents ranked as very important their organization’s use of the mainframe for performing large-scale transaction processing or for hosting mission-critical applications. Given that the respondents appeared to come from large, traditional mainframe shops, you might have expected those numbers to be closer to 85-90%. Go figure.

When asked to rank their organization’s use of the mainframe to supplement or replace non-mainframe servers (i.e. RISC or x86-based servers) just 10% of the respondents considered it important. Clearly the hybrid mainframe-based data center is not a priority with these respondents.

So, what are they looking to improve in the next 12 months? The respondents’ top three initiatives are:

  1. Meeting Security and Compliance Requirements
  2. Reducing CPU usage and related costs
  3. Meeting Service Level Agreements (SLAs)

These aren’t the most ambitious goals DancingDinosaur has ever encountered but they should be quite achievable in 2016.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Syncsort Brings z System Integration Software to Open Source Tools

October 13, 2015

In a series of announcements last month, Syncsort integrated its DMX-h data integration software with Apache Kafka, an open distributed messaging system. This will enable mainframe shops to tap DMX-h’s easy-to-use GUI to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.

Spark graphic

Courtesy of IBM

Syncsort also delivered an open source contribution of an IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform. Not stopping there, Syncsort is integrating the Intelligent Execution capabilities of its DMX data integration product suite with Apache Spark too. Intelligent Execution allows users to visually design data transformations once and then run them anywhere – across Hadoop, MapReduce, Spark, Linux, Windows, or Unix, on premise or in the cloud.

Said Tendü Yoğurtçu, General Manager of Syncsort’s big data business, in the latest announcement: “We are seeing increased demand for real-time analytics in industries such as healthcare, financial services, retail, and telecommunications.” With these announcements, Syncsort sees itself delivering the next generation streaming ETL and Internet of Things data integration platform.

Of course, the Syncsort offering should be unnecessary for most z System users except those that are long-term Syncsort shops or are enamored of Syncsort’s GUI. IBM already offers Spark natively on z/OS and Linux on z, so there is no additional cost. BTW, Syncsort itself was just acquired. What happens with its various products remains to be seen.

Still, IBM has been on a 12-year journey to expand mainframe workloads—Linux to Hadoop and Spark and beyond—and the company has been urging mainframe shops to become fully engaged in big data, open source, and more as fast as possible. The Syncsort announcements come at a propitious time; mainframe data centers can more easily participate in the hottest use cases (real-time data analytics, streaming analytics across diverse data sources, and more) just when the need for such analytics is increasing.

Apache Spark and some of these other technologies should already be a bit familiar to z System data centers; Apache Kafka will be less familiar. DancingDinosaur noted Spark and others here, when LinuxOne was introduced.

To refresh, Apache Spark consists of a fast engine for large-scale data processing that provides over 80 high-level operators to make it easy to build parallel apps or use them interactively from the Scala, Python, and R shells. It also offers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. As noted above, Syncsort offers an open source version of the IBM z Systems mainframe connector that makes mainframe data available to the Apache Spark open-source analytics platform.

Spark already has emerged as one of the most active big data open source projects, initially as a fast memory-optimized processing engine for machine learning and now as the single compute platform for all types of workloads including real-time data processing, interactive queries, social graph analysis, and others. Given Spark’s success, there is a growing need to securely access data from a diverse set of sources, including mainframes, and to transform the data into a format that is easily understandable by Spark.
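
One routine piece of that transformation work is decoding EBCDIC fixed-width records into something an analytics engine can read. A minimal sketch in Python, whose standard library includes the cp037 EBCDIC codec (the record layout here is invented for the example):

```python
# Illustrative only: decode an EBCDIC (code page 037) fixed-width record
# into a Python dict, the kind of transformation needed before mainframe
# data is "easily understandable" by a platform like Spark. The layout
# (10-byte name, 5-byte zoned amount) is made up for the example.

def parse_record(raw: bytes) -> dict:
    text = raw.decode("cp037")          # EBCDIC -> str
    return {
        "name": text[:10].rstrip(),
        "amount": int(text[10:15]),
    }

record = "JONES     00042".encode("cp037")   # simulate a mainframe record
print(parse_record(record))  # {'name': 'JONES', 'amount': 42}
```

Real mainframe files add packed-decimal (COMP-3) fields and copybook-driven layouts, which is exactly the drudgery connectors like Syncsort’s aim to automate.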

Apache Kafka, essentially an enterprise service bus, is less widely known. Apache Kafka brings a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system. Kafka is often used in place of traditional message brokers like JMS and AMQP because of its higher throughput, reliability and replication. Syncsort has integrated its data integration software with Apache Kafka’s distributed messaging system to enable users to leverage DMX-h’s GUI as part of an effort to subscribe, transform, enrich, and distribute enterprise-wide data for real-time Kafka messaging.
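
Kafka’s publish-subscribe model can be sketched in a few lines. This toy in-memory broker is illustrative only (real Kafka adds partitioning, replication, and durable storage across brokers):

```python
# Minimal in-memory sketch of the publish-subscribe pattern Kafka uses:
# producers append messages to a named topic, the broker keeps an ordered
# log per topic, and every subscriber to the topic receives each message.

from collections import defaultdict
from typing import Callable, Dict, List

class MiniBroker:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[str], None]]] = defaultdict(list)
        self._log: Dict[str, List[str]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        self._log[topic].append(message)       # ordered log, as in Kafka
        for handler in self._subscribers[topic]:
            handler(message)

broker = MiniBroker()
received: List[str] = []
broker.subscribe("mainframe.events", received.append)
broker.publish("mainframe.events", "txn:deposit:100")
broker.publish("mainframe.events", "txn:withdraw:40")
print(received)  # ['txn:deposit:100', 'txn:withdraw:40']
```

The decoupling is the point: the publisher of mainframe transaction events needs no knowledge of which analytics consumers are listening downstream.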

According to Matei Zaharia, creator of Apache Spark and co-founder & CTO of Databricks: “Organizations look to Spark to enable a variety of use cases, including streaming data analytics across diverse data sources”.  He continues: “Syncsort has recognized the importance of Spark in the big data ecosystem for real-time streaming applications and is focused on making it easy to bring diverse data sets into Spark.” IBM certainly recognizes this too, and the z System is the right platform for making all of this happen.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Compuware Aims for Mainframe Literacy in CIOs

November 13, 2014

Many IT professionals, especially younger ones, are clueless about the mainframe. Chris O’Malley, president of the mainframe business at Compuware, has met CIOs who are versed in everything about IT and have seemingly done everything there is with computers, but “they are not literate about the mainframe.” That means the mainframe never comes to mind. IBM could give away a zEnterprise for free, which it comes close to doing today through the System z Solution Edition program, and these CIOs would ignore it. O’Malley wants to address that.


In response, Compuware is following the path of the IBM System z Academic Initiative, but without the extensive global involvement of colleges and universities, with a program called Mainframe Excellence 2025, which it describes as a generational call for strategic platform stewardship. “We’re also trying to debunk a lot of issues around the mainframe,” O’Malley continues.


Chris O’Malley, Pres. Mainframe, Compuware

Compuware refers to Mainframe Excellence 2025 as a manifesto, something of a call to arms for millennials to storm the IT gates and liberate IT management from enslavement to x86 computing. Somehow DancingDinosaur doesn’t see it happening exactly that way; it envisions coexistence and synergy.

Most of the Mainframe Excellence document goes over ground DancingDinosaur and many others have covered before. It is delightful, however, to see others refreshing the arguments. And, the document adds some interesting data. For instance, over 1.15 million CICS transactions are executed on System z every second of every day! That’s more than all Google searches, YouTube views, Facebook likes, and Twitter tweets combined.

It also pays homage to what it refers to as the mainframe’s culture of excellence. It characterizes this culture by rigorous adherence to a standard of excellence demonstrably higher than that associated with other platforms, notably x86. IT organizations actually expect, accept, and plan for problems and patches in other platforms (think Microsoft Patch Tuesday). Mainframe professionals, on the other hand, have zero-tolerance for downtime and system failures and the mainframe generally lives up to those high expectations.

Ironically, the document points out that the culture of excellence has created a certain chasm between mainframe professionals and the rest of IT. In fact, this ingrained zero-failure culture of the mainframe community—including both vendors and enterprise IT staffs—can sometimes put it at odds with the very spirit of innovation that allows the mainframe to deliver the repeated advances in price/performance and new capabilities that consistently produce tremendous value.

Compuware concludes its report with an action checklist:

  • Fully inventory existing mainframe data, applications (including business rules), capacity, utilization/MSUs, and management tools: a veritable trove of value embedded in mainframe code and business rules.
  • Build a fact-based skills plan with a realistic timeline.
  • Ramp up current and road-mapped mainframe capabilities.
  • Rightsize investments in mainframe application stewardship.
  • Institute an immediate moratorium on short-term cost-cutting that carries long-term negative consequences.
  • Combat denial and hype in regards to non-mainframe platform capabilities, costs and risks.

And Compuware’s final thought should give encouragement to all who must respond to the mainframe-costs-too-much complaint: IT has a long history of underestimating real TCO and marginal costs for new platforms while overestimating their benefits. A more sober assessment of these platforms will make the strategic value and economic advantages of the mainframe much more evident in comparison.

Compuware certainly is on the right track with Mainframe Excellence 2025. DancingDinosaur would like, however, to see the company coordinate its efforts with the System z Academic Initiative, the Master the Mainframe program, and the like.

DancingDinosaur is Alan Radding, a veteran IT writer/analyst. You can follow DancingDinosaur on Twitter, @mainframeblog. Also check out my other IT writing at Technologywriter.com and here.

IBM Gets Serious About Mobile

February 28, 2013

Just last week IBM announced IBM MobileFirst, a multi-product initiative to pull together a comprehensive mobile computing platform.  There was nothing in the announcement specific to the zEnterprise, but IBM has been telegraphing System z involvement in mobile for over a year.

In November of last year DancingDinosaur wrote of the z and all other platforms going mobile. Over a year earlier, DancingDinosaur was writing about using the z with smartphones. With SOA, Java, Linux, WebSphere, and Lotus running on the z, and with data that mobile apps and users want residing on the machine, the zEnterprise should over time become a prime player in enterprise mobile business.

Doug Balog, general manager of IBM’s System z mainframe business, might have had MobileFirst in mind when he said in Computerworld that the next steps IBM is considering include making it easier for customers to run mobile and social networking applications on mainframes.  Such an approach would, for example, benefit banks that want to offer mobile apps but still want the power and resilience of a mainframe behind those apps.

The first mobile workload you see on the zEnterprise, however, will not be Foursquare or some other funky mobile app.  More likely, it will be an operational analytics app dissecting mobile banking transaction data or analyzing the behavior of anyone making purchases through their smartphone.

MobileFirst boasts what IBM describes as the broadest portfolio of mobile offerings, covering platform, management, security, and analytics. In terms of platform, for instance, it currently offers streamlined deployment for private clouds on the PureApplication System. It provides single sign-on across multiple apps on a device and supports all four of the latest mobile operating systems (iOS, Android, Windows, and BlackBerry). It can handle native, web, or hybrid app development, promises easy connectivity to existing data and services for mobile usage, and can be deployed on premises or through managed service delivery.

In terms of management and security, MobileFirst offers unified management across all devices, making it suitable for BYOD. Similarly, it can secure sensitive data regardless of the device, including the option to remotely wipe corporate data. It also supports DOD-grade encryption and FIPS 140-2 compliance and will grant or deny email access based on device compliance. It also provides context-aware, risk-based access control through IBM Worklight. More security is delivered through IBM Security Access Manager for Mobile and Cloud and IBM AppScan.
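
The compliance-based access decision described there amounts to a simple policy gate. A hypothetical sketch (the rule set is invented; real MDM products evaluate far richer device state):

```python
# Hypothetical sketch of a device-compliance gate for email access.
# The three rules below are illustrative only; a production MDM policy
# would check OS version, patch level, certificate status, and more.

def email_access_allowed(device: dict) -> bool:
    checks = (
        device.get("encrypted", False),       # storage encryption on
        device.get("passcode_set", False),    # lock screen configured
        not device.get("jailbroken", True),   # assume worst if unknown
    )
    return all(checks)

compliant = {"encrypted": True, "passcode_set": True, "jailbroken": False}
rooted = {"encrypted": True, "passcode_set": True, "jailbroken": True}
print(email_access_allowed(compliant), email_access_allowed(rooted))  # True False
```

Note the fail-closed default: a device that cannot prove its state is treated as non-compliant, which is the conservative stance for corporate email.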

As for analytics, MobileFirst will automatically detect customer issues through user and mobile device data. It offers user behavior drill down through high fidelity replay and reporting to analyze the user experience. Finally, it correlates customer behavior with network and application data to determine conversion and retention rates and quantify business impact. It also can capture all activity on a device and link it to backend resources. Recently acquired Tealeaf will play a key role for user analytics and behavior.

As you would expect, in addition to acquisitions IBM is rapidly assembling an ecosystem of mobile players, carriers, and ISVs to build out a complete MobileFirst offering: carriers like AT&T, IBM itself as a surprising Apple VAR (US only), a partnership with Nokia Siemens Networks to develop the IBM WebSphere Application Service Platform for Networks to run IT apps at the mobile network edge, and a slew of resources for developers. There even is an IBM Academic Initiative for Mobile, patterned after the System z Academic Initiative, to increase the availability of skilled mobile developers. IBM also is jump-starting MobileFirst with about 200 of its own applications, mainly old favorites like Cognos and its key middleware.

But MobileFirst isn’t IBM’s only initiative with a mobile component. IBM Connections has had a mobile component since August 2011. Similarly, Lotus Notes Traveler supports Notes mobile users on all the major smartphones through IBM Lotus Domino or Lotus Domino Express deployments, and in the IBM cloud with IBM SmartCloud Notes. Although they weren’t specifically called out in the MobileFirst briefing, IBM assures DancingDinosaur they are included as part of the initiative’s application layer.

From the standpoint of a zEnterprise data center or any enterprise-class data center MobileFirst shouldn’t present a problem. Yes, it will increase the number and frequency of users accessing data handled through the data center and the number of devices they are using. And you’ll be running more data analytics more often. But IBM clearly has put effort into thinking through the critical security challenges of mobile and is providing a broad set of tools to begin addressing them. Sure, there is no RACF for mobile, at least not yet, but if it is needed you can bet there will be.

ISPW Brings Agile Development to the zEnterprise

April 2, 2012

When this blog was named people still scoffed at the notion of an agile mainframe. Many thought the mainframe, at best, a dancing dinosaur.  This skepticism persisted despite growing adoption of Linux on z and mainframe-centered SOA. Even now, the advent of the zEnterprise and hybrid computing, including the ability to run virtualized Windows on x86 blades running in the zBX, has not completely silenced the skeptics.

The zEnterprise today is as agile as any system Kent Beck, one of the pioneers of extreme programming and agile development and a co-author of the Agile Manifesto, could have imagined. To underscore the point, ISPW, a software tool provider, offers what it calls an agile IDE for the System z. A combination of an agile IDE and configuration management on steroids, ISPW’s toolset enables mainframe development spanning multiple platforms, operating systems, and software development environments ranging from COBOL and Java to REST. ISPW’s agile IDE handles everything from traditional mainframe procedural code to cross-platform, event-driven code and web services.

The problem the agile IDE addresses is application bloat, a problem Gartner has been citing for years and one that is particularly prevalent in the mainframe world with its decades of legacy code. For Gartner the bloated application portfolio is the 1000-pound gorilla in the room, one that piles on unnecessary maintenance and support costs. In 2010 Gartner estimated application portfolios were growing at 4-7% annually, which by itself puts a huge strain on budgets.

Until you gain clear visibility into the application environment, you cannot effectively manage the increasingly bloated application portfolio. It is this lack of visibility into, and control over, the application portfolio that hinders organizations, noted Gartner.

But even this application bloat is misunderstood. The bloat is not the huge amount of legacy spaghetti code that results from continuously modifying applications without removing obsolete code for fear of breaking something. Today’s ultra-fast, multi-core processors can race through that bloated code.

The real bloat comes from redundancy in the development process. Typically organizations deploy numerous tools to handle the different components and the various aspects of the software lifecycle for each platform and operating system. This results in multiple tools doing essentially the same thing. In a bloated application environment just about everything having to do with application development and deployment ends up being repeated multiple times. This slows down the organization and constrains it from doing new things to improve the business as well as unnecessarily increasing the cost of the software portfolio.

The agile mainframe IDE, as ISPW sees it, blurs the distinction between coding and other phases of the software development lifecycle. It may include version control, GUI tools, and support for multiple development languages. With an agile mainframe IDE, developers work within a single tool to address all aspects of the lifecycle, including testing, promotion, deployment, change management, and support. ISPW’s agile IDE also functions across mainframe and distributed systems with ease, moving and modifying code between environments as needed.

Application bloat is costly. An early user of the ISPW tool, an energy company, initiated a program to reduce costs in their existing System z software portfolio. The effort enabled the company to eliminate 23 software products and saved $13.2 million in future budget requests projected over the subsequent five years.

When the IT systems at an insurance company failed an external audit, the company quickly adopted and deployed the ISPW agile IDE across 400 developers, 60 applications, and 175,000 components. Since then it has not received a single auditor complaint.

As mainframe shops increasingly find themselves involved in multi-platform, hybrid computing, they will need tools like ISPW's agile IDE for the efficiency it brings to the application portfolio. An agile IDE is exactly what the hybrid zEnterprise world needs going forward.

Virtualized Storage Comes to the zEnterprise/zBX

March 27, 2012

Huh? Mainframe storage has been virtualized for decades. In a presentation at the latest SHARE gathering, however, Dave Lytle from Brocade and Bill Smith from Hitachi Data Systems gave a joint presentation about bringing virtual storage to the z.

In the presentation they explained how the Brocade DCX 8510 Backbone switch combined with the Hitachi Virtual Storage Platform (VSP) provides an alternative for mainframe-attached storage environments. They aren't suggesting you replace the workhorse DS8000 mainframe storage but, rather, augment it.

They call for an open virtualized storage infrastructure that makes it possible to deploy lower cost storage devices in conjunction with automated data tiering to lower the overall total cost of storage. The cost savings result from shifting more of the storage to slower but less expensive open systems storage through the Hitachi VSP. When a piece of data needs the faster, more costly primary z storage, the automation brings it back. Without the automation, this would be a slow, error-prone operation that almost nobody would bother with.
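The tiering policy Lytle and Smith describe can be sketched in a few lines. This is an illustration only, not vendor code: the tier names, the 30-day demotion window, and the function names are all assumptions made for the sketch, standing in for whatever policy knobs the actual tiering software exposes.

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch of automated data tiering: data left idle migrates
# to cheaper open-systems storage; a new access promotes it back to the
# fast primary z tier. All names and thresholds here are illustrative.

FAST_TIER = "primary-z"      # e.g., DS8000-class mainframe storage
CHEAP_TIER = "open-systems"  # e.g., SAS/SATA capacity behind the VSP

DEMOTE_AFTER_SECS = 30 * 24 * 3600  # policy knob: demote after 30 idle days

@dataclass
class DataSet:
    name: str
    tier: str = FAST_TIER
    last_access: float = field(default_factory=time.time)

def access(ds):
    """Record an access; promote cold data back to the fast tier."""
    ds.last_access = time.time()
    if ds.tier != FAST_TIER:
        ds.tier = FAST_TIER  # automated promotion, no manual migration
    return ds.tier

def sweep(datasets, now=None):
    """Background job: demote data sets idle longer than the policy window."""
    now = now or time.time()
    for ds in datasets:
        if ds.tier == FAST_TIER and now - ds.last_access > DEMOTE_AFTER_SECS:
            ds.tier = CHEAP_TIER
```

The point of the sketch is the round trip: a background sweep demotes cold data, and the very next access silently brings it back, which is exactly the slow, error-prone step nobody would perform by hand.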

The promise, say Lytle and Smith, is faster deployment of new applications and non-disruptive re-deployment of storage assets between mainframe and open system environments. This kind of dynamic tiering already has gained traction in the open systems world.  Even the big mainframe storage players, IBM and EMC, have products there. IBM has the automated System Storage Easy Tier offering and EMC brings its Fully Automated Storage Tiering (FAST) product.

In addition to the HDS VSP offering, the approach described by Lytle and Smith is built around a new 16 Gbps Fibre Channel switching infrastructure, the Brocade DCX 8510 Backbone. According to Brocade, it matches the industry's fastest and most scalable System z mainframes with the highest-performing and most reliable FICON infrastructure to maximize consolidation and virtualization of traditional mainframe and emerging Linux and Windows workloads, all while simplifying fabric management for FICON and Fibre Channel intermix environments. The Brocade DCX 8510 directors have been qualified for mainframe environments, allowing enterprises to fully exploit the capabilities of the IBM zEnterprise FICON infrastructure.

The reference to traditional mainframe and emerging Linux and Windows workloads sounds like the zEnterprise/zBX combination. Basically, you should be able to connect your lower cost, lower performing open system storage for use with your zBX Windows blades and manage it all through the z.  DancingDinosaur sees some definite cost, efficiency, and convenience advantages in that alone while providing one more reason for organizations to consider the zBX with Windows blades.

The Brocade and HDS products do boast some impressive capabilities. The Brocade FICON product offers a simultaneous send-and-receive 16 Gbps line rate on all chassis ports with no blocking, five-nines (99.999%) availability, and a 4x improvement in energy efficiency over competitive switches.

The HDS VSP is fully mainframe compatible. It provides frontend and backend directors, cache, and processors to handle time-sensitive processing tasks, and it supports multiple drive types (SSD, SAS, and SATA) to meet a variety of performance and cost requirements. The virtual storage directors and cache combine to deliver throughput of over 1 million IOPS with full FICON support.

The issue of connecting zEnterprise and open systems storage for the purposes of tiering is just ramping up.  Lytle and Smith report plans already underway for the next SHARE gathering (Aug. 5-10, 2012) in Anaheim, CA.  The MVS group at SHARE apparently is preparing to bring IBM, EMC, and HDS together to talk about tiering on the mainframe.

New zEnterprise Developments

October 17, 2011

After telegraphing the arrival of x86 blades running Windows for months (previously reported back in January on Dancingdinosaur here), Doris Conti, System z Marketing, made it official last week announcing Windows blades for the z running in the zBX. That was just one piece of a slew of zEnterprise-related announcements Conti made when briefing analysts the day prior to the official Oct. 12 announcement.

At the same time, IBM revealed a few more details about the zBX, which still seems to run in stealth mode even though IBM publicly announced it when it introduced the z196. To date, Conti reported, over 80 zBX units have shipped and over 400 blades have been sold to more than 60 clients. IBM says there is nothing proprietary in the zBX and its blades, so it is pricing the components to be competitive with standard industry blade center pricing.

Conti also announced a zEnterprise Starter Edition for the Cloud, which will provide a fast and lower cost way to jump start a private cloud based on the zEnterprise. The product delivers an entry-level Infrastructure-as-a-Service (IaaS) delivery capability for Linux on System z in conjunction with Tivoli Provisioning Manager.

Private clouds are emerging as the preferred flavor of cloud computing for enterprises of all sizes. Top management prefers private clouds for their perceived better security and for the apparent control they allow.  Many mainframe data centers consider that they have long delivered private cloud capabilities, and with the hybrid zEnterprise the idea of a multi-platform private cloud certainly is much closer to a reality. If you’ve been thinking of a private cloud, the zEnterprise Starter Edition for the Cloud, possibly combined with a z114 Solution Edition Cloud Starter system could be a good way to get there fast and at a discounted price.

IBM also confirmed to Dancingdinosaur that the existing System z Solution Edition discounts also would be made available for the z196 and z114 with the appropriate adjustments. Besides the Cloud Starter Solution Edition referenced above, there are Solution Editions for Enterprise Linux Consolidation, WebSphere, SAP, Application Development, and more. Now that Cognos is available on z/OS, don't be surprised to see a Solution Edition for BI with Cognos, running either on z/OS or Linux on z.

The real question is what new workloads will the zEnterprise enable in practice. Certainly a prime candidate is server consolidation. Linux server consolidation already is a proven workload. It remains to be seen which Windows applications get moved to zBX Windows blades. Enterprises will likely pick and choose among Windows workloads. The initial workloads will probably be those that will benefit from close proximity to DB2 data residing on the z196 or z114. The same could probably be said for the initial AIX workloads heading to Power blades.

IBM does not foresee massive consolidation of distributed servers on the zEnterprise; rather, companies will use tuned-for-task/fit-for-purpose analyses to select the best prospective workloads. The exception is distributed Linux workloads: a z196 can handle more than a thousand Linux virtual machines, and there already is an Enterprise Linux Solution Edition program for such massive consolidation on z.

The push for new workloads on the z seems to be gaining traction. In 2Q11 IBM reported SAP revenue up 54% year to year, installed IFL MIPS up 26% (total clients running Linux on z up 34%), total clients running WebSphere App Server up 21%, and Cognos revenue growth up 131%. Overall, the z has attracted 68 new IT shops since 3Q10.

IBM also announced some other goodies. Particularly attractive are APIs for the Unified Resource Manager that will enable the integration of the cross platform manager with the broader set of management tools while providing programmatic access to the same functions currently accessed through the hardware management console.
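What programmatic access to hardware-management-console functions might look like to a tool vendor can be sketched as a thin API client. To be clear, this is purely hypothetical: the endpoint paths, payload fields, and class name below are invented for illustration and do not reflect IBM's actual Unified Resource Manager API, whose real resource names and authentication flow would come from IBM's documentation.

```python
import json

class URMClient:
    """Hypothetical thin wrapper a management tool might place around a
    Unified Resource Manager-style programmatic interface, performing the
    same actions an operator would otherwise drive through the HMC GUI.
    All endpoint paths and payload shapes here are invented."""

    def __init__(self, base_url, session):
        self.base_url = base_url
        self.session = session  # any object exposing get(url) / post(url, body)

    def list_blades(self):
        # e.g., enumerate zBX blades, as the HMC console view does
        resp = self.session.get(self.base_url + "/zbx/blades")
        return json.loads(resp)

    def set_workload_priority(self, workload, priority):
        # programmatic equivalent of a console action on a workload policy
        body = json.dumps({"workload": workload, "priority": priority})
        return self.session.post(self.base_url + "/workloads/priority", body)
```

The design point is the one the announcement makes: once console functions are exposed as APIs, cross-platform management tools can script them instead of requiring a human at the HMC.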

z/VM 6.2 sports an improved Live Guest Relocation (LGR), the ability to move Linux virtual machines without disruption. It also adds the ability to cluster up to four z/VM instances as a single system image, allowing horizontal scaling across as many as four systems, even on mixed hardware generations. z/VM has long been arguably the best hypervisor out there, but it is only now starting to get respect. Observed the information services VP at Baldor, "z/VM's LGR is the very best z/VM software enhancement since 64-bit support became available."

Other announcements last week addressed a wide range of software issues from data analytics to storage management to Notes, which has emerged as a key social networking product for IBM. Dancingdinosaur will take those up another time.

Two Mainframe Career Futures

September 20, 2011

From a career standpoint, are these the best of times or the worst of times for mainframe people? If you read the front page of last Sunday’s Boston Globe, it looks terrible. If you listen to IBM, it couldn’t be better, and some job boards appear to back up IBM on this.

Boston Globe writer Katie Johnston started her piece this way: Brewster Smith specialized in mainframe systems for 35 years in the technology industry, recently converting his employer’s mainframe to servers that use newer programming languages. When Smith completed the project in July, his company laid him off because his skills no longer fit the new system. “It will take at least two years to train you to be productive,’’ he recalled his Concord, N.H., employer telling him. “Why do that when we can just hire someone off the street and they’ll be productive immediately because they know the languages.’’

Smith recently got a call from John Hancock Financial Services. The conversation ended quickly when the hiring manager found out he didn’t have experience with the current Microsoft Windows development framework.

IBM takes a decidedly different view. In a recent survey sponsored by the IBM Academic Initiative, it reports customers and business partners placed a high priority on the need for mainframe skills: Over 85% ranked mainframe application development skills as strongly required or required within their organization. These results point to an increasing need for organizations to groom the next generation of mainframe development skills.

As Johnston noted in her piece:  There is a dark side of tech, an industry in which skills and people can quickly become obsolete and some companies, believing high unemployment will give them the pick of ready-to-produce workers, don’t provide training. In fact, many companies demand candidates with skills that perfectly match their requirements.

There are very few jobs anymore where the skills you originally mastered will keep you securely employed for a decade or more. Almost every job skill in the computer industry is fleeting. Just think of all those Symbian programmers who recently had mastered a key mobile technology only to be reduced to near irrelevance by the rapid rise of smartphones with totally different operational attributes.

One high-level IT manager in a leading mainframe shop puts it this way: there are some professions (dentistry, the priesthood, psychology, law) that require their members to acquire a vast amount of knowledge and skill early, after which they can coast along for the next 40 years simply using that knowledge. Or maybe not: you can't go skiing at Vail or golfing in the Virgin Islands without running into educational seminars for doctors or lawyers.

Then there are other professions, like IT, where everything one learns is obsolete within ten years or sooner.  You have to keep learning new things just to keep abreast of the technology, notes the IT manager.

Both types of professions can be rewarding, he concludes, if you go into them with the proper attitude. And that attitude is that you have to be willing to learn, even if you have to do it on your own nickel and your own time.

Are there programmers out there, asks the IT manager, spending 10 hours a week expanding their skills but learning the wrong things?  Undoubtedly.  Good IT managers not only should encourage their staff to broaden their skills but guide them toward which skills will be most valuable going forward even if they are not given the budget to support it.

The hybrid zEnterprise provides a valuable opportunity for mainframers to expand their skills into Linux, Java, and soon even Windows. The hybrid mainframe can handle SOA and mobile technologies and play in the cloud. Start familiarizing yourself with these technologies.

Today, every mainframer has access to other means to gain leading edge skills. All they need is a smartphone in their pocket. Apple and Android provide rich SDKs to develop apps and marketplaces to distribute those apps. One mainframer leveraged his mainframe knowledge and rudimentary Java skills to write an iPhone app that sent a photo he took of a wiring mistake to the trouble ticket system. The wiring got fixed, the company streamlined a process, and he demonstrated a valuable leading edge skill. The lesson: both old and new IT dogs must continually learn new tricks.
