Archive for the ‘Uncategorized’ Category

Attract Young Techies to the Z

September 14, 2018

A decade ago DancingDinosaur attended a major IBM mainframe event and, looking around at the analysts milling about, noticed all the gray hair and balding heads, very few women, and, worse, hardly anyone who appeared to be under 40; not exactly a crowd that would excite young male computer geeks. At the IBM introduction of the Z it had become even worse: more gray or balding heads, mine included, and none of the few female Z analysts I knew under 40 were there at all.

Millions of young people eager to join the workforce (Image by © Reuters/CORBIS)

An IBM analyst relations person agreed, noting that she was under pressure from IBM to get some young techies to Z events. Sounded like Mission Impossible to me. But my thinking has changed in the last couple of weeks. A couple of discussions with 20-something techies suggested that Zowe has the potential to be a game changer where young developers are concerned.

DancingDinosaur covered Zowe two weeks ago here. It represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform.

Or, to put it another way, with Zowe, IBM and partners CA Technologies and Rocket Software are enabling users to access z/OS through a new open-source framework. Zowe, more than anything before it, brings together generations of systems that were never designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Says Sean Grady, a young (under 30) software engineer at Rocket Software: “Zowe to me is really cool, the first time I could have a sustained mainframe conversation with my peers.” Their first reactions were really cynical, he recalls, but Zowe changed that. “My peers know Linux tools really well,” he notes.

The mainframe is perceived as a separate thing, something his peers couldn’t touch, he added. But Linux is something they know really well, so through Zowe the mainframe gains tools they know and like. Suddenly, the mainframe is no longer a separate, alien world but a familiar place where they can do the kind of work they like to do, in a way they like to do it, using familiar tools.

And they are well paid, much better than they would be coding here-today-gone-tomorrow mobile apps for some startup. Grady reports his starting offers ran up to $85k, not bad for a guy just out of college. With a few years of experience now, you can bet he’s doing a lot better than that.

The point of Zowe is to enable any developer, but especially new developers who don’t know or care about the mainframe, to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe is older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications at mainframe shops desperately needing the new mission-critical applications for which their customers are clamoring. Already it appears ready to radically reduce the learning curve for the next generation.

Initial open source Zowe modules will include an extensible z/OS framework that provides new APIs and z/OS REST services to transform enterprise tools and DevOps processes that can incorporate new technology, languages, and workflows. It also will include a unifying workspace providing a browser-based desktop app container that can host both traditional and modern user experiences and is extensible via the latest web toolkits. The framework will also incorporate an interactive and scriptable command-line interface that enables new ways to integrate z/OS in cloud and distributed environments.
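To make “like any other cloud platform” concrete, here is a minimal sketch of driving z/OS through the kind of REST services Zowe exposes, submitting a batch job from ordinary Python. The host, credentials, and JCL are hypothetical placeholders, and the call assumes a reachable z/OSMF jobs endpoint of the sort Zowe builds on:

```python
# A sketch, not IBM's sample: submit a trivial batch job through the
# z/OSMF REST jobs interface that Zowe builds on. Host, credentials,
# and JCL below are hypothetical placeholders.
import requests

ZOSMF = "https://mainframe.example.com/zosmf"  # hypothetical z/OSMF host
AUTH = ("ibmuser", "secret")                   # hypothetical credentials

JCL = """//HELLOJOB JOB (ACCT),'HELLO',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IEFBR14
"""

resp = requests.put(
    f"{ZOSMF}/restjobs/jobs",
    data=JCL,
    headers={
        "Content-Type": "text/plain",   # inline JCL submission
        "X-CSRF-ZOSMF-HEADER": "true",  # z/OSMF expects this CSRF header
    },
    auth=AUTH,
    verify=False,  # demo only; verify TLS certificates in real use
)
resp.raise_for_status()
job = resp.json()
print(f"Submitted {job['jobname']} as {job['jobid']}; status: {job['status']}")
```

The point is not the specific call but that a developer never touches a green screen: jobs, data sets, and consoles become ordinary HTTP resources.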

These modules represent just the start. More will be developed over time, enabling development teams to manage and develop on the mainframe like any other cloud platform. Additionally, the modules reduce risk and cost by allowing teams to use familiar, industry-standard, open source tools that can accelerate mainframe integration into their enterprise DevOps initiatives. Just use Zowe to entice new mainframe talent.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com.

Can IBM find a place for Watson?

September 7, 2018

After beating two human Jeopardy champions over three consecutive nights in 2011, IBM’s Watson has been hard pressed to come up with a comparable winning streak. Initially IBM appeared to expect its largest customers to buy richly configured Power servers to run Watson on prem. When it didn’t get enough takers, the company moved Watson to the cloud, where companies could lease it for major knowledge-driven projects. When that didn’t catch on, IBM started to lease Watson’s capabilities by the drink, promising to solve problems in onesies and twosies.

Jeopardy champs lose to Watson

Today Watson is promising to speed AI success through IBM’s Watson Knowledge Catalog. As IBM puts it: IBM Watson Knowledge Catalog powers intelligent, self-service discovery of data, models, and more, activating them for artificial intelligence, machine learning, and deep learning. Access, curate, categorize, and share data, knowledge assets, and their relationships, wherever they reside.

DancingDinosaur has no doubt that Watson is stunning technology and has been rooting for its success since that first Jeopardy round seven years ago. Over that time, Watson and IBM have become a case study in how not to price, package, and market powerful yet expensive technology. The Watson Knowledge Catalog is yet another pricing and packaging experiment.

Based on the latest information online, Watson Knowledge Catalog is priced according to the number of provisioned catalogs and discovery connections. There are two plans available: Lite and Professional. The Lite plan allows 1 catalog and 5 free discovery connections while the Professional plan provides unlimited amounts of both. Huh? This statement begs for clarification, and there probably is a lot of information and fine print required to answer the numerous questions the description raises, but life is too short for DancingDinosaur to rummage around the Watson Knowledge Catalog site looking for answers. Doesn’t this seem like something Watson itself should be able to clarify with a single click?

But no, that would be too easy. Instead IBM takes the high road, which DancingDinosaur calls the education track. Notes Jay Limburn, Senior Technical Staff Member and IBM Offering Manager: there are two main challenges that might impede you from realizing the true value of your data and slow your journey to adopting artificial intelligence (AI). They are 1) inefficient data management and 2) finding the right tools for all data users.

Actually, the issues start even earlier. In attempting AI most established organizations start at a disadvantage, notes IBM. For example:

  • Most enterprises do not know what and where their data is
  • Data science and compliance teams are handicapped by the lack of data accessibility
  • Enterprises with legacy data are even more handicapped than digitally savvy startups
  • AI projects will expose problems with limited data and poor data quality; many will fail for that reason alone
  • The need to differentiate through monetization increases in importance with AI

These problems are not new. People have been whining about them since the most rudimentary data mining attempts decades ago. If there is a surprise, it is that they have not been resolved by now.

Or maybe they finally have been with the IBM Watson Knowledge Catalog. As IBM puts it, the company will deliver what promises to be the ultimate data catalog, one that actually illuminates data:

  • Knows what data your enterprise has
  • Knows where it resides
  • Knows where it came from
  • Knows what it means
  • Provides quick access to it
  • Ensures protection of use
  • Exploits machine learning for intelligence and automation
  • Enables data scientists, data engineers, stewards, and business analysts
  • Is embeddable everywhere for free, with premium features available in paid editions

OK, after seven years Watson may be poised to deliver, and it has little to do with Jeopardy and much to do with a rapidly growing data catalog market. According to a Research and Markets report, the data catalog market is expected to grow from $210 million in 2017 to $620 million by 2022. How many sales of the Professional version would get IBM a leading share?
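For perspective, those two figures imply roughly 24 percent compound annual growth, as a quick back-of-the-envelope check confirms:

```python
# Implied compound annual growth rate of the data catalog market,
# from the Research and Markets figures quoted above.
start, end, years = 210e6, 620e6, 5  # $210M in 2017 -> $620M in 2022
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~24.2% per year
```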

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Can Zowe Bring Young Developers to the Z?

August 31, 2018

Are you ever frustrated by the Z? As powerful as it is, the mainframe remains a difficult nut to crack, particularly for newcomers who have grown up with easier technologies. Even Linux on Z is not as simple or straightforward as on other platforms. This poses a problem for Z-based shops that are scrambling to replace retiring mainframers.

Shopping via smartphone (IBM – Jon Simon/Feature Photo Service)

Certainly other organizations, mainly mainframe ISVs like Compuware and Syncsort, have succeeded in extending the GUI deeper into the Z but that alone is not enough. It remains too difficult for newcomers to take their newly acquired computer talents and readily apply them to the mainframe. Maybe Zowe can change this.

And here’s how. Recent surveys show that flexibility, agility, and speed are key. Single platforms are out; multi-platforms and multi-clouds are in. IBM’s reply: let’s bring things together with the announcement of Zowe, pronounced like Joey but starting with a z. Zowe represents the first open source framework for z/OS. As such it provides solutions for development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Launched with partners CA Technologies and Rocket Software, along with the support of the Open Mainframe Project, the goal is to drive innovation for the community of next-generation mainframe developers and enable interoperability and scalability between products. Zowe promotes a faster team on-ramp to mainframe productivity, collaboration, knowledge sharing, and communication.

In short, IBM and partners are enabling users to access z/OS using a new open-source framework. Zowe, more than anything before it, brings together generations of systems that were never designed to handle global networks of sensors and devices. Now, decades after IBM brought Linux to the mainframe, IBM, CA, and Rocket Software are introducing Zowe, a new open-source software framework that bridges the divide between modern challenges like IoT and the mainframe.

Zowe has four components:

  1. Zowe APIs: z/OS has a set of Representational State Transfer (REST) operating system APIs. These are made available by the z/OS Management Facility (z/OSMF). Zowe uses these REST APIs to submit jobs, work with the Job Entry Subsystem (JES) queue, and manipulate data sets. Zowe Explorers are visual representations of these APIs that are wrapped in the Zowe web UI application. Zowe Explorers create an extensible z/OS framework that provides new z/OS REST services to enterprise tools and DevOps processes.
  2. Zowe API Mediation Layer: This layer has several key components: an API Gateway, built using Netflix Zuul and Spring Boot technology, that forwards API requests to the appropriate corresponding service through the micro-service endpoint UI; a REST API Catalog that publishes APIs and their associated documentation in a service catalog; and a Discovery Service, built on Eureka and Spring Boot technology, that acts as the central registration point, accepting announcements of REST services and providing a repository of active services.
  3. Zowe Web UI: Named zLUX, the web UI modernizes and simplifies working on the mainframe and allows the user to create modern applications. This is what will enable non-mainframers to work productively on the mainframe. The UI works with the underlying REST APIs for data, jobs, and subsystems, and presents the information in a full-screen mode compared to the command-line interface.
  4. Zowe Command Line Interface (CLI): Allows users to interact with z/OS from a variety of other platforms, such as cloud or distributed systems, submit jobs, issue Time Sharing Option (TSO) and z/OS console commands, integrate z/OS actions into scripts, and produce responses as JSON documents. With this extensible and scriptable interface, you can tie mainframes into the latest distributed DevOps pipelines and build in automation (see the sketch after this list).
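As a taste of the CLI component in practice, here is a minimal sketch of scripting Zowe from Python. It assumes the Zowe CLI is installed with a default z/OSMF profile configured; the exact command and flag spellings follow the CLI’s documented style but should be treated as assumptions:

```python
# A sketch of scripting the Zowe CLI from Python. Assumes the CLI is
# installed and a default z/OSMF profile exists; command and flag
# spellings follow the CLI's documented style but are assumptions here.
import json
import subprocess

def list_jobs(owner: str) -> list:
    """Return the jobs owned by `owner` as parsed JSON records."""
    result = subprocess.run(
        ["zowe", "zos-jobs", "list", "jobs",
         "--owner", owner,
         "--response-format-json"],  # machine-readable output for scripts
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["data"]

for job in list_jobs("IBMUSER"):
    print(job["jobname"], job["jobid"], job["status"])
```

Because the CLI returns JSON, the same pattern plugs straight into whatever distributed DevOps pipeline a team already runs.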

The point of all this is to enable any developer to manage, control, script, and develop on the mainframe like any other cloud platform. Additionally, Zowe allows teams to use the same familiar, industry-standard, open-source tools they already know to access mainframe resources and services.

The mainframe may be older than many of the programmers IBM hopes Zowe will attract. But Zowe opens new possibilities for next-generation applications and for mainframe shops desperately needing new mission-critical applications for which customers are clamoring. This should radically reduce the learning curve for the next generation while making experienced professionals more efficient. Start your free Zowe trial here. BTW, Zowe’s code will be made available under the open-source Eclipse Public License 2.0.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


Travelport and IBM launch industry AI travel platform

August 24, 2018

Uh oh. If you have been a little sloppy with travel expenses, it’s time to clean up your travel act before AI starts monitoring your reimbursed travel. IBM and Travelport are teaming up to offer the industry’s first AI-based travel platform to intelligently manage corporate travel spend while leveraging IBM Watson capabilities to unlock previously unavailable data insights.

As IBM explains it, the new travel platform will be delivered via the IBM Cloud and will exploit IBM Watson capabilities to intelligently track, manage, predict, and analyze travel costs, fundamentally changing how companies manage and optimize their travel programs. Typically, each work group submits its own travel expenses, and reconciliation and reimbursement can be handled by different groups.

With annual global business travel spend estimated to reach a record $1.2 trillion this year, as projected by the Global Business Travel Association, corporate travel managers need new ways to reduce costs. That requires consolidating and normalizing all the information. Currently, for businesses to get a full picture of travel patterns, a travel manager might have to sift through data silos from travel agencies, cards, expense systems, and suppliers for end-to-end visibility of spend and compliance across all travel subcategories. This, however, is usually undertaken from a historical view rather than in real time, which is one reason reimbursement can take so long. As an independent contractor, DancingDinosaur generally has to submit travel expenses at the end of a project and wait forever for payment.
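The consolidation-and-normalization step is mundane but decisive. As a purely illustrative sketch (not IBM’s implementation), here is what reconciling two hypothetical expense silos into one analyzable view looks like in Python:

```python
# An illustration (not IBM's implementation) of consolidating and
# normalizing expense records from two hypothetical silos into one
# view a travel manager could analyze.
import pandas as pd

# Hypothetical silo extracts: a travel agency feed and a card feed.
agency = pd.DataFrame({
    "traveler": ["j.doe", "a.smith"],
    "amount_usd": [512.40, 289.00],
    "category": ["AIR", "HOTEL"],
})
card = pd.DataFrame({
    "employee": ["j.doe"],
    "charge": [74.25],
    "type": ["meals"],
})

# Normalize the card feed to the agency feed's schema.
card = card.rename(columns={"employee": "traveler",
                            "charge": "amount_usd",
                            "type": "category"})
card["category"] = card["category"].str.upper()

combined = pd.concat([agency, card], ignore_index=True)
print(combined.groupby("traveler")["amount_usd"].sum())
```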

IBM continues: The new platform, dubbed Travel Manager, features advanced artificial intelligence and provides cognitive computing and predictive data analytics using what-if scenarios, while integrating travel and expense data to help travel management teams, procurement category managers, business units, finance, and human resource departments optimize their travel programs, control spend, and enhance the end-traveler experience. Maybe they will even squeeze independent contractors into the workflow.

The special sauce in all of this results from how IBM combines its data capabilities with Travelport, itself a travel commerce platform, to produce IBM Travel Manager as an AI platform that oversees corporate travel expenses. In the process, IBM Travel Manager gives users complete, unified access to previously siloed information, which, when combined with travel data from the Travelport global distribution system (GDS), can then be used to create real-time predictive analytics recommending how, say, adjustments in travel booking behavior patterns can positively impact a company’s travel budget.

Travelport is a heavyweight in the travel industry. It relies on technology to make the experience of buying and managing travel better. Through its travel commerce platform it provides distribution, technology, payment, and other capabilities for the $7 trillion global travel and tourism industry. The platform facilitates travel commerce by connecting the world’s leading travel providers with online and offline travel buyers in a proprietary B2B travel marketplace.

The company helps with all aspects of the travel supply chain from airline merchandising, hotel content and distribution, mobile commerce to B2B payments. Last year its platform processed over $83 billion of travel spend, helping its customers maximize the value of every trip.

IBM Travel Manager combines and normalizes data from diverse sources, allowing for more robust insights and benchmarking than other reporting solutions. It also taps AI to unlock previously unavailable insights from multiple internal and external data sources. The product is expected to be commercially available to customers through both IBM and Travelport.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

IBM Continues Cranking Up Blockchain

August 16, 2018


Somehow, between quantum computing, AI, and hybrid clouds, IBM is managing to squeeze in blockchain as an active, growing business. For instance, a previously unnamed collaborative effort between the world’s largest shipping company, Maersk, and IBM has now grown to 92 participants and been dubbed TradeLens.

IBM has 92 participants in the TradeLens blockchain network

DancingDinosaur has long considered blockchain a natural fit for the Z due to its zero-downtime reliability and high certified levels of security (EAL4+). The most recent models include IBM’s automated pervasive encryption. No more wasting time deciding what to encrypt; the Z just encrypts it all with minimal overhead penalty. Your applications and workloads won’t even notice, and compliance audits become a breeze.

TradeLens is emerging from its beta to accept early-adopter applications and announced a new custom contract service for executing complex shipping orders with fewer middlemen. “We have seen a lot of skeptics talk about the validity of blockchain solutions,” said Marie Wieck, IBM general manager and head of blockchain. “And I think with over 90 organizations and more than 150 million events captured on the system, we really are seeing the proof.”

The initiative now includes Germany-based Hamburg Sud, which Maersk bought last year for $4 billion, and U.S.-based Pacific International Lines, along with numerous customs authorities, cargo owners and freight forwarders. Collectively, the shipping companies account for more than 20% of the global supply chain market share, with 20 port and terminal operators in Singapore, the U.S., Holland, and more serving 235 marine gateways around the world.

TradeLens, in practice, gives users access to their own blockchain node, similar to the nodes on the bitcoin blockchain that let users send money without the need of banks. In the case of TradeLens, a shipper can cut out as many as five middlemen, even for simple queries such as identifying the location of a shipping container.
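TradeLens’ actual API is not public in this announcement, so the following is an entirely hypothetical sketch of what such a single-call, no-middlemen query could look like; every endpoint, credential, and field name here is invented for illustration:

```python
# Entirely hypothetical sketch: querying your own TradeLens-style node
# for a container's latest tracking event. Endpoint, token, and field
# names are invented for illustration only.
import requests

NODE = "https://tradelens-node.example.com/api"  # hypothetical endpoint
TOKEN = "bearer-token-here"                      # hypothetical credential

def locate_container(container_id: str) -> dict:
    """Fetch the most recent tracking event for a container."""
    resp = requests.get(
        f"{NODE}/containers/{container_id}/events",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"limit": 1, "sort": "desc"},  # newest event first
    )
    resp.raise_for_status()
    return resp.json()["events"][0]

event = locate_container("MSKU1234567")
print(event["location"], event["timestamp"])
```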

At stake is what Transparency Market Research expects will be a $32.9 billion global supply-chain software business by 2026. As far back as 2015, the World Trade Organization estimated that simplifying the global supply chain could reduce costs among users by as much as 17.5%, with developing nations expected to see as much as a 35% increase in exports as they leapfrog over legacy technology platforms.

The cooperative effort between Maersk and IBM still needs to make money. To do so, the two companies have shifted the business model from a stand-alone joint venture to one in which the intellectual property that comprises TradeLens is co-owned and jointly developed.

But the new cooperative structure could unnerve some potential customers. To offset concerns, the CEO of Maersk’s New Jersey-based TradeLens operation, Mike White, says a number of barriers have been put in place, including contractual restrictions on sharing data and technical barriers in the form of the independently managed blockchain nodes.

If successful, TradeLens might literally embody the common refrain among blockchain users that “all ships will rise” when they use a shared, distributed ledger. Facing decreasing global freight rates, Maersk last quarter became just the latest container shipper to cut profit forecasts.

Among competitors aiming to cut those costs and increase profits is the former head of blockchain at accounting firm Deloitte, who earlier this year announced he was raising $100 million to launch a supply chain platform using the ethereum blockchain. Similarly, blockchain startup Fr8 is preparing to raise $60 million via an initial coin offering to build its own blockchain logistics platform.

“The value proposition is for all ecosystem participants,” said White. “The ability to get better access to more real-time data, to have better visibility end-to-end, and to be able to connect one-to-many in a more efficient and effective way, makes the cost of getting that information lower, makes the ability to manage your own business better, and makes the ability to service your customers that much stronger.”

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


IBM AI Reference Architecture Promises a Fast Start

August 10, 2018

Maybe somebody in your organization has already fooled around with a PoC for an AI project. Maybe you already want to build it out and even put it into production. Great! According to IBM, by 2020 organizations across a wide array of industries that don’t deploy AI will be in trouble. So those folks already fooling around with an AI PoC will probably be just in time.

To help organizations pull the complicated pieces of AI together, IBM, with the help of IDC, put together its AI Infrastructure Reference Architecture. This AI reference architecture, as IBM explains, is intended to be used by data scientists and IT professionals who are defining, deploying, and integrating AI solutions into an organization. It describes an architecture that will support a promising proof of concept (PoC) or experimental application and sustain growth into production as a multitenant system that can continue to scale to serve a larger organization, while integrating into the organization’s existing IT infrastructure. If this sounds like you, check it out. The document runs short, less than 30 pages, and it is free.

In truth, AI, for all the wonderful things you’d like to do with it, is more a system vendor’s dream than yours. AI applications, and especially deep learning systems, which parse exponentially greater amounts of data, are extremely demanding and require powerful parallel processing capabilities. Standard CPUs, like those populating the racks of servers in your data center, cannot execute AI tasks efficiently enough. At some point, AI users will have to overhaul their infrastructure to deliver the required performance if they want to achieve their AI dreams and expectations.

Therefore, IDC recommends that businesses developing AI capabilities, or scaling existing ones, plan to hit this wall deliberately and in a controlled fashion. Do it knowingly and in full possession of the details needed to make the next infrastructure move. IDC also recommends doing it in close collaboration with a server vendor (guess who wants to be that vendor) that can guide them from early stage to advanced production to full exploitation of AI capabilities throughout the business.

IBM assumes everything is moving to AI as quickly as it can, but that may not be the case for you. AI workloads include applications based on machine learning and deep learning, using unstructured data and information as the fuel to drive results. Some businesses are well on their way to deploying AI workloads, others are experimenting, and a third group is still evaluating what AI applications can mean for their organization. At all three stages, the variables that, if addressed properly, together make up a well-working and business-advancing solution are numerous.

To get a handle on these variables, executives from IT and LOB management often form a special committee to actively consider their organization’s approach to AI. Nobody wants to invest in AI for the sake of AI; the vendors will get rich enough as it is. Also, there is no need to reinvent the wheel; many well-defined use cases exist that are applicable across industries. Many are already noted in the AI reference guide.

Here is a sampling:

  • Fraud analysis and investigation (banking, other industries)
  • Regulatory intelligence (multiple industries)
  • Automated threat intelligence and prevention systems (many industries)
  • IT automation, a sure winner (most industries)
  • Sales process recommendation and automation
  • Diagnosis and treatment (healthcare)
  • Quality management investigation and recommendation (manufacturing)
  • Supply and logistics (manufacturing)
  • Asset/fleet management, another sure winner (multiple industries)
  • Freight management (transportation)
  • Expert shopping/buying advisory or guide

Notes IDC: Many of these can be developed in-house, are available as commercial software, or can be had via SaaS in the cloud.

Whatever you think of AI, you can’t avoid it. AI will penetrate your company embedded in the new products and services you buy.

So where does IBM hope your AI effort ends up? On Power9 systems, hundreds of GPUs, and PowerAI. Are you surprised?

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

Compuware Acquisition Boosts Mainframe DevOps

August 3, 2018

The acquisition of XaTester, new enhancements, and a partnership with Parasoft move Compuware Topaz for Total Test toward leadership in the automated unit testing that has become essential for Agile and DevOps on the mainframe. Compuware clearly has quickened what was once a steady but languid pace of delivering new mainframe software. This comes on top of Topaz for Enterprise Data, announced just a few weeks ago here.

Especially for mainframe shops, unit testing may present the biggest obstacle to speedy delivery of new code; it must be not just automated but continuous. As such, it serves as the centerpiece of the entire agile downstream process, which also includes continuous integration and continuous delivery. Only by delivering continuous automated testing can a mainframe shop maintain the no-fail quality of service for which it is heralded. Continuous automated testing is essential for controlling business risk, especially given the increased complexity and pace of modern application delivery.

To put it another way: building and integrating code changes is certainly important. However, if the automated delivery process cannot identify how changes impact business risk or disrupt the end-user experience, then the increased frequency and speed of continuous integration and continuous delivery become more of a problem than an advantage.

To deliver on its vision of Topaz for Total Test as the de facto standard for automating mainframe unit testing across all major mainframe environments and programming languages, Compuware has:

  • Acquired XaTester from Xact Consulting A/S, enabling developers to quickly create unit tests for both batch and CICS-based programs written in COBOL, PL/I and Assembler
  • Enhanced Topaz for Total Test to provide automated unit testing for IMS batch and transactional applications. Testing for IMS is especially important given that newer developers often have little or no hands-on experience with IMS code. This presents a challenge since more than 95 percent of the top Fortune 1000 companies use IMS to process more than 50 billion transactions a day and manage 15 million gigabytes of critical business data. Fortunately, IBM continues to add new features to IMS that help adjust to the changing IT world. These enhancements complement Topaz for Total Test’s existing support for batch applications written in COBOL.
  • Partnered with Parasoft, a leading innovator in end-to-end test automation for software development. The first deliverable from the partnership is integration between Parasoft SOAtest and Topaz for Total Test. This integration enables developers working on mainframe applications to quickly and easily test API calls between mainframe and non-mainframe systems, an increasingly critical aspect of DevOps.

Topaz for Total Test transforms mainframe development by giving developers the same type of unit testing capabilities on the mainframe that distributed platform teams have become accustomed to on other platforms. Unit testing enables developers to find potential problems in their code as early as possible to more quickly and frequently deliver incremental changes in software functionality while more granularly documenting code for the benefit of other developers.
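For readers who have lived only on the distributed side, this is the familiar xUnit idea. A trivial sketch, in Python purely as an analogy to the COBOL, PL/I, and Assembler tests Topaz for Total Test automates:

```python
# A distributed-platform analogy for what Topaz for Total Test brings to
# COBOL, PL/I, and Assembler: small, automated checks that catch a defect
# the moment the logic changes. Python stands in purely for illustration.
import unittest

def apply_discount(total_cents: int, rate: float) -> int:
    """Business rule under test: discount, rounded down to whole cents."""
    return int(total_cents * (1 - rate))

class DiscountTest(unittest.TestCase):
    def test_ten_percent(self):
        self.assertEqual(apply_discount(1000, 0.10), 900)

    def test_zero_rate_is_identity(self):
        self.assertEqual(apply_discount(1234, 0.0), 1234)

if __name__ == "__main__":
    unittest.main()  # run on every commit in the CI pipeline
```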

DevOps also presents complications for the mainframe that stem from its reputation for slow, painstaking, methodical release cycles. DevOps is about making sure the way an application is deployed in production is the same way it was deployed in test and development.

According to IBM, writing in a piece titled DevOps for the mainframe, DevOps also includes the notion of applying software management to the scripts and processes used for the actual deployment, and of taking monitoring capabilities from operations into development and test to get an early understanding of how the system will actually perform.

As the IBM writers continue: In the z/OS environment, organizations are generally building only the changes, the deltas, to an application and deploying them into the environment. It is very common to find that some parts of an application have not been rebuilt in decades. Worse yet, there are generally few z/OS test environments, and those must be shared across application development teams. The tools also are rarely the same tools used by the distributed teams. These differences increase the difficulty of achieving an end-to-end DevOps process.

This is where Compuware comes in. Topaz for Total Test fundamentally transforms mainframe development by giving developers the same type of unit testing capabilities on the mainframe they’ve become accustomed to on other platforms, mainly x86.

The result for large enterprises, Compuware continues, is a unified DevOps toolchain that accelerates development across all platforms so a multi-platform shop can more effectively compete in today’s rapidly changing markets. “The new rules of the digital economy are putting pressure on our customers to achieve the utmost speed with the utmost quality,” said Luke Tuddenham, Vice President at CPT, a global IT consulting services firm with a significant testing practice.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

New Syncsort Tools Boost IBMi

July 25, 2018

Earlier this week Syncsort announced new additions to its family of products that address top-of-mind compliance challenges faced by IT leaders, especially at IBMi shops. Specifically, Syncsort’s IBMi security products can help IBMi shops comply with the EU’s General Data Protection Regulation (GDPR) and strengthen security with multi-factor authentication.

The new innovations in the Syncsort Assure products follow the recent acquisition of IBMi data privacy products from Townsend Security. The Alliance Encryption and Security Suite addresses the protection of sensitive information and regulatory compliance through multi-factor authentication, encryption, tokenization, secure file transfer, and system log collection.

Syncsort’s Cilasoft Compliance and Security Suite for IBMi and Syncsort’s Enforcive Enterprise Security Suite provide unique tools that can help organizations comply with regulatory requirements and address security auditing and control policies. New releases of both security suites deliver technology that can be used to help accelerate and maintain compliance with GDPR.

As the bad guys get more effective, multi-factor authentication is required by many compliance regulations, such as PCI DSS 3.2, the NYDFS Cybersecurity Regulation, Swift Alliance Access, and HIPAA. Multi-factor authentication strengthens login security by requiring something more than a password or passphrase, granting access only after two or more authentication factors have been verified.
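For the unfamiliar, the classic second factor is a time-based one-time password (TOTP, RFC 6238), the six-digit code an authenticator app generates. Here is a minimal sketch of the verification math; products like RAMi wrap this kind of check in policy, screens, and auditing:

```python
# A minimal sketch of the "something more than a password" factor: a
# time-based one-time password (TOTP, RFC 6238). Illustrative only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

code = totp("JBSWY3DPEHPK3PXP")  # shared secret provisioned to the app
print(f"Current TOTP: {code}")
# Server-side check: constant-time compare of the user's entry.
assert hmac.compare_digest(code, totp("JBSWY3DPEHPK3PXP"))
```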

To help organizations fulfill regulatory requirements and improve the security of their IBMi systems and applications, Syncsort has delivered the new, RSA-certified Cilasoft Reinforced Authentication Manager for IBMi (RAMi). RAMi’s rules engine facilitates the setup of multi-factor authentication screens for users or situations that require them, based on specific criteria. RAMi’s authentication features also enable self-service user profile re-enablement and password changes, and support the four eyes principle of supervised changes to sensitive data, which requires that any requested action be approved by at least two people.

Syncsort expects 30% of its revenue to come from IBMi products. It also plans to integrate its Assure products with Ironstream to offer capacity management for IBMi.

In one sense, Syncsort is joining a handful of vendors, led by IBM, that continue to expand and enhance IBMi. DancingDinosaur was writing about this platform even before it became the AS/400, which celebrated its 30th birthday this week, notes Timothy Prickett Morgan, a leading analyst at The Next Platform. The predecessors to the AS/400 that your blogger wrote about back then were the System/36 and System/38, but they didn’t survive. In those 30-plus years, however, the IBMi platform has continued to evolve to meet customer needs, most recently by running on Power Systems, where it remains a viable business, Morgan noted.

The many rivals of the OS/400 platform and its follow-ons since the initial launch of the AS/400 are now gone. You may recall a few of them: DEC’s VMS for the VAX and Alpha systems, Hewlett Packard’s MPE for the HP 3000, HP-UX for the HP 9000s, and Sun Microsystems’ Solaris for the Sparc systems. DancingDinosaur once tried to cheerlead an effort to port Solaris/Sparc to the mainframe, but IBM didn’t buy in.

Among all of these and other platforms, IBMi is still out there, with probably around 125,000 unique customers and maybe between 250,000 and 300,000 systems, Morgan estimates.

He adds: As much as computing and automation has exploded on the scene since the first AS/400 arrived, one thing continues: Good old fashioned online transaction processing is something that every business still has to do, and even the biggest hyperscalers use traditional applications to keep the books and run the payroll.

The IBMi platform operates as more than an OLTP machine, evolving within the constantly changing environment of modern datacenters. This is a testament, Morgan believes, to the ingenuity of and continuing investment by IBM in its Power chips, Power Systems servers, and the IBMi and AIX operating systems. Yes, Linux came along two decades ago and has bolstered the Power platforms, but not to the extent that it bolstered the mainframe. The mainframe had much higher costs, and lower-priced Linux engines on mainframes exhibited a kind of elasticity of demand that IBM wishes it could get for IBMi and z/OS. Morgan is right about a lot, but DancingDinosaur still wishes IBM had backed Solaris/Sparc on the z alongside Linux. Oh well.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

FlashSystem 9100 Includes NVMe and Spectrum Software

July 20, 2018

The new IBM FlashSystem 9100 comes with all the bells and whistles included, especially NVMe and Spectrum software. On the software side, IBM includes its full suite of software-defined capabilities for your data, on premises and across public and private clouds. It also aims to modernize your infrastructure with new capabilities for private and hybrid clouds and to optimize operations.

FlashSystem 9100 with new capabilities built-in end-to-end

It also includes AI-assisted, next-generation technology for multi-cloud environments. This should allow you to optimize business-critical workloads and prepare your technology infrastructure for the era of multi-cloud digitized business now emerging.

The IT infrastructure market is changing so quickly and so radically that technology still under consideration a short while ago can no longer make the short list. DancingDinosaur, for example, won’t even attempt an ROI analysis of hard disk for primary storage. Short of straight-out falsification, the numbers couldn’t work.

The driver behind this, besides the advances in technology price/performance and what seems like a return to Moore’s Law levels of gains, is the success of the big hyperscalers, who are able to sustain amazing price and performance levels. DancingDinosaur readers are no hyperscalers, but they are capitalizing on hyperscaler gains in the cloud, and they can emulate hyperscaler strategies in their data centers wherever possible.

IBM puts it a little more conventionally: As more and more organizations move to a multi-cloud strategy, they develop more data-driven needs, such as artificial intelligence (AI), machine learning (ML), and containers, it writes. All of these needs require a storage solution powerful enough to address them while being built on proven technology and supporting both existing and evolving data centers. IBM’s response is the expansion of its FlashSystem family to include the new 9100 NVMe end-to-end solution, while piling on the software.

Aside from being an all-NVMe storage solution, IBM is leveraging several of its technologies, such as IBM Spectrum Virtualize and IBM FlashCore, as well as software from IBM’s Spectrum family. This combination of software and technology helps the 9100 store up to 2PB of data in a 2U space (32PB in a larger rack). FlashCore also enables consistent microsecond latency, with IBM quoting performance of 2.5 million IOPS, 34GB/s, and 100μs latency for a single 2U array. For storage, the FlashSystem 9100 uses FlashCore modules with an NVMe interface. These 2.5” drives come in 4.8TB, 9.6TB, and 19.2TB capacities with up to 5:1 compression, leverage 64-layer 3D TLC NAND, and can be configured with as few as four drives per system. You might not be a hyperscaler, but this is the kind of stuff you need if you hope to emulate one.
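The density claim is easy to sanity-check with simple arithmetic. A quick sketch, where the 24-slot count for a 2U enclosure is an assumption while the drive capacity and 5:1 compression figures come from the announcement:

```python
# Sanity-checking IBM's density claim with simple arithmetic. The
# 24-drive count per 2U enclosure is an assumption; the capacities and
# 5:1 compression figure come from the announcement.
drives_per_2u = 24            # assumed NVMe slots in one 2U enclosure
max_drive_tb = 19.2           # largest FlashCore module
compression = 5.0             # up to 5:1 data reduction

raw_tb = drives_per_2u * max_drive_tb
effective_pb = raw_tb * compression / 1000
print(f"Raw: {raw_tb:.0f} TB, effective: {effective_pb:.1f} PB per 2U")
# ~461 TB raw, ~2.3 PB effective: roughly consistent with "up to 2PB in 2U"
```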

To do this, IBM packs in the goodies. For starters, the system is NVMe-accelerated and multi-cloud enabled, and it goes beyond the usual flash array: an NVMe-accelerated enterprise flash array, 100% NVMe end-to-end, including NVMe IBM FlashCore modules and NVMe industry-standard SSDs. It also supports physical, virtual, and Docker environments.

In addition, the system includes IBM Storage Insights for AI-empowered predictive analytics, storage resource management, and support delivered over the cloud. Also, it offers Spectrum Storage Software for array management, data reuse, modern data protection, disaster recovery, and containerization (how it handles Docker). Plus, IBM adds:

  • IBM Spectrum Virtualize
  • IBM Spectrum Copy Data Management
  • IBM Spectrum Protect Plus
  • IBM Spectrum Virtualize for Public Cloud
  • IBM Spectrum Connect
  • FlashSystem 9100 Multi-Cloud Solutions

And just in case you think you are getting ahead of yourself, IBM is adding what it calls blueprints. As IBM explains, the blueprints take the form of three pre-validated, cloud-focused solution plans.

  1. Data Reuse, Protection and Efficiency solution leverages the capabilities of IBM Spectrum Protect Plus and IBM Spectrum Copy Data Management (CDM) to provide enhanced data protection features for virtual applications with powerful data copy management and reuse functionality both on premises and in the cloud.
  2. Business Continuity and Data Reuse solution leverages IBM Spectrum Virtualize for Public Cloud to extend data protection and disaster recovery capabilities into the IBM Cloud, as well as all the copy management and data reuse features of IBM Spectrum CDM.
  3. Private Cloud Flexibility and Data Protection solution enables simplified deployment of private clouds, including the technology needed to implement container environments, and all of the capabilities of IBM Spectrum CDM to manage copy sprawl and provide data protection for containerized applications.

The blueprints may be little more than an IBM shopping list that leaves you as confused as before, and a little poorer. Still, the FlashSystem 9100, along with all of IBM’s storage solutions, comes with Storage Insights, the company’s enterprise AI-based predictive analytics, storage resource management, and support platform delivered over the cloud. If you try any blueprint, let me know how it works, anonymously of course.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.


Compuware Expedites DevOps on Z

July 13, 2018

Compuware continues its quarterly introduction of new capabilities for the mainframe, a process that has been going on for several years now. The latest advance, Topaz for Enterprise Data, promises to expedite the way DevOps teams access the data they need while reducing complexity, labor, and risk through extraction, masking, and visualization of mainframe data. The result: the ability to leverage all available data sources to deliver high-value apps and analytics fast.

Topaz for Enterprise Data expedites data access for DevOps

The days when mainframe shops could take a methodical and deliberate, painstakingly slow, approach to accessing enterprise data have long passed. Your DevOps teams need to dig the value out of that data and put it into the hands of managers and LOB teams fast, in hours, maybe even minutes, so they can jump on even the most fleeting opportunities.

Fast, streamlined access to high-value data has become an urgent concern as businesses seek competitive advantages in a digital economy while fulfilling increasingly stringent compliance requirements. Topaz for Enterprise Data enables developers, QA staff, operations teams, and data scientists at all skill and experience levels to ensure they have immediate, secure access to the data they need, when they need it, in any format required.

It starts with data masking, which in just the last few months has become a critical concern with the rollout of GDPR across the EU. GDPR grants considerable protections and options to the people whose data your systems have been collecting. Now you need to protect personally identifiable information (PII) and comply with regulatory mandates like GDPR and whatever similar regs come along here.

Regs like these don’t apply just to your primary transaction data. You need data masking with all your data, especially when large, diverse datasets of high business value residing on the mainframe contain sensitive business or personal information.

This isn’t going to go away anytime soon, so large enterprises must start transferring responsibility for the stewardship of this data to the next generation of DevOps folks who will be stuck with it. You can bet somebody will step forward and say, “you have to change every instance of my data that contains this or that.” Even the most expensive lawyers will not be able to blunt such requests. Better to have the tools in place to respond quickly and easily.
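What does masking that still preserves data relationships mean mechanically? One common approach, sketched here with a hypothetical key and purely as an illustration of the concept Topaz for Enterprise Data productizes for mainframe formats, is deterministic masking: the same input always yields the same masked value, so join keys survive while the PII does not:

```python
# A sketch of deterministic masking: the same input always maps to the
# same masked value, so relationships across datasets survive, but the
# original PII cannot be read back. The key is a hypothetical secret.
import hashlib
import hmac

MASKING_KEY = b"rotate-me-outside-source-control"  # hypothetical secret

def mask_field(value: str, digits: int = 9) -> str:
    """Deterministically replace a sensitive value with fixed-length digits."""
    mac = hmac.new(MASKING_KEY, value.encode(), hashlib.sha256).hexdigest()
    return str(int(mac, 16) % 10 ** digits).zfill(digits)

# The same SSN masks identically everywhere, preserving join keys.
print(mask_field("123-45-6789"))  # same 9-digit surrogate every run per key
print(mask_field("123-45-6789") == mask_field("123-45-6789"))  # True
```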

The newest tool, according to Compuware, is Topaz for Enterprise Data. It will enable even a mainframe-inexperienced DevOps team to:

  • Readily understand relationships between data even when they lack direct familiarity with specific data types or applications, to ensure data integrity and resulting code quality.
  • Quickly generate data for testing, training, or business analytics purposes that properly and accurately represents actual production data.
  • Ensure that any sensitive business or personal data extracted from production is properly masked for privacy and compliance purposes, while preserving essential data relationships and characteristics.
  • Convert file types as required.

Topaz users can access all these capabilities from within Topaz’s familiar Eclipse development environment, eliminating the need to learn yet another new and complicated tool.

Those who experience it apparently like what they find. Noted Lynn Farley, Manager of Data Management at TCF Bank: “Testing with production-like obfuscated data helps us develop and deliver better quality applications, as well as remain compliant with data privacy requirements, and Topaz provides our developers with a way to implement data privacy rules to mask multiple data types across platforms and with consistent results.”

Rich Ptak, principal of IT analyst firm Ptak Associates similarly observed: “Leveraging a modern interface for fast, simple access to data for testing and other purposes is critical to digital agility,” adding it “resolves the long-standing challenge of rapidly getting value from the reams of data in disparate sources and formats that are critical to DevOps and continuous improvement.”

“The wealth of data that should give large enterprises a major competitive advantage in the digital economy often instead becomes a hindrance due to the complexity of sourcing across platforms, databases, and formats,” said Chris O’Malley, CEO of Compuware. As DancingDinosaur sees it, by removing such obstacles Compuware reduces the friction between enterprise data and business advantage.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his work at technologywriter.com and here.

