Posts Tagged ‘CICS’

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads.  In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, this should start to add up, if it hasn’t already.


Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ, and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions identified as originating from a recognized Public Cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, and IBM Bluemix.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.

So how much will you save? zWPC reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first eligible billing under this program starts Dec. 1, 2016.

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years, with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode.  The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.
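To make the mechanics concrete, here is a toy sketch of the arithmetic in Java (illustrative only; the actual SCRT calculation and the zWPC fine print are more involved). Each hour's billable figure is the trailing 4-hour rolling average of MSU consumption, zWPC removes 60 percent of the eligible cloud MSUs from that hourly value, and the monthly charge keys off the peak.

    public class ZwpcSketch {
        // Trailing 4-hour average ending at hour i (window shrinks at the start).
        static double fourHourAvg(double[] msu, int i) {
            int from = Math.max(0, i - 3);
            double sum = 0;
            for (int j = from; j <= i; j++) sum += msu[j];
            return sum / (i - from + 1);
        }

        public static void main(String[] args) {
            // Hypothetical hourly MSU consumption: total and the cloud-eligible share.
            double[] total = {400, 420, 500, 610, 580, 540};
            double[] cloud = { 50,  60, 120, 200, 180, 150};

            double peak = 0, adjustedPeak = 0;
            for (int i = 0; i < total.length; i++) {
                double hour = fourHourAvg(total, i);
                // zWPC: remove 60% of the eligible cloud MSUs from the hourly value.
                double adjusted = hour - 0.6 * fourHourAvg(cloud, i);
                peak = Math.max(peak, hour);
                adjustedPeak = Math.max(adjustedPeak, adjusted);
            }
            // Sub-Capacity charges key off the peak rolling average for the month.
            System.out.printf("peak 4HRA: %.1f MSU, zWPC-adjusted: %.1f MSU%n",
                    peak, adjustedPeak);
        }
    }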

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, the variable WLC (VWLC) along with the Advanced (AWLC) and Entry (EWLC) variants, align with most of the z machines introduced over the past couple of years.  The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques.  Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount.  Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly.  BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini-industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM to Acquire EZSource to Bolster IBM z System for Digital Transformation

June 9, 2016

Over the past two years you have been reading in DancingDinosaur about the new tricks that your z System can do—real-time analytics, IoT, blockchain, and more. This is part of the digital transformation that is blowing through enterprises everywhere. EZSource facilitates and simplifies how you can play in these new areas. See the IBM announcement here.


EZSource Dashboard, Credit: EZSource

EZSource expedites digital transformations by unlocking core business logic and apps. Specifically, it will pinpoint your valuable mainframe code assets in preparation for leveraging them through a hybrid cloud strategy. In the process it will enable understanding of business-critical assets in preparation for deployment of a z-centered hybrid cloud. This also enables enterprise DevOps, which is necessary to keep up with the pace of change overtaking existing business processes.

Specifically, this can entail the need to:

  • Identify API candidates to play in the API economy
  • Embrace micro services to deliver versatile apps fast
  • Identify code quality concerns, including dead code, to improve reliability and maintainability
  • Mitigate risk of change through understanding code, data and schedule interdependencies
  • Aid in the sizing of a change effort
  • Automate documentation to improve understanding
  • Reduce learning curve as new people are on-boarded
  • Add application understanding to DevOps lifecycle information to identify opportunities for work optimization

Managers of z data centers often shy away from modifying aging business-critical applications for fear of breaking something; if it ain’t broke, don’t fix it is the mantra. They also are rationing the use of their few remaining experienced z veterans with the domain expertise and deep knowledge of what often turns out to be quite esoteric code.  This is further aggravated by poorly documented mainframe code. The way to mitigate the risk of change is through understanding code, data, and interdependencies. EZSource can handle this visually and with ease; you no longer need to be an experienced z code expert.

So what do you have to do to get on the digital transformation bandwagon? Start by identifying your mainframe assets that are most often accessed. Most of them will be what the mobile apps are calling, usually a CICS routine or two or three. Then expose these business-critical services through APIs and micro-services. This may require re-writing parts of them in a platform-agnostic language, such as Java, to work within the context of a hybrid cloud. As noted just above, EZSource can help with much of this too.
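To picture the target state, here is a minimal sketch of the kind of Java wrapper that exposes an existing routine as a REST micro-service. The JAX-RS annotations are standard; the resource path and the CicsAccountService stub are hypothetical, standing in for whatever connector (CICS Transaction Gateway, z/OS Connect, or the like) a shop actually uses.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Hypothetical REST facade over an existing mainframe routine.
    @Path("/accounts")
    public class AccountResource {

        @GET
        @Path("/{id}/balance")
        @Produces(MediaType.APPLICATION_JSON)
        public String getBalance(@PathParam("id") String accountId) {
            // Delegate to the business-critical backend service.
            double balance = CicsAccountService.lookupBalance(accountId);
            return String.format("{\"account\":\"%s\",\"balance\":%.2f}",
                    accountId, balance);
        }
    }

    // Stub standing in for the real CICS connector call (hypothetical).
    class CicsAccountService {
        static double lookupBalance(String accountId) {
            return 42.00; // a real implementation would drive the CICS program
        }
    }

Once a facade like this is in place, any mobile or cloud client can consume the service with a plain HTTP GET, leaving the CICS routine itself untouched.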

In short, EZSource performs app discovery, which facilitates code quality improvement. It helps clean up code. It also applies analytics to DevOps, in effect enabling Cognitive DevOps, which makes sense in the dynamic hybrid cloud. The result: you focus only on the relevant code and, of that, what is particularly problematic.

The goal is to increase competitiveness and business innovation through digital forms of engagement; the engagement currently being fueled by mobile, social, analytic, and cognitive computing in a hybrid cloud environment. The expectation is that you will be able to tap the accumulated software assets for insights while modernizing business critical applications already resident on the z. IBM contends that this is the fastest and most cost effective way to drive new value and agility and DancingDinosaur agrees.

Is it worth it?  Most DancingDinosaur readers probably already believe that the mainframe and its data and software assets sit smack at the center of a digital enterprise. (Just a glimpse of the growth of monthly software peak workload charges should confirm that). It makes no sense not to leverage this complex resource to the max. EZSource, with its smart code analytics and visual capabilities, can save thousands of hours of work, avoid mistakes, and speed the delivery of the kind of slick new hybrid cloud apps that customers are demanding.  EZSource is primarily a Linux, Windows, and Java tool with only a few pieces residing on the z to handle necessary z-specific connectivity.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

BMC MainView for Java Manages Critical Digital Business

May 16, 2016

A large European financial services firm increasingly handles critical transaction functions with Java running through CICS and WebSphere.  As the firm looks forward, its managers see Java playing a bigger and more critical role in its core business as it shifts more of its business functionality to agile programming in Java. This firm is not thinking about ever abandoning its workhorse COBOL code, but all new work is being directed to Java.


With that in mind, BMC last week announced MainView for Java Environments, part of BMC’s MainView integrated systems management suite of tools that provides insight into how Java is consuming resources and affecting application performance on the z System.  It is no surprise, therefore, that the firm became an early beta user for MainView for Java Environments.

In a recent BMC survey, 93% of mainframe organizations said Java usage is growing or steady, and Java is the language of choice for writing new or rewriting existing mainframe applications. BMC MainView for Java Environments provides insight into Java resource usage and how it impacts other workloads and applications. For example, it automatically discovers all the Java Virtual Machines (JVMs) across z/OS. That alone can help with identifying performance problems in an effort to find and fix problems fast.

Java is the key to both performance and cost savings by running on zIIP assist processors. Java workloads, however, can affect performance and availability on the mainframe, as they consume system resources without regard for the needs of other applications or services, which is another reason why zIIP offload is essential. Also, an integrated management approach gives IT operations a holistic view of the environment to quickly and easily discover Java Virtual Machines (JVMs) and to manage the effect of their resource consumption on application performance.
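BMC has not published MainView’s internals, but the raw material any JVM monitor starts from is exposed through Java’s standard management API. Here is a minimal sketch (standard java.lang.management calls, nothing BMC-specific) of a JVM reporting its own heap, thread, and uptime counters:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;
    import java.lang.management.RuntimeMXBean;
    import java.lang.management.ThreadMXBean;

    public class JvmSnapshot {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            RuntimeMXBean rt = ManagementFactory.getRuntimeMXBean();

            // Heap pressure is the first thing a monitor watches.
            MemoryUsage heap = mem.getHeapMemoryUsage();
            System.out.printf("heap used %d MB of %d MB max%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20);
            // Thread counts hint at runaway workloads.
            System.out.printf("live threads: %d (peak %d)%n",
                    threads.getThreadCount(), threads.getPeakThreadCount());
            System.out.printf("uptime: %d s%n", rt.getUptime() / 1000);
        }
    }

A product like MainView layers cross-JVM discovery, history, and correlation with other z/OS workloads on top of telemetry of this sort.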

Java was the first object oriented programming language DancingDinosaur tried.  Never got good enough to try it on real production work, but here’s what made it appealing: fully object oriented, produces truly portable write-once, run-anywhere code (mainly because it compiles to Java virtual machine bytecode), and has automatic garbage collection. For a run-of-the-mill programmer, Java was a joy compared to C or, ugh, COBOL. Some of the new languages becoming popular today, the ones driving mobile and cloud and analytics apps, look even easier, but DancingDinosaur would feel too embarrassed to sit in a programming class with twenty-somethings the age of his daughters.

Java usage today, according to the BMC survey, is growing or steady, and Java has become the language of choice for writing new or rewriting existing mainframe applications. The only drawback may be that Java workloads can affect performance and resource availability on the mainframe, as JVMs consume system resources oblivious to the needs of other applications or services and to the cost of uncontrolled resource consumption, which is what unrestrained Java produces. An integrated management approach that allows for a holistic view of the environment can quickly and easily discover JVMs and constrain the effects of their resource consumption on application performance, offsetting that drawback.

Explained Tim Grieser, program vice president at IDC’s Enterprise System Management Software: “Since Java manages its own resources it can consume excessive amounts of processor time and memory resources leading to performance or availability problems if not proactively managed.” The key being proactively managed.  BMC’s MainView for Java Environments promises exactly that kind of proactive management by monitoring z/OS Java runtime environments and providing a consolidated view of all resources being consumed. This will enable system admins and operators to identify and manage performance issues before they impact end users.

“Java on the mainframe is being used to develop and deploy new applications faster and more economically to meet dynamically changing digital business needs and to take advantage of widely available programming skills,” IDC’s Grieser continued. Something like BMC’s MainView for Java Environments can be used to constrain Java. IBM’s Omegamon can fulfill a similar function.

According to the beta test manager at the financial firm, BMC’s MainView for Java Environments unlocks Java’s potential on the mainframe, vital in a changing application and systems environment, as part of an integrated performance management solution that discovers and monitors JVMs. It provides a single graphical console that enables you to quickly understand a Java application’s impact on resources and its effect on the performance of other applications and transactions. The solution promises to improve application performance and ensure availability while reducing Mean Time to Repair (MTTR) and lowering Monthly License Charges (MLC) by monitoring zIIP offloading, which is the key to both performance and cost management.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer and occasional wanna-be programmer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Play the Cloud-Mobile App Dev Game with z/OS Client Web Enablement

April 15, 2016

Is your z team feeling a little nervous that it is missing an important new game? Are business managers bugging you about running slick cloud and mobile applications through the z? Worse, are they turning to third-party contractors to build apps that will try to connect your z to the cloud and mobile world? If so, it is time to take a close look at IBM’s z/OS Client Web Enablement Toolkit.


Accessing backend systems through a mobile device

If you’re a z shop running Linux on z or a LinuxONE shop you don’t need z/OS web enablement. The issue only comes up when you need to connect z/OS applications to cloud, web, and mobile apps. IBM has been talking up the z/OS Client Web Enablement Toolkit since early this year. Prior to the availability of the toolkit, native z/OS applications had few or no easy options for participating as a web services client.

You undoubtedly know the z in its role as a no-fail transaction workhorse. More recently you’ve watched as it learned new tricks like managing big data and big data analytics through IBM’s own tools and, most recently, Spark. The z absorbed the services wave with SOA and turned CICS into a handler for web transactions. With Linux it learned an entirely new way to relate to the broader distributed world. The z has rolled with all the changes and generally come out ahead.

Now the next change for z data centers has arrived. This is the cloud/web-mobile-analytics execution environment that seemingly is taking over the known world. It almost seems like nobody wants a straight DB2 CICS transaction without a slew of other devices getting involved, usually as clients. Now everything is HTTP REST calls to handle x86 clients and JSON payloads, along with a slew of even newer scripting languages. Heard about Python and Ruby? And they aren’t even the latest.  The problem: there was no easy way to perform HTTP REST calls or handle JSON parsing on z/OS. This results from the utter lack of native JSON services built into z/OS, according to Steve Warren, IBM’s z/OS Client Web Enablement guru.

Starting, however, with z/OS V2.2 and now available in z/OS V2.1 via a couple of service updates,  Warren reports, the new z/OS Client Web Enablement Toolkit changes the way a z/OS-based data center can think about z/OS applications communicating with another web server. As he explains it, the toolkit provides an easy-to-use, lightweight solution for applications looking to easily participate as a client, in a client/server web application. Isn’t that what all the kids are doing with Bluemix? So why not with the z and z/OS?

Specifically, the z/OS Toolkit provides a built-in protocol enabler using interfaces similar in nature to other industry-standard APIs along with a z/OS JSON parser to parse JSON text coming from any source and the ability to build new or add to existing JSON text, according to Warren.  Suddenly, it puts z/OS shops smack in the middle of this hot new game.

While almost all environments on z/OS can take advantage of these new services, Warren adds, traditional z/OS programs running in a native environment (apart from a z/OS UNIX or JVM environment) stand to benefit the most. Before the toolkit, native z/OS applications, as noted above, had few or no easy options for participating as a web services client. Now they do.

Programs running as a batch job, a started procedure, or in almost any address space on a z/OS system have APIs they can utilize in a similar manner to any standard z/OS APIs provided by the OS. Programs invoke these APIs in the programming language of their choice. Among z languages, C/C++, COBOL, PL/I, and Assembler are fully supported, and the toolkit provides samples for C/C++, COBOL, PL/I initially. Linux on z and LinuxONE shops already can do this.
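For programs already in a JVM environment the client side of that conversation is old hat, which is why native programs stand to gain the most. As a rough illustration (standard Java SE APIs; the endpoint URL is made up), the connect-request-receive flow below is the same sequence the toolkit’s HTTP enabler services (HWTH*) now hand to COBOL, PL/I, C/C++, and Assembler, with its JSON parser services (HWTJ*) playing the parsing role at the end.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestClientSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical REST endpoint; substitute a real web service.
            URL url = new URL("https://api.example.com/quotes/IBM");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/json");

            int status = conn.getResponseCode();
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            conn.disconnect();

            // A real client would hand the body to a JSON parser here,
            // the role the HWTJ* services play for native z/OS programs.
            System.out.println("HTTP " + status);
            System.out.println(body);
        }
    }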

Businesses with z data centers are being forced by the market to adopt web applications utilizing published web APIs that can be used by something as small as the watch you wear, noted Warren. As a result, the proliferation of web services applications in recent years has been staggering, and it’s not by coincidence. Representational state transfer (REST) applications are simple, use the ubiquitous HTTP protocol—which helps them to be platform-independent—and are easy to organize.  That’s what the young developers—the millennials—have been doing with Bluemix and other cloud-based development environments for their cloud, mobile, and web-based applications.  With the z/OS Client Web Enablement Toolkit now any z/OS shop can do the same. As IoT ramps up, expect more demand for these kinds of applications from a variety of new devices and APIs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

State of z System CICS in the Modern Enterprise

March 25, 2016

You should be very familiar with the figures describing the continued strength of mainframe computing in the enterprise today. Seventy percent of enterprise data resides on a mainframe, 71 percent of all Fortune 500 companies run their core businesses on the mainframe, and 92 of the top 100 banks rely on the mainframe to provide at-your-fingertips banking services to their customers (many via mobile).  CICS, according to IBM, handles 1.1 million transactions every second, every day. By comparison, Google handles a mere 59,421 searches every second.


CICS at IBM Interconnect 2015

H&W, a top mainframe ISV, recently released its State of CICS in the Modern Enterprise study. Find a copy of the study here.  For starters, it found that nearly two-thirds of respondents run 51-100% of their business-critical applications online through CICS. Within government, 32% of respondents reported running 75-100% of business-critical applications through CICS.

A different study suggests that CICS applications handle more than 30 billion transactions per day and process more than $1 trillion worth of business each week. Mainframe data also still drives information systems worldwide. Approximately 60 percent of organizations responding to a 2013 Arcati survey said they manage 40 to 100 percent of their enterprise data on the mainframe.

Integrating legacy systems is a strategy mainframe sites continue to adopt. In fact, 74 percent of respondents in that survey said specifically they are web-enabling CICS subsystems. However, as organizations pursue this strategy, challenges can include unlocking the data, keeping the applications and data available to users, and maintaining data integrity in an efficient and cost-effective manner. Nothing new for data center managers about this.

According to the H&W study, online CICS usage has gone up in the last three years, from 54% of respondents reporting running over half of their business-critical applications through CICS to 62% in 2015. Hope people will finally stop talking about the mainframe heading toward extinction.

CICS also has carved out a place on the web and with mobile. Sixty-five percent of respondents say at least some of their business-critical applications are available via PC, phone, tablet, and web-based interfaces, while 11% more reported plans to mobile- and web-enable their mainframe apps in the future. Thirteen percent reported no plans to do so. Government sector respondents were significantly more likely not to make their applications available for online access; so much for open government and transparency.

CICS availability raised no concerns, although a few respondents were concerned with performance. Based on the study results in 2012, some predicted that companies would be moving away from CICS by now. Those predictions, apparently, have not come to pass, at least not yet.

In fact, as far as the future of CICS goes, the technology seems to be facing a remarkably stable outlook for the next 3-5 years. The largest number of respondents, 37%, expected the number of CICS applications to remain the same in that period, while 34% said they would be decreasing. More encouragingly, 27% of respondents planned to increase their number of CICS applications accessible online. In the financial services segment, 38% planned to increase the number of online CICS applications while only 10% expected to decrease the number of online applications. Given the demands by banking customers for mobile apps, the increase in the number of CICS applications makes perfect sense.

The researchers concluded that CICS continues to play an important role for the majority of mainframe shops surveyed and an increasingly important role for a significant chunk of them.  The respondents also reported that, in general, they were satisfied with CICS performance even in the face of increasingly complex online workloads.

Mainframe CICS may see even more action going forward depending on what companies do with Internet of Things. As with mobile traffic, companies may turn to CICS to handle critical aspects of backend IoT activity, which has the potential to become quite large.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Exploiting the IBM z13 for Maximum Price/Performance Advantage

February 4, 2016

The z13 is the most powerful general purpose computer IBM has ever made. The key to capturing the maximum value from the z13, however, lies in how you plan, design, configure, and optimize your systems and software for everything from COBOL and Java to process parallelization and analytics. What you do in this regard will have significant impact on not only the price/performance you experience but on your success at achieving the business outcomes you are expecting.


IBM System z13

This really becomes a software configuration challenge. By tapping approximately 600 internal processors IBM already has optimized the hardware, input, output, memory, and networking/communications about as much as they can be. Your job is to optimize the software you are running, which will require working closely with your ISV.

The place to start is by leveraging the z13’s new compiler technology, parallelism, zIIP and assist processors. This will enable you to save significant money while boosting workload performance. You will literally be doing more for less.

Similarly, in the not too distant past Moore’s Law would virtually guarantee a 15-20% price/performance gain automatically just by taking a new machine out of the box and plugging it in. That’s no longer the case. Now you will have to partner with your ISV to exploit advanced software to maximize the hardware payback and continue the ride along the favorable Moore’s Law price/performance slope.

Then look at the latest COBOL V5.x and its compiler on the z13. Out of the box it is better optimized than previous compilers. In general, the strategic value of COBOL V5.x comes from migrating high CPU usage programs as quickly as possible, effectively saving organizations considerable money by running optimized code.

Some organizations report a 15% on average reduction of CPU time, which adds up to significant savings in monthly CPU charges. How significant? Up to $150k less on a $1 million bill, with some reporting even higher percentage reductions producing even greater savings. Just migrate to COBOL V5.2 (or at least V5.1) to achieve the savings. In general, staying on the software curve with the latest releases of the OS, languages, and compilers with applications optimized for them is the best way to ensure your workloads are achieving top performance in the most cost-effective way.

For example, the new z13 processor leverages a new Vector Facility for certain COBOL statements and expands the use of the Decimal Floating Point Facility for packed decimal calculations. Well-structured, compute-intensive batch applications running on z13 and compiled with the Enterprise COBOL V5.2 compiler have shown CPU usage reductions of up to 14% over the same applications running on zEC12 (compiled with the GA release of Enterprise COBOL V5.1), according to IBM. The result: improved workload price/performance.

Enterprise COBOL V5.2 also includes new features to improve programmability, developer productivity, and application modernization. Supporting JSON, for instance, will provide mobile applications easy access to data and the processing they need from business critical production applications written in COBOL.

The z13 and its z sister, the latest LinuxONE dedicated Linux models, were designed and optimized from the start for cloud, mobile, and analytics. They were intended to run alongside traditional mainframe workloads with z/OS or Linux running on the appropriate models.

Finally, plan to take advantage of the new assist processors and expanded memory capacity to further boost performance and lower cost. With the z13, there is a mandatory migration of all zAAP-enabled applications to zIIP. Expect the usage of the zIIP assist processors to surge when all those Java applications move from the zAAP.  ISVs like Compuware should be able to help with this.  In addition, if you enable SMT on the z13, you’ll immediately get more Java capacity.  Applications that run under IBM WebSphere (WAS) on z/OS will benefit too.

The z13 and especially the LinuxONE are breaking new ground. IBM has established, in conjunction with the Linux Foundation, an Open Mainframe Project to support and advance ongoing open source Linux innovation on the mainframe. IBM also is breaking with its traditional mainframe pricing model by offering a pay-per-use option in the form of a fixed monthly payment with costs scaling up or down based on usage. It also offers per-core pricing with software licenses for designated cores. See DancingDinosaur here.

An upcoming DancingDinosaur will look at more of the enhancements being added to these machines, including some of the latest LinuxONE enhancements like support for Google’s Go language and Cloudant’s NoSQL services. The message: the new z System can take you to the places you will want to be in this emerging cloud-mobile-analytics era.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Mobile Financial App Security Appears Shaky

January 15, 2016

IBM has made mobile a key strategic imperative going forward, even discounting mobile software license charges on z. However, a recent study suggests that mobile apps may be less secure than app users think. For example, 83% of the app users surveyed felt their applications were adequately secure. Yet, 90% of the applications Arxan Technologies tested were vulnerable to at least two of the Open Web Application Security Project (OWASP) Mobile Top 10 Risks.


The OWASP Top Ten is an awareness document for web application security. It represents a broad consensus about what the most critical web application security flaws are. Security experts use the list as a first step in changing the security awareness and software development culture around security in organizations around the world. You can find the Arxan report here.

In the latest study, 41% of mobile finance app users expect their finance apps to be hacked within the next six months. That’s not exactly a vote of confidence. Even worse, 42% of executive IT decision makers, those who have oversight or insight into the security of the mobile finance apps they produce, feel the same way.  Does this bother you?

It should. The researchers found that 81% of app users would change providers if apps offered by similar providers were more secure. While millennials are driving the adoption of mobile apps, their views on the importance of app security were just as strong as those of older non-millennials. Overall, survey results showed very few geographical discrepancies across the US, UK, Germany, and Japan.

This sentiment makes it sound like mobile finance applications are at a hopeless state of security where, despite Herculean efforts to thwart attackers, adversaries are expected to prevail. But the situation is not hopeless; it’s careless. Half the organizations aren’t even trying. Fully 50% of organizations have zero budget allocated for mobile app security—0, nothing, nada—according to the researchers.  By failing to step up their mobile security game organizations risk losing customers to competitors who offer alternative apps that are more secure.

How bad is the mobile security situation? When put to the test, the majority of mobile apps failed critical security tests and could easily be hacked, according to the researchers.  Among 55 popular mobile finance apps tested for security vulnerabilities, 92% were shown to have at least two OWASP Mobile Top 10 Risks. Such vulnerabilities could allow the apps to be tampered with and reverse-engineered, which could clearly put sensitive financial information in the wrong hands or, even worse, potentially redirect the flow of money. Ouch!

Think about all the banks and insurance companies that are scrambling to deploy new mobile apps. As it turns out, financial services organizations, the researchers report, also are among the top targets of hackers seeking high-value payment data, intellectual property (IP), and other sensitive information. Specifically, employee, customer, and soft IP data are the top three targets of cyber-attacks in the financial services market; while at the same time theft of hard IP soared 183% in 2015, according to PwC, another firm researching the segment.

With the vast majority of cyber-attacks happening at the application layer, one would think that robust application security would be a fundamental security measure being aggressively implemented and increasingly required by regulators, particularly given the financial services industry’s rapid embrace of mobile financial apps. But apparently it is not.

So where does the financial mobile app industry stand? Among the most prevalent OWASP Mobile Top 10 Risks identified among the mobile finance apps tested the top 2 risks were:

1) Lack of binary protection (98%) – this was the most prevalent vulnerability

2) Insufficient transport layer protection (91%).

A distant third, at 58%, was unintended data leakage. All these vulnerabilities, the top two especially, make the mobile financial applications susceptible to reverse-engineering and tampering in addition to privacy violations and identity theft.

Says Arxan CTO Sam Rehman: “The impact for financial institutions and mobile finance app users can be devastating. Imagine having your mobile finance app leak your personal financial information and identity, or your app maliciously redirecting your money.” The customer outrage and bad press that would follow wouldn’t be pretty, not to mention the costly lawsuits.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Makes a Big Play for the API Economy with StrongLoop

September 25, 2015

APIs have become essential in connecting systems of engagement with the systems of record typically found on the IBM z System. That’s one reason why IBM earlier this month acquired StrongLoop, Inc., a software provider that helps developers connect enterprise applications to mobile, Internet of Things (IoT) and web applications in the cloud mainly through rapidly proliferating and changing APIs.  Take this as a key signal IBM intends to be a force in the emerging API economy. Its goal is to connect existing enterprise apps, data, and SOA services to new channels via APIs.


Courtesy: developer.IBM.com

Key to the acquisition is StrongLoop’s position as a leading provider of tools for Node.js, the server-side JavaScript runtime that has become a favorite among developers needing to build applications using APIs. According to IBM, it intends to integrate Node.js capabilities from StrongLoop with its own software portfolio, which already includes MobileFirst and WebSphere, to help organizations better use enterprise data and conduct transactions whether in the cloud or on-premises.

These new capabilities, IBM continues, will enable organizations and developers to build scalable APIs and more easily connect existing back-end enterprise processes with front-end mobile, IoT, and web apps in an open hybrid cloud. Node.js is one of the fastest growing development frameworks for creating and delivering APIs, in part because it builds on JavaScript, which shortens the learning curve.

Although Node.js is emerging as the standard for APIs and micro-services, APIs still present challenges. These include the lack of an architected approach, limited scalability, multiple languages and point products, limited data connectors, and large, fragile monolithic applications.

Mainframe data centers, in particular, are sitting on proven software assets that beg to be broken out as micro-services to be combined and recombined to create new apps for use in mobile and Web contexts. As IoT ramps up the demand for these APIs and more will skyrocket.  And the mainframe data center will sit at the center of all this, possibly even becoming a revenue generator.

In response, StrongLoop brings API creation and lifecycle support and back-end data connectors. It also will integrate with IBM’s API management, creating an API platform that can enable polyglot run-times, integration, and API performance monitoring. It also will integrate with IBM’s MobileFirst Platform, WebSphere, and other products, such as Bluemix, to enable Node across the product portfolio. StrongLoop also brings Arc and its LoopBack framework, which handle everything from visual API modeling to process management for scaling APIs, plus a security gateway. Together StrongLoop Arc and IBM’s API Management can deliver the full API lifecycle. IBM also will incorporate select capabilities from StrongLoop into its IoT Foundation, a topic DancingDinosaur expects to take up in the future.

At the initial StrongLoop acquisition announcement Marie Wieck, general manager, Middleware, IBM Systems, alluded to the data center possibilities, as noted above: “Enterprises are focused on digital transformation to reach new channels, tap new business models, and personalize their engagement with clients. APIs are a critical ingredient.” The fast adoption of Node.js for rapidly creating APIs combined with IBM’s strength in Java and API management on the IBM cloud platform promises a winning strategy.

To make this even more accessible, IBM is adding Node.js to Bluemix, following a summer of enhancements to Bluemix covered here by DancingDinosaur just a few weeks ago. Java remains the leading language for web applications and transaction systems. Combining StrongLoop’s Node.js tools and services with IBM’s WebSphere and Java capabilities will help organizations bridge Java and Node.js development platforms, enabling enterprises to extract greater value from their application investments. Throw in integration on IBM Bluemix and the Java and Node.js communities will gain access to many other IBM and third-party services including access to mobile services, data analytics, and Watson, IBM’s crown cognitive computing jewel.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM POWER8 Tops STAC-A2 Benchmark in Win for OpenPOWER

June 25, 2015

In mid-March the Security Technology Analysis Center (STAC) released the first audited STAC-A2 Benchmark results for a server using the IBM POWER8 architecture. STAC provides technology research and testing tools based on community-source standards. The March benchmark results showed that an IBM POWER8-based server can deliver more than twice the performance of the best x86 server when running standard financial industry workloads.


IBM Power System S824

This is not IBM just blowing its own horn. The STAC Benchmark Council consists of a group of over 200 major financial firms and other algorithmic-driven enterprises as well as more than 50 leading technology vendors. Their mission is to explore technical challenges and solutions in financial services and develop technology benchmark standards that are useful to financial organizations.

The POWER8 system not only delivered more than twice the performance of the nearest x86 system but also set four new performance records for financial workloads, two of which apparently were new public records.  This marked the first time the IBM POWER8 architecture has gone through STAC-A2 testing.

The community-developed STAC-A2 benchmark set represents a class of financial risk analytics workloads characterized by Monte Carlo simulation and Greeks computations. Greeks computations cover theta, rho, delta, gamma, cross-gamma, model vega, and correlation vega. Together they are referred to as the Greeks. Quality is assessed for single assets by comparing the Greeks obtained from the Monte Carlo with Greeks obtained from a Heston closed-form formula for vanilla puts and calls.  Suffice to say, this is an extremely CPU-intensive set of computations. For more detail, click here.
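For a feel of the arithmetic involved, here is a toy sketch (emphatically not the STAC-A2 kernel) that uses Monte Carlo to estimate the price and pathwise delta of a single vanilla call under geometric Brownian motion; the real benchmark prices multi-asset options under the Heston model and computes the full set of Greeks.

    import java.util.Random;

    public class MonteCarloGreeksSketch {
        public static void main(String[] args) {
            double s0 = 100, k = 105, r = 0.02, sigma = 0.2, t = 1.0;
            int paths = 1_000_000;
            Random rng = new Random(42);

            double sumPayoff = 0, sumDelta = 0;
            double drift = (r - 0.5 * sigma * sigma) * t;
            double vol = sigma * Math.sqrt(t);
            for (int i = 0; i < paths; i++) {
                double z = rng.nextGaussian();
                double sT = s0 * Math.exp(drift + vol * z); // terminal price
                if (sT > k) {
                    sumPayoff += sT - k;
                    // Pathwise delta: d(payoff)/dS0 = 1{sT > k} * sT / s0
                    sumDelta += sT / s0;
                }
            }
            double disc = Math.exp(-r * t);
            System.out.printf("price ~ %.4f%n", disc * sumPayoff / paths);
            System.out.printf("delta ~ %.4f%n", disc * sumDelta / paths);
        }
    }

Multiply this inner loop by dozens of correlated assets and the full Greek set and it is easy to see why the benchmark stresses CPU, memory bandwidth, and cache the way it does.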

In this case, results were compared to other publicly-released results of warm runs on the Greeks benchmark (STAC-A2.β2.GREEKS.TIME.WARM). The two-socket POWER8 server, outfitted with two 12-core 3.52 GHz POWER8 processor cards, achieved:

  • 2.3x performance over the comparable x86 setup, an Intel white box with two Xeon E5-2699 v3 (Haswell EP) @ 2.30GHz.
  • 1.7x the performance of the best-performing x86 solution, an Intel white box with two Intel Xeon E5-2699 v3 processors (Haswell EP) @ 2.30GHz and one Intel Xeon Phi 7120A coprocessor.
  • Only 10% less performance than the best-performing solution, a Supermicro server with two 10-core Intel Xeon E5-2690 v2 @ 3.0GHz (Ivy Bridge) and one NVIDIA K80 GPU accelerator.

The POWER8 server also set new records for path scaling (STAC-A2.β2.GREEKS.MAX_PATHS) and asset capacity (STAC-A2.β2.GREEKS.MAX_ASSETS). Compared to the best four-socket x86-based solution — a server comprised of four Xeon E7-4890 v2 (Ivy Bridge EX) parts running at 2.80 GHz — the POWER8 server delivered:

  • Double the throughput.
  • 16 percent increase for asset capacity.

The STAC test system consisted of an IBM Power System S824 server with two 12-core 3.52 GHz POWER8 processor cards, equipped with 1TB of DRAM and running Red Hat Enterprise Linux version 7. The solution stack included the IBM-authored STAC-A2 Pack for Linux on Power Systems (Rev A), which used IBM XL, a suite for C/C++ developers that includes the C++ Compiler and the Mathematical Acceleration Subsystem libraries (MASS), and the Engineering and Scientific Subroutine Library (ESSL).

POWER8 processors are based on high performance, multi-threaded cores with each core of the Power System S824 server running up to eight simultaneous threads at 3.5 GHz. With POWER8 IBM also is able to tap the innovations of the OpenPOWER Foundation including CAPI and a variety of accelerators that have started to ship.

The S824 also brings a very high bandwidth memory interface that runs at 192 GB/s per socket which is almost three times the speed of a typical x86 processor. These factors along with a balanced system structure including a large internal 8MB per core L3 cache are the primary reasons why financial computing workloads run significantly faster on POWER8-based systems than alternatives, according to IBM.

Sumit Gupta, vice president of HPC and OpenPOWER operations at IBM, reports STAC-A2 gives a much more accurate view of the expected performance as compared to micro benchmarks or simple code loops. This is especially important when the challenge is big data.

In his blog on the topic, Gupta elaborated on the big data challenge in the financial industry and the POWER8 advantages. STAC-A2 is a set of standard benchmarks that help estimate the relative performance of full systems running complete financial applications. This enables clients in the financial industry to evaluate how systems will perform on real applications. “Those are the kind of results that matter—real results for real client challenges,” Gupta wrote.

Gupta went on to note that the S824 also has a very high bandwidth memory interface. Combined with the large L3 cache noted above it can run financial applications noticeably faster than alternatives.  Combine the STAC results with data recently published by Cabot Partners and you have convincing proof that IBM POWER8-based systems have taken the performance lead in the financial services space (and elsewhere). The Cabot Partners report evaluates functionality, performance, and price/performance across several industries, including life sciences, financial services, oil and gas, and analytics while referencing standard benchmarks as well as application-oriented benchmark data.

Having sat through numerous briefings on POWER8 performance, DancingDinosaur felt reassured, but he doesn’t have to actually run these workloads. It is encouraging, however, to see proof in the form of 3rd party benchmarks like STAC and reports from Cabot Partners. Check out Cabot’s OpenPOWER report here.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

IBM Continues Open Source Commitment with Apache Spark

June 18, 2015

If anyone believes IBM’s commitment to open source is a passing fad, forget it. IBM has invested billions in Linux, OpenPOWER through the OpenPOWER Foundation, and more. Its latest is the announcement of a major commitment to Apache Spark, a fast, general-purpose open source cluster computing system for big data.


Courtesy of IBM: developers work with Spark at Galvanize Hackathon

As IBM sees it, Spark brings essential advances to large-scale data processing. Specifically, it dramatically improves the performance of data-dependent apps and is expected to play a big role in the Internet of Things (IoT). In addition, it radically simplifies the process of developing intelligent apps, which are fueled by data. It does so by providing high-level APIs in Scala, Java, and Python, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.
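For a small taste of the developer experience, here is a minimal sketch using Spark’s Java API (the input path and record layout are made up): load transaction records, filter them, and aggregate, with Spark parallelizing the work across the cluster.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("txn-summary");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Hypothetical input: one CSV line per transaction: channel,amount
            JavaRDD<String> lines = sc.textFile("hdfs:///data/transactions.csv");
            JavaRDD<Double> amounts = lines
                    .map(line -> Double.parseDouble(line.split(",")[1]))
                    .filter(amt -> amt > 0);

            double total = amounts.reduce(Double::sum);
            long count = amounts.count();
            System.out.printf("avg transaction: %.2f over %d records%n",
                    total / count, count);

            sc.close();
        }
    }

The same few lines run unchanged from a laptop to a cluster, which is much of Spark’s appeal.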

IBM is contributing its breakthrough IBM SystemML machine learning technology to the Spark open source ecosystem. Beyond the advances already noted, maybe Spark’s biggest advantage is that it can handle data coming from multiple, disparate sources.

What IBM likes in Spark is that it’s agile, fast, and easy to use. IBM also likes that it is open source, which ensures it is improved continuously by a worldwide community. Those are also some of the main reasons mainframe and Power Systems data centers should pay attention to Spark.  Spark will make it easier to connect applications to data residing in your data center. If you haven’t yet noticed an uptick in mobile transactions coming into your data center, they will be coming, and these benefit from Spark. And if you look out just a year or two, expect to see IoT applications adding to and needing to combine all sorts of data, much of it ending up on the mainframe or Power System in one form or another. So make sure Spark is on your radar screen.

Over the course of the next few months, IBM scientists and engineers will work with the Apache Spark open community to accelerate access to advanced machine learning capabilities and help drive speed-to-innovation in the development of smart business apps. By contributing SystemML, IBM hopes data scientists will iterate faster to address the changing needs of business and to enable a growing ecosystem of app developers who will apply deep intelligence to everything.

To ensure that happens, IBM will commit more than 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide, and open a Spark Technology Center in San Francisco for the Data Science and Developer community to foster design-led innovation in intelligent applications. IBM also aims to educate more than 1 million data scientists and data engineers on Spark through extensive partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and Big Data University MOOC (Massive Open Online Course).

Of course, Spark isn’t going to be the end of tools to expedite the latest app dev. With IoT just beginning to gain widespread interest expect a flood of tools to expedite developing IoT data-intensive applications and more tools to facilitate connecting all these coming connected devices, estimated to number in the tens of billions within a few years.

DancingDinosaur applauds IBM’s decade-plus commitment to open source and its willingness to put real money and real code behind it. That means the IBM z System mainframe, the POWER platform, Linux, and the rest will be around for some time. That’s good; DancingDinosaur is not quite ready to retire.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

