Posts Tagged ‘SMF’

Syncsort Survey Unveils 5 Ways Z Users Are Saving Money

January 9, 2018

Syncsort Inc. recently completed its year-end 2017 State-of-the-Mainframe annual survey of IT professionals. Over the past year, the organizations surveyed increased their spending on mainframe capacity, new mainframe applications, and mainframe data analytics. The IBM z/OS mainframe remains an important focus in organizations, with the majority of respondents reporting that the mainframe serves as the hub for business-critical applications by providing high-volume transaction and database processing.

More interestingly, Syncsort notes, a high number of respondents indicated they’ll use the mainframe to run revenue-generating services over the next 12 months, another clear indication that the mainframe remains integral to the business.

However, the survey also reflects concerns over the high cost of the mainframe. In effect, mainframe optimization, cost reduction, and spending remain at the forefront, with many organizations looking to leverage zIIP engines to offload general processor cycles, which maximizes resources, delays or avoids hardware upgrades, and lowers monthly software charges.

At the same time, some organizations are looking at mainframe optimization to fund strategic projects, such as enhanced mainframe data analytics to support better business decisions, meet SLAs, and advance security and compliance initiatives. All of this may relieve the pressure to jump to a lower-cost platform (x86) in the hope of reducing spending.

But apparently that is not enough in a number of cases. Despite the focus on optimization, the survey notes, nearly 20% of respondents plan to move off the mainframe completely in 2018. DancingDinosaur, however, has spent decades writing about mainframe-is-dead efforts, and such migrations invariably take longer, cost more, often much more, than expected, and sometimes are never fully achieved. Rebuilding a no-fail, scalable, and secure business platform elsewhere has proven extremely difficult and costly.

Costly as the mainframe is, you can get it up and running dependably for less than you will end up paying to cobble together bare-metal x86 boxes. But if you try, please let me know, and I will check back with you next year to publicize your success. One exception might be if you opt for a 100% cloud solution; again, let me know if it works and how much you save, and I’ll make you a hero.

In the meantime, here are five ways respondents expect to save money by streamlining operations through mainframe-based optimization:

  1. This year organizations aim to redirect budget dollars to strategic projects such as mainframe data analytics. Optimization will primarily focus on general processor usage by leveraging zIIP engines and MSU optimization tools. Some organizations will take it a step further and target candidate workloads to move off the mainframe (possibly to a hybrid cloud) to ensure sufficient capacity remains for business-critical applications.
  2. Big data analytics for operational intelligence, security, and compliance will continue to grow and emerge as a critical effort, as will ensuring that IT services are delivered effectively to meet SLAs. Mainframe data sources will be critical in helping to address these challenges.
  3. Integration of mainframe data with modern analytics tools will become pervasive and critically important as organizations look to exploit this abundance of information for enhanced visibility. Integrating mainframe machine data will not only provide enhanced visualization but will enable correlation with data sources from other platforms. Additionally, new analytics technologies, like Splunk, will make mainframe application data more readily available to business analysts who typically aren’t mainframe experts, while addressing the diminishing pool of mainframe talent by putting rich, easy tools into the hands of newer staff.
  4. SMF and z/OS log data will play an increased role in addressing security exposures, fulfilling audit requirements, and meeting compliance mandates, a key initiative for IT executives and IT organizations. Here think pervasive encryption on Z. Overall, organizations are looking to leverage analytics platforms for security and compliance; along with SMF and other z/OS log data, they will look to Splunk, Elastic, and Hadoop (see the sketch after this list).
  5. Data movement across the variety of platforms in distributed enterprises presents important challenges; it must be secured, monitored, and performed efficiently. With over half of mainframe organizations still lacking full visibility into this movement, it must become a priority.
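To make point 4 a bit more concrete, here is a minimal sketch of pushing SMF-derived records into Splunk through its HTTP Event Collector (HEC). It assumes the SMF records have already been extracted and converted to JSON off-host; the endpoint, token, index, sourcetype, and field names are placeholders, and purpose-built products (Syncsort’s Ironstream, for example) handle this pipeline end to end far more robustly than a script like this.

```python
# Minimal sketch: forward SMF-derived records (already converted to JSON
# elsewhere) to Splunk's HTTP Event Collector. The URL, token, index,
# sourcetype, and field names are hypothetical placeholders.
import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def send_smf_record(record: dict) -> None:
    """Post one SMF-derived record to Splunk as a structured event."""
    payload = {
        "sourcetype": "zos:smf",   # hypothetical sourcetype
        "index": "mainframe",      # hypothetical index
        "event": record,
    }
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
    )
    resp.raise_for_status()

# Example: a made-up SMF type 30 job summary
send_smf_record({"smf_type": 30, "jobname": "PAYROLL1", "cpu_seconds": 12.4})
```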

Over the years, DancingDinosaur has written up every opportunity to lower mainframe costs or optimize operations. Find some of these here, here, and here.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Latest New Mainframe puts Apache Spark Native on the z System

April 1, 2016

IBM keeps rolling out new versions of the z System.  The latest is the z/OS Platform for Apache Spark announced earlier this week. The new machine is optimized for marketers, data analysts, and developers eager to apply advanced analytics to the z’s rich, resident data sets for real-time insights.

z/OS Platform for Apache Spark

Data is everything in the new economy; the more and better data you can grab, and the faster you can analyze it, the more likely you are to win. The z, already the center of a large, expansive data environment, is well positioned to drive winning data-fueled strategies.

IBM z/OS Platform for Apache Spark enables Spark, an open-source analytics framework, to run natively on z/OS. According to IBM, the new system is available now. Its key advantage: it enables data scientists to analyze data in place on the system of origin, eliminating the need to perform extract, transform, and load (ETL), a cumbersome, slow, and costly process. Instead, with Spark the z breaks the bind between the analytics library and the underlying file system.
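As a rough illustration of what analyzing data in place looks like to a data scientist, the sketch below reads a DB2 for z/OS table into a Spark DataFrame over JDBC and queries it with Spark SQL, with no intermediate ETL job. The JDBC URL, credentials, and table name are invented for illustration, and IBM’s platform supplies its own z/OS data abstraction layer that this generic JDBC call only approximates.

```python
# Hedged sketch: query DB2 for z/OS data in place from Spark, no ETL step.
# The JDBC URL, credentials, and table names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("zos-in-place-analytics").getOrCreate()

transactions = (
    spark.read.format("jdbc")
    .option("url", "jdbc:db2://zhost.example.com:446/DSNDB2A")  # placeholder
    .option("driver", "com.ibm.db2.jcc.DB2Driver")
    .option("dbtable", "PROD.CARD_TRANSACTIONS")                # placeholder
    .option("user", "sparkusr")
    .option("password", "********")
    .load()
)

transactions.createOrReplaceTempView("card_txn")

# Standard Spark SQL against data that never left the mainframe
spark.sql("""
    SELECT MERCHANT_CAT, COUNT(*) AS txn_count, SUM(AMOUNT) AS total
    FROM card_txn
    GROUP BY MERCHANT_CAT
    ORDER BY total DESC
""").show(10)
```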

Apache Spark provides an open-source cluster computing framework with in-memory processing to speed analytic applications up to 100 times faster than other technologies on the market today, according to IBM. Apache Spark can help reduce data interaction complexity, increase processing speed, and enhance mission-critical applications by enabling analytics that deliver deep intelligence. Considered highly versatile across many environments, Apache Spark is best regarded for its ease of use in creating algorithms that extract insight from complex data.

IBM’s goal lies not just in eliminating the overhead of ETL but in fueling interest in cognitive computing. With cognitive computing, data becomes a fresh natural resource—an almost infinite and forever renewable asset—that can be used by computer systems to understand, reason, and learn. To succeed in this cognitive era businesses must be able to develop and capitalize on insights before those insights are no longer relevant. That’s where the z comes in.

With this offering, according to IBM, accelerators from z Systems business partners can help organizations more easily take advantage of z Systems data and capabilities to understand market changes alongside individual client needs. With this kind of insight managers should be able to make the necessary business adjustments in real-time, which will speed time to value and advance cognitive business transformations among IBM customers.

At this point IBM has identified three business partners:

  1. Rocket Software, long a mainframe ISV, is bringing its new Rocket Launchpad solution, which allows z shops to try the platform using data on z/OS.
  2. DataFactZ is a new partner working with IBM to develop Spark analytics based on Spark SQL and MLlib for data and transactions processed on the mainframe.
  3. Zementis brings its in-transaction predictive analytics offering for z/OS with a standards-based execution engine for Apache Spark. The product promises to allow users to deploy and execute advanced predictive models that can help them anticipate end users’ needs, compute risk, or detect fraud in real time at the point of greatest impact, while a transaction is being processed.

This last point, detecting problems in real time at the point of greatest impact, is really the whole reason for Spark on z/OS. You have to leverage your insight before the prospect makes the buying decision or the criminal gets away with a fraudulent transaction. After that, your chances are slim to none of getting a prospect to reverse the decision or of recovering stolen goods. Having the data and logic processing online and in-memory on the z gives you the best chance of getting the right answer fast, while you can still do something about it.

As IBM also notes, the z/OS Platform for Apache Spark includes Spark open source capabilities consisting of the Apache Spark core, Spark SQL, Spark Streaming, Machine Learning Library (MLlib), and GraphX, combined with the industry’s only mainframe-resident Spark data abstraction solution. The new platform helps enterprises derive insights more efficiently and securely. In the process, the platform can streamline development to speed time to insights and decisions and simplify data access through familiar data access formats and Apache Spark APIs.
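Because MLlib is part of the bundle, a fraud-scoring model of the kind the Zementis example describes can be trained directly against mainframe-resident data. The sketch below is a deliberately tiny, self-contained illustration; the inline sample data and column names (AMOUNT, HOUR_OF_DAY, IS_FRAUD) are invented, and in practice the DataFrame would be loaded in place from a z/OS source as in the earlier JDBC sketch.

```python
# Hedged sketch: train a simple fraud-scoring model with Spark MLlib.
# The sample rows and column names are invented for illustration only.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("zos-fraud-sketch").getOrCreate()

# Stand-in for a DataFrame loaded in place from a z/OS source
txn = spark.createDataFrame(
    [(12.50, 14, 0), (980.00, 3, 1), (45.10, 11, 0), (1500.00, 2, 1)],
    ["AMOUNT", "HOUR_OF_DAY", "IS_FRAUD"],
)

assembler = VectorAssembler(
    inputCols=["AMOUNT", "HOUR_OF_DAY"], outputCol="features"
)
model = LogisticRegression(labelCol="IS_FRAUD", featuresCol="features").fit(
    assembler.transform(txn)
)

# Score transactions; a production deployment would do this in-transaction
# rather than in a batch job like this sketch.
scored = model.transform(assembler.transform(txn))
scored.select("AMOUNT", "HOUR_OF_DAY", "probability", "prediction").show()
```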

Best of all, however, are the in-memory capabilities noted above. Apache Spark uses an in-memory approach to processing data in order to deliver results quickly. The platform includes data abstraction and integration services that enable z/OS analytics applications to leverage standard Spark APIs. It also allows analysts to collect unstructured data and use their preferred formats and tools to sift through it.
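A small illustration of the in-memory point: once a DataFrame has been cached, repeated queries are served from executor memory rather than by re-reading the source each time. The data and column names below are invented; in practice the DataFrame would come from one of the z/OS sources mentioned above.

```python
# Hedged sketch of Spark's in-memory processing: cache once, query many
# times without going back to the source. Data and columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("in-memory-sketch").getOrCreate()

df = spark.createDataFrame(
    [("ATM", 200.0), ("POS", 35.5), ("POS", 1200.0), ("ATM", 60.0)],
    ["CHANNEL", "AMOUNT"],
)

df.cache()   # pin the DataFrame in executor memory
df.count()   # one full pass materializes the cache

# Both of the queries below are then served from memory
df.groupBy("CHANNEL").agg(F.sum("AMOUNT").alias("TOTAL")).show()
df.filter(F.col("AMOUNT") > 100).count()
```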

At the same time, developers and analysts can take advantage of familiar tools and programming languages, including Scala, Python, R, and SQL, to reduce time to value for actionable insights. Of course, all the familiar z/OS data formats are available too: IMS, VSAM, DB2 for z/OS, PDSE, and SMF, along with whatever you reach through the Apache Spark APIs.

This year we have already seen the z13s and now the z/OS Platform for Apache Spark. Add to that LinuxONE last year. z-based data centers suddenly have a handful of radically different new mainframes to consider. Can Watson, a POWER-based system, be far behind? Your guess is as good as anyone’s.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

