Posts Tagged ‘SHARE’

Join Me at Share in Providence

August 3, 2017

SHARE runs all week, but DancingDinosaur plans to be there on Tues., 8/8. SHARE is happening at the Rhode Island Convention Center, Providence, RI, August 6–11, 2017. Get details at Share.org. That day's keynote looks interesting. As SHARE describes it: Security and regulatory compliance are concerns that impact every professional within your IT organization.

In the Tuesday Keynote presentation at SHARE Providence, expert panelists will offer their perspectives on how various roles are specifically impacted by security, and what areas you should be most concerned about in your own roles. Listen to your peers share their insights in a series of TED-style Talks, starting with David Hayes of the Government Accountability Office, who will focus on common compliance and risk frameworks. Stu Henderson of Henderson Consulting will discuss organizational values and how those interacting with the systems are part of the overall control environment, followed by Simon Dodge of Wells Fargo providing a look at proactive activities in the organization that are important for staying ahead of threats and reducing the need to play catch-up when the auditors arrive. In the final talk of the morning, emerging cyber security threats will be discussed by Buzz Woeckener of Nationwide Insurance, along with tips on how to be prepared. At the conclusion of their presentations, the panelists will address audience questions on the topics of security and compliance.

You’ll find me wandering around the sessions and the expo. Will be the guy wearing the Boston hat.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

IBM Discounts z/OS Cloud Activity

August 12, 2016

The latest iteration of IBM’s z/OS workload pricing aims to lower the cost of running cloud workloads.  In a recent announcement, z Systems Workload Pricing for Cloud (zWPC) for z/OS seeks to minimize the impact of new public cloud workload transaction growth on Sub-Capacity license charges. IBM did the same thing with mobile workloads when they started driving up the 4-hour workload averages on the z. As more z workloads interact with public clouds, this should start to add up, if it hasn’t already.

Bluemix Garages in the Cloud

As IBM puts it: zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms for the usual MLC software suspects. These include z/OS, CICS, DB2, IMS, MQ, and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud-originated, connecting to a z/OS-hosted transactional service and/or data source via a REST or SOAP web service.  Public cloud workloads are defined as transactions processed by named public cloud applications identified as originating from a recognized Public Cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, and IBM Bluemix.

IBM appears to have simplified how you identify eligible workloads. As the company notes: zWPC does not require you to isolate the public cloud work in separate partitions, but rather offers an enhanced way of reporting. The z/OS Workload Manager (WLM) allows clients to use WLM classification rules to distinguish cloud workloads, effectively easing the data collection requirements for public cloud workload transactions.
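IBM has not spelled out the rule syntax here, but conceptually the classification step amounts to tagging each transaction with its origin and counting only the cloud-originated ones as zWPC-eligible. The Python sketch below illustrates that idea only; the field names and origin labels are invented, and actual WLM classification rules look nothing like Python.

```python
# Conceptual sketch only -- not WLM syntax. It illustrates classifying
# transactions by origin so cloud-originated work can be reported separately.

RECOGNIZED_PUBLIC_CLOUDS = {"aws", "azure", "bluemix"}  # illustrative labels

def is_zwpc_eligible(txn: dict) -> bool:
    """A transaction counts as cloud-originated if it arrives from a
    recognized public cloud via a REST or SOAP web service."""
    return (txn.get("origin") in RECOGNIZED_PUBLIC_CLOUDS
            and txn.get("protocol") in {"REST", "SOAP"})

transactions = [
    {"origin": "aws", "protocol": "REST", "msu": 0.4},
    {"origin": "branch_office", "protocol": "MQ", "msu": 1.1},
    {"origin": "azure", "protocol": "SOAP", "msu": 0.7},
]

eligible_msu = sum(t["msu"] for t in transactions if is_zwpc_eligible(t))
print(f"Cloud-eligible MSU this interval: {eligible_msu:.1f}")
```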

So how much will you save? It reportedly reduces eligible hourly values by 60 percent. The discount produces an adjusted Sub-Capacity value for each reporting hour. What that translates into on your monthly IBM software invoice once all the calculations and fine print are considered amounts to a guess at this point. But at least you’ll save something. The first billing eligible under this program starts Dec. 1, 2016.
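Based on IBM’s description, the per-hour arithmetic is straightforward: take the MSU attributable to eligible cloud transactions, reduce it by 60 percent, and the result rolls into the adjusted Sub-Capacity value for that reporting hour. Here is a rough sketch of that calculation; the hourly figures are invented, and the real invoice math involves far more fine print.

```python
# Illustrative only: per-hour adjustment under zWPC as described above.
# Real sub-capacity reporting involves considerably more detail.

def adjusted_hourly_msu(total_msu: float, cloud_eligible_msu: float,
                        discount: float = 0.60) -> float:
    """Reduce the cloud-eligible portion of an hour's MSU by the discount."""
    return total_msu - cloud_eligible_msu * discount

hourly_report = [  # (total MSU, cloud-eligible MSU) per reporting hour
    (500.0, 120.0),
    (620.0, 200.0),
    (480.0, 90.0),
]

for total, cloud in hourly_report:
    print(f"reported {total:6.1f} MSU -> adjusted "
          f"{adjusted_hourly_msu(total, cloud):6.1f} MSU")
```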

DancingDinosaur expects IBM to eventually follow with discounted z/OS workload pricing for IoT and blockchain transactions and maybe even cognitive activity. Right now the volume of IoT and blockchain activity is probably too low to impact anybody’s monthly license charges. Expect those technologies to ramp up in coming years with many industry pundits projecting huge numbers—think billions and trillions—that will eventually impact the mainframe data center and associated software licensing charges.

Overall, Workload License Charges (WLC) constitute a monthly software license pricing metric applicable to IBM System z servers running z/OS or z/TPF in z/Architecture (64-bit) mode.  The driving principle of WLC amounts to pay-for-what-you-use, a laudable concept. In effect it lowers the cost of incremental growth while further reducing software costs by proactively managing associated peak workload utilization.
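For readers new to sub-capacity pricing, the pay-for-what-you-use principle shows up as the peak rolling four-hour average (R4HA) of MSU consumption for the month, which is what the monthly bill keys on rather than installed capacity. A simplified illustration follows; real reporting works from SMF data at finer granularity than the hourly samples assumed here.

```python
# Simplified R4HA illustration: the billable figure is the peak of the
# rolling four-hour average of MSU consumption, not the machine's full capacity.

def peak_rolling_four_hour_average(hourly_msu: list[float]) -> float:
    window = 4
    averages = [
        sum(hourly_msu[i:i + window]) / window
        for i in range(len(hourly_msu) - window + 1)
    ]
    return max(averages)

samples = [300, 320, 310, 900, 880, 860, 400, 350]  # hypothetical hourly MSU
print(f"Peak R4HA: {peak_rolling_four_hour_average(samples):.1f} MSU")
```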

Generally, DancingDinosaur applauds anything IBM does to lower the cost of mainframe computing.  Playing with workload software pricing in this fashion, however, seems unnecessary. Am convinced there must be simpler ways to lower software costs without the rigmarole of metering and workload distribution tricks. In fact, a small mini-industry has cropped up among companies offering tools to reduce costs, primarily through various ways to redistribute workloads to avoid peaks.

Modifications to WLC, including variable WLC (VWLC), AWLC (Advanced), and EWLC (Entry), align with most of the z machines introduced over the past couple of years.  The result, according to IBM, is a granular cost structure based on MSU (CPU) capacity that applies to VWLC and associated pricing mechanisms.

From there you can further tweak the cost by deploying Sub-Capacity and Soft Capping techniques.  Defined Capacity (DC), according to IBM, allows the sizing of an LPAR in MSU such that the LPAR will not exceed the designated MSU amount.  Group Capacity Limit (GCL) extends the Defined Capacity principle for a single LPAR to a group of LPARs, allowing MSU resources to be shared accordingly.  BTW, a potential downside of GCL is that one LPAR in the group can consume all available MSUs due to a rogue transaction. Again, an entire mini-industry, or maybe not so mini, has emerged to help handle workload and capacity pricing on the z.
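The rogue-LPAR caveat is easier to see with a toy model: Defined Capacity caps a single LPAR, while a Group Capacity Limit caps the sum across the group, so one runaway LPAR can squeeze its neighbors under the shared cap. The sketch below is purely conceptual; the LPAR names, demand figures, and the simple proportional squeeze are illustrative, not how WLM actually enforces capping.

```python
# Toy model of Group Capacity Limit (GCL): the group as a whole is capped,
# so a rogue LPAR's demand squeezes everyone sharing the limit.

def apply_group_cap(demands: dict[str, float], group_limit: float) -> dict[str, float]:
    """Scale each LPAR's demand down proportionally when the group exceeds its cap."""
    total = sum(demands.values())
    if total <= group_limit:
        return dict(demands)
    scale = group_limit / total
    return {lpar: msu * scale for lpar, msu in demands.items()}

normal = {"PROD1": 200.0, "PROD2": 150.0, "TEST1": 50.0}
rogue  = {"PROD1": 200.0, "PROD2": 150.0, "TEST1": 600.0}  # runaway transaction

for label, demands in (("normal", normal), ("rogue", rogue)):
    print(label, apply_group_cap(demands, group_limit=450.0))
```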

At some point in most of the conference pricing sessions the eyes of many attendees glaze over.  By Q&A time the few remaining pop up holding a copy of a recent invoice and ask what the hell this or that means and what the f$#%@#$ they can do about it.

Have to admit that DancingDinosaur did not attend the most recent SHARE conference, where pricing workshops can get quite energetic, so cannot attest to the latest fallout. Still, the general trend with mobile and now with cloud pricing discounts should be lower costs.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

SHARE Puts Spotlight on Mainframe Storage, Compression, and More

August 16, 2013

The mainframe crowd gathered this week at SHARE Boston for a dive into the intricacies of the mainframe.  Storage, especially through the MVS Storage Management Project, grabbed a lot of attention. One of the project’s primary ongoing activities—identifying new requirements and communicating their priority to IBM—clearly bore fruit as seen in Barbara McDonald’s DFSMS Latest and Greatest session.

McDonald started out by noting that many of the enhancements, maybe all, in one way or another address the explosive growth and management of customer data, which is being driven by surging interest in big data. To start, DFSMS exploits zEnterprise Data Compression (zEDC) Express, low-cost data compression for z/OS.

Specifically, zEDC, combined with a new hardware feature of the zEC12/zBC12 called zEDC Express, offers compression acceleration designed for high performance, low latency compression with little additional overhead. According to IBM it is optimized for use with large sequential files and it uses zlib, an industry standard compression library. zEDC is complementary to existing System z compression technology. Smaller records and files should still use z/OS hardware compression instructions to get best benefit while larger files will be directed to zEDC.
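Because zEDC speaks the standard zlib (DEFLATE) format, the large-file bias is easy to demonstrate with the ordinary zlib library on any platform. The snippet below uses made-up demo data and says nothing about zEDC performance; it simply shows why big, repetitive sequential data compresses well while tiny records barely benefit.

```python
import zlib

# zEDC uses the industry-standard zlib (DEFLATE) format; this just shows
# why large, repetitive sequential data compresses far better than tiny records.
large_sequential = b"2013-08-16 12:00:00 TRANSACTION OK ACCOUNT=000123\n" * 10_000
small_record = b"2013-08-16 12:00:00 TRANSACTION OK ACCOUNT=000123\n"

for name, data in (("large sequential", large_sequential), ("small record", small_record)):
    compressed = zlib.compress(data, level=6)
    print(f"{name}: {len(data)} -> {len(compressed)} bytes "
          f"({len(data) / len(compressed):.1f}x)")
```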

Support for additional access methods, specifically BSAM and QSAM, is planned for the end of the first quarter of 2014. This should save disk space and improve effective channel and network bandwidth without incurring significant CPU overhead.

The latest enhancements to DFSMS, among other things, also will enable increased data storage capacity and scalability, handle cross platform data and storage, and ensure data availability at all levels of the storage hierarchy.  They also enhance business resiliency through improved point-in-time copy, fast replication, and continuous data mirroring functions.

Although DFSMS proved interesting at SHARE and included dozens of enhancements, many suggested at previous SHARE conferences, there actually was much more going on. For example, a session on Real-time Compression (RtC), another innovative compression technology from IBM, drew a good crowd.

The RtC session was run by Steve Kenniston, IBM’s business line executive for data protection. RtC differs from zEDC, starting with different compression algorithms; RtC uses LZ where zEDC uses zlib. RtC also leverages over 35 IBM patents to enable real-time compression of active, production data with no impact on either application or storage performance. Kenniston also noted that RtC, which he called the industry’s only heterogeneous storage compression technology, now handles both block and file data. Find his RtC presentation here.

Kenniston drilled into the how and why of RtC and its key features, such as compressing active data on the fly. Some of it comes down to the difference between fixed and variable compression.  Fixed compression, which most competitors do, has to add changes to the data at the end, which leads EMC and NetApp to quip that with RtC you could end up with a compressed file bigger than the original due to the adds at the end.

Not so. RtC sits in the data stream in front of the storage and sees all the data. All the like data is compressed as it is written together.  The data then is put into known fixed segments. It amounts to a journaled file system inside the compressed file. RtC tracks where the compressed segments are put. When compressed data is deleted, RtC knows where the data was in the compressed file and can put new like data in the now empty space. At worst, the RtC compressed data temporarily becomes a single I/O larger than it started.
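That “journaled file system inside the compressed file” idea can be pictured as a map of fixed slots plus a free list: deleting data frees a slot, and new like data goes into the hole instead of being appended. The toy Python sketch below illustrates only that bookkeeping concept and reflects nothing about RtC’s actual on-disk format.

```python
# Conceptual only: compressed data lives in fixed-size segments; freed slots
# are reused for new like data instead of growing the compressed file.

class SegmentStore:
    def __init__(self):
        self.segments = []   # fixed-size slots; None marks a freed slot
        self.index = {}      # key -> slot number holding that key's data

    def write(self, key, compressed_chunk):
        if None in self.segments:            # reuse a freed slot first
            slot = self.segments.index(None)
            self.segments[slot] = compressed_chunk
        else:                                # otherwise append a new segment
            slot = len(self.segments)
            self.segments.append(compressed_chunk)
        self.index[key] = slot

    def delete(self, key):
        self.segments[self.index.pop(key)] = None

store = SegmentStore()
store.write("rec1", b"\x01" * 8)
store.write("rec2", b"\x02" * 8)
store.delete("rec1")
store.write("rec3", b"\x03" * 8)  # lands in rec1's old slot; the file does not grow
print(len(store.segments))        # 2
```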

RtC also uses time-based optimization rather than location-based optimization as IBM’s competitors do.  Since it sits in the data stream and understands how the data was created RtC can achieve greater optimization.  And since it doesn’t have to do reads before writes it can get better compression ratios. In one example Kenniston showed RtC delivering up to 5x compression while maintaining or improving database transaction response time and overall business throughput.

In general, companies can expect a 30%-40% reduction in $/GB for common configurations.  In one case, 168 900GB SAS drives representing 100TB listed for $757,880. Using RtC, you needed only 67 SAS drives to get equivalent capacity. The cost, including the RtC software license and three years of maintenance, was $460,851. The $297,000 difference amounted to a 40% savings.
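For anyone checking the math, the savings percentage falls straight out of the two list prices Kenniston quoted; nothing else in this snippet comes from IBM.

```python
# Verify the quoted savings: 168-drive configuration vs. 67 drives plus RtC license.
baseline = 757_880   # 168 x 900GB SAS drives, list price
with_rtc = 460_851   # 67 drives + RtC license + 3 years of maintenance
savings = baseline - with_rtc
print(f"${savings:,} saved, {savings / baseline:.1%} of list")  # ~$297,029, roughly 40%
```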

There was more interesting stuff at SHARE, including how the rise of BYOD smartphones and tablets could return the mainframe data center to a new version of 3270 computing, sort of an interesting back-to-the-future scenario.  Will take that up and more in another posting.

Open POWER Consortium Aims to Expand the POWER Ecosystem beyond IBM

August 7, 2013

With IBM’s August 6 announcement of new POWER partners, including Google, IBM aims not only to expand the variety of POWER workloads but also to establish an alternative ecosystem to Intel/x86, which continues to dominate general corporate computing.  Through the new Open POWER Consortium, IBM will make POWER hardware and software available for open development for the first time, as well as offer open-source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can enable innovative customization in creating new styles of server hardware for a variety of computing workloads.

IBM has a long history of using open consortiums to grab a foothold in different markets, as it did with Eclipse (open software development tools), Linux (open portable operating system), KVM (open hypervisor and virtualization), and OpenStack (open cloud interoperability). In each case, IBM had proprietary technologies but used the open source consortium strategy to expand market opportunities at the expense of entrenched proprietary competitors like Microsoft or VMware.  The Open POWER Consortium opens a new front against Intel, which already is scrambling to fend off ARM-based systems and other lightweight processors.

The establishment of the Open POWER Consortium also reinforces IBM’s commitment to the POWER platform in the face of several poor quarters. The commitment to POWER has never really wavered, insists an IBM manager, despite what financial analysts might hint at. Even stronger evidence of that commitment to POWER is POWER8, which is on track for 2014 if not sooner, and POWER9, which is currently in development, he confirmed.

As part of its initial collaboration within the consortium, IBM reported that it and NVIDIA will integrate NVIDIA’s CUDA GPU technology with POWER.  CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).  GPUs increasingly are being used to boost overall system performance, not just graphics performance. The powerful computing systems the two companies envision, based on NVIDIA GPUs and IBM’s POWER CPUs, represent an example of the new kind of systems the open consortium can produce.

However, don’t expect immediate results.  The IBM manager told DancingDinosaur that the fruits of any collaboration won’t start showing up until sometime next year. Even the Open POWER Consortium website has yet to post anything. The consortium is just forming up; IBM expects the public commitment of Google to attract other players, which IBM describes as the next generation of data-center innovators.

As for POWER users, this can only be a good thing. IBM is not reducing its commitment to the POWER roadmap, plus users will be able to enjoy whatever the new players bring to the POWER party, which could be considerable. In the meantime, the Open POWER Consortium welcomes any firm that wants to innovate on the POWER platform and participate in an open, collaborative effort.

An even more interesting question may be where else will IBM’s interest in open systems and open consortiums take it. IBM remains “very focused on open and it’s a safe bet that IBM will continue to support open technologies and groups that support that,” the IBM manager told DancingDinosaur.  IBM, however, has nothing further to announce beyond the Open POWER Consortium. Hmm, might a z/OS open collaborative consortium someday be in the works?

SHARE will be in Boston next week. DancingDinosaur expects to be there and will report on the goings-on. Hope to see some of you there.  There also are plans for a big IBM System z/Power conference, Enterprise Systems 2013, toward the end of October in Florida.  Haven’t seen many details yet, but will keep you posted as they come in.

SHARE System z mainframe survey results

August 9, 2010

SHARE, the independent mainframe and System z user group, presented the results of its recent survey of data center concerns at its twice-yearly conference held in Boston last week. The biggest surprise among the results was how low identity management ranked (#20) among the issues that keep enterprise data center managers awake at night. Given all the publicity around security breaches, identity theft, and privacy leaks, you’d think identity management would have ranked at least a little higher.

The top five concerns were predictable:

  1. Cost management, reduction, and/or avoidance
  2. Server virtualization, followed by storage and network virtualization
  3. Improving the value of IT or deriving competitive advantage from IT
  4. Enterprise security
  5. Regulatory compliance (SOX, PCI DSS, HIPAA, Basel II, FISMA)

Given the Top 5 ranking of security and compliance, it is surprising that identity management ranked at the bottom.

The mainframe should do very well when it comes to addressing the Top 5 at least. As noted here many times, although the mainframe remains costly to acquire, it does very well in terms of long term TCO.  Similarly, the mainframe is far more extensively virtualized than any distributed platform; nothing touches z/VM when it comes to large scale, industrial strength virtualization.

Only when it comes to deriving competitive advantage from IT do mainframe shops tend to fall short. Today big wins in improving business value or gaining competitive advantage from IT come through SOA and business intelligence (BI) on the mainframe. Mainframe shops have only started to make gains with BI and SOA in recent years. You can check out some of my System z BI and SOA mainframe case studies here and here.  The new zEnterprise should spur this along.

Another interesting result occurred around the contradictory ideas of reinvigorating/repurposing the mainframe and replacing the mainframe. The idea of reinvigorating the mainframe proved more popular among SHARE respondents (#7) than replacing the mainframe (#10). Again, the new zEnterprise with its ability to closely integrate and manage workloads running on the z and on x86 and POWER7 blades should make reinvigorating the mainframe a more attractive option, especially in organizations already running POWER7 and x86 workloads.

In keeping with the reinvigorate rather than replace tone of the survey responses, respondents also were less enthusiastic about offshore outsourcing (#14) and outsourcing in general (#16).

Among other interesting results, cloud computing ranked sixth, just out of the top five. The z certainly is a good candidate for cloud hosting. Some argue that a mainframe data center already is an internal cloud. The new zEnterprise should only encourage more thinking along the lines of private clouds.

Finally, not surprising but still disappointing were the low ranking of mobile platforms (#15) and social media (#19). Mobile clients increasingly will be the way end-users access mainframe data and applications. No problem; the mainframe can handle mobile today, even interfacing with iPhones via RDz. Social media, too, will play bigger roles in enterprise computing. Again, the mainframe can play here too, especially through SOA.

