Posts Tagged ‘zEnterprise’

IBM Commits $1B to Drive Watson into the Mainstream

January 10, 2014

IBM is ready to propel Watson beyond Jeopardy, its initial proof-of-concept, and into mainstream enterprise computing. To that end, it announced plans to spend more than $1 billion on the recently formed Watson business unit, an amount that includes $100 million in venture investments to build an ecosystem of entrepreneurs developing Watson-powered apps.

In addition, companies won’t need racks of Power servers to run Watson. With a series of announcements yesterday IBM unveiled plans to deliver Watson capabilities as business-ready cloud services. The announcement focused on three Watson services: 1)  Watson Discovery Advisor for research and development projects in industries such as pharmaceutical, publishing and biotechnology; 2) Watson Analytics to deliver visualized big data insights based on questions posed in natural language by any business user; and 3) IBM Watson Explorer to more easily uncover and share data-driven insights across the enterprise.

DancingDinosaur has been following Watson since its Jeopardy days and has long since gotten over the disappointment that Watson didn’t run on the Power side of a hybrid zEnterprise. In the meantime, IBM has managed to shrink Watson considerably. Today Watson runs 24x faster (a 2,400% performance improvement) and is 90% smaller; IBM has shrunk it from the size of a master bedroom to three stacked pizza boxes. You don’t even need to locate it in your data center; you can run it in the cloud.

Following the introduction of Watson, IBM was slow to build on that achievement. It focused on healthcare and financial services, use cases that appeared to be no-brainers. Eventually it experienced success, particularly in healthcare, but the initial customers came slowly and the implementations appeared cumbersome.

Watson, at least initially, wasn’t going to be a simple deployment. It needed a ton of Power processors. It also needed massive amounts of data; in healthcare IBM collected what amounted to the entire library of the world’s medical research and knowledge. And it needed applications that took advantage of Watson’s formidable yet unusual capabilities.

The recent moves to deliver Watson via the cloud and to underwrite application developers should definitely help. And yesterday’s announcement of what amounts to three packaged Watson services should speed deployment.

For example, Watson Analytics, according to IBM, removes common impediments in the data discovery process, enabling business users to quickly and independently uncover new insights in their data. Using sophisticated analytics and aided by Watson’s natural language interface, Watson Analytics automatically prepares the data, finds the most important relationships, and presents the results in an easy to interpret interactive visual format. As a result, business users are no longer limited to predefined views or static data models. Better yet, they can feel empowered to apply their own knowledge of the business to ask and answer new questions as they emerge. They also will be able to quickly understand and make decisions based on Watson Analytics’ data-driven visualizations.

Behind the new Watson services lies IBM Watson Foundations, described as a comprehensive, integrated set of big data and analytics capabilities that enable enterprises to find and capitalize on actionable insights. Basically, it amounts to a set of user tools and capabilities to tap into all relevant data – regardless of source or type – and run analytics to gain fresh insights in real-time. And it does so securely across any part of an enterprise, including revenue generation, marketing, finance, risk, and operations.  Watson Foundations also includes business analytics with predictive and decision management capabilities, information management with in-memory and stream computing, and enterprise content management packaged into modular offerings. As such it enables organizations of any size to address immediate needs for decision support, gain sustainable value from their initial investments, and grow from there.

This apparently sounded good to Singapore’s DBS Bank, which will deploy Watson cognitive computing capabilities to deliver a next-generation client experience. For starters, DBS intends to apply Watson to its wealth management business to improve the advice and experience delivered to affluent customers. The bank is counting on cloud-based Watson to process enormous amounts of information with the ability to understand and learn from each interaction at unprecedented speed. This should greatly increase the bank’s ability to quickly analyze, understand, and respond to the vast amounts of data it is accumulating.

Specifically, DBS will deploy IBM’s cloud-based Watson Engagement Advisor solution, to be rolled out in the second half of the year. From there the bank reportedly plans to progressively deploy these capabilities to its other businesses over time.

For fans of cognitive computing and Watson, the announcements represent a much-awaited evolution in IBM’s strategy, one that promises to make cognitive computing and the natural language power of Watson usable for mainstream enterprises. How excited fans should get, however, depends on the specifics of IBM’s pricing and packaging for these offerings. Still, with a $1 billion investment to recoup, don’t expect loss-leader pricing from IBM.

Follow DancingDinosaur on Twitter: @mainframeblog

Meet the Power 795—the RISC Mainframe

December 16, 2013

The IBM Power 795 could be considered a RISC mainframe. A deep-dive session on the Power 795 at Enterprise 2013 in early October, presented by Patrick O’Rourke, didn’t call the machine a mainframe. But when he walked attendees through its specifications, features, capabilities, architecture, and design, it certainly looked like one.

Start with the latest enhancements to the POWER7 chip (a quick back-of-the-envelope tally of what these numbers add up to follows the list):

  • Eight processor cores per chip, each with 12 execution units and 4-way SMT (up to 4 threads per core, 32 threads per chip)
  • Cache: 32 KB L1 instruction and 32 KB L1 data per core, 256 KB L2 per core, and a shared 32 MB on-chip eDRAM L3
  • Dual DDR3 memory controllers delivering 100 GB/s of memory bandwidth per chip
  • Scalability up to 32 sockets, with 360 GB/s of SMP bandwidth per chip and 20,000 coherent operations in flight
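Those per-chip figures multiply out quickly at full scale. Here is a minimal back-of-the-envelope sketch in Python; the per-chip numbers come straight from the list above, and the totals are simple arithmetic, not IBM specifications for any particular Power 795 configuration:

```python
# Back-of-the-envelope totals for a maximum 32-socket POWER7 configuration.
# Per-chip figures come from the list above; the totals are simple arithmetic,
# not an IBM specification for any particular Power 795 configuration.

CORES_PER_CHIP = 8
THREADS_PER_CORE = 4            # 4-way SMT
MEM_BW_PER_CHIP_GBS = 100       # GB/s from the dual DDR3 memory controllers
SOCKETS = 32                    # maximum scalability

cores = CORES_PER_CHIP * SOCKETS                 # 256 cores
threads = cores * THREADS_PER_CORE               # 1,024 hardware threads
mem_bw_gbs = MEM_BW_PER_CHIP_GBS * SOCKETS       # 3,200 GB/s aggregate

print(f"{cores} cores, {threads} threads, ~{mem_bw_gbs} GB/s aggregate memory bandwidth")
```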

Built on POWER7 and slated to be upgraded to POWER8 by the end of 2014, the Power 795 boasts a number of new features:

  • New memory options
  • New 64 GB DIMMs enabling up to 16 TB of memory
  • New hybrid I/O adapters delivering Gen2 I/O connections
  • No-charge elastic processor and memory days
  • PowerVM support for up to 20 LPARs per core

And at 4.2 GHz, the Power 795 matches the clock speed of the zBC12 and starts to approach the 5.5 GHz of the zEC12.

IBM has also built increased flexibility into the Power 795, starting with turbo mode, which allows users to turn cores on and off as they manage power consumption and performance. IBM also has enhanced the concept of Power pools, which lets users group systems into compute clusters by setting up and moving processor and memory activations within a defined pool of systems at their convenience. With the Power 795, pool activations can be moved at any time without contacting IBM, and the movement is instant, dynamic, and non-disruptive. Finally, there is no limit to the number of times activations can be moved. Enterprise pools can include the Power 795, 780, and 770, and systems with different clock speeds can coexist in the same pool. Activation assignment and movement is controlled by the HMC, which also determines the maximum number of systems in any given pool.
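To make the pool mechanics concrete, here is a minimal illustrative sketch in Python. It models the concept only, purchased activations being reassigned among pooled systems at will, not IBM’s actual HMC interface; the system names and numbers are made up:

```python
# Illustrative model of a Power Enterprise Pool: purchased processor activations
# can be reassigned among member systems at any time, with no limit on moves.
# This is a sketch of the concept only, not IBM's actual HMC interface.

class EnterprisePool:
    def __init__(self, purchased_activations, systems):
        self.purchased = purchased_activations
        self.assigned = {name: 0 for name in systems}   # e.g. Power 795, 780, 770

    def assign(self, cores, system):
        """Assign unused purchased activations to a system in the pool."""
        if sum(self.assigned.values()) + cores > self.purchased:
            raise ValueError("not enough purchased activations in the pool")
        self.assigned[system] += cores

    def move(self, cores, src, dst):
        """Move activations between systems: instant, dynamic, non-disruptive."""
        if self.assigned[src] < cores:
            raise ValueError(f"{src} has only {self.assigned[src]} activations")
        self.assigned[src] -= cores
        self.assigned[dst] += cores

pool = EnterprisePool(64, ["p795", "p780", "p770"])
pool.assign(48, "p795")
pool.move(16, "p795", "p780")      # no call to IBM required
print(pool.assigned)               # {'p795': 32, 'p780': 16, 'p770': 0}
```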

The Power 795 provides three flavors of capacity on demand (CoD). One is for organizations that know they will need extra capacity over time and want to turn it on through simple permanent activations. Another is intended for organizations that need extra capacity at predictable times, such as the end of the quarter, and want to pay for it on a daily basis. Finally, there is a flavor for organizations that experience unpredictable short bursts of activity and prefer to pay for the additional capacity by the minute. Actually, there are more than these three basic flavors of CoD, but these three will cover the needs of most organizations.
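Choosing among the flavors is essentially a break-even calculation. Here is a minimal sketch with entirely hypothetical prices; actual IBM CoD pricing varies by machine, configuration, and contract:

```python
# Hypothetical break-even comparison of capacity on demand (CoD) flavors.
# All prices below are made up for illustration; actual IBM pricing varies.

permanent_activation = 30000.0   # one-time cost to permanently activate a core
daily_rate = 200.0               # cost per core-day of temporary capacity
per_minute_rate = 0.50           # cost per core-minute of elastic capacity

# Rough break-even points between the flavors:
breakeven_core_days = permanent_activation / daily_rate       # 150 core-days
breakeven_minutes_per_day = daily_rate / per_minute_rate      # 400 minutes

print(f"Permanent activation pays off after ~{breakeven_core_days:.0f} core-days of use")
print(f"Daily CoD beats per-minute once you need more than "
      f"{breakeven_minutes_per_day:.0f} minutes of extra capacity in a day")
```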

And like a mainframe, the Power 795 comes with extensive hardware redundancy. OK, the Power 795 isn’t a mainframe: it doesn’t run z/OS and it doesn’t do hybrid computing. But if you don’t run z/OS workloads and aren’t planning on running hybrid workloads, yet still want the scalability, flexibility, reliability, and performance of a System z, the Power 795 might prove very interesting indeed. And when the POWER8 processor is added to the mix, the performance should go off the charts. This is a worthy candidate for enterprise systems consolidation.

The zEnterprise as a Hybrid Data Center

November 21, 2013

There is no doubt that the zEnterprise enables hybrid computing. Just attach a zBX to it and start plugging in Linux and x86 blades; presto, you’ve got hybrid computing.  You can manage this entire hybrid infrastructure via the Unified Resource Manager.

The zEnterprise also has a sister hybrid computing platform, IBM PureSystems. Here, too, you can add in System x and Linux, or even Power and System i, and do hybrid computing. You can also manage the hybrid environment through a single console, albeit a different one—the Flex System Manager—and run this second IBM hybrid platform as a unified environment. DancingDinosaur has noted the irony of IBM having two different, incompatible hybrid systems; IBM has reassured this blogger several times that it is trying to converge the two. Whenever that happens, DancingDinosaur will be the first to report it.

The zEnterprise or even PureSystems as a hybrid computing platform, however, is not the same as a hybrid data center.  Apparently there is no definition of a hybrid data center despite all the talk about hybrid computing, hybrid clouds, and hybrid systems.  As best DancingDinosaur can piece it together, the hybrid data center is multiplatform like the zEnterprise, but it also is multi-location, often using co-location facilities or factory-built containerized data centers (IBM calls them Portable Modular Data Centers, PMDC). More often, however, hybrid data centers are associated with cloud computing as the third of the three flavors of cloud (private, public, hybrid).

Gartner recently described some architecture options for a hybrid data center. In one case you could have a zEnterprise acting as, say, a private cloud, using a co-location facility as a DMZ between the private cloud and a public cloud like Amazon. It’s not clear you would need the DMZ if your private cloud were running on the highly secure zEnterprise, but Gartner included it. Go figure.

Hybrid showed up in numerous Enterprise 2013 sessions this past October; you can catch some video highlights here. Several of those sessions, some noted in previous DancingDinosaur posts, dealt with hybrid computing, such as Exploring the World of zEnterprise Hybrid: How Does It Work and What’s the Point? The session introduced the Unified Resource Manager and described how it allows an IT shop to manage a collection of one or more zEnterprise nodes, including any optionally attached zBX cabinets, as a single logical virtualized system through a Hardware Management Console (HMC). In short, it was about providing a single point of control through which data center personnel can deploy, configure, monitor, manage, and maintain the integrated System z and zBX blades, based on heterogeneous architectures, in a unified manner. But it wasn’t talking about the hybrid enterprise data center described in the previous paragraph.

Similarly, Application Performance Management and Capacity Planning for the IBM zEnterprise Hybrid Workload focused on extending the Unified Resource Manager to goal-oriented performance management for both traditional System z and BladeCenter applications. It was about applying WLM, RMF, and Platform Performance Management to cross-platform hybrid applications. Again, this really wasn’t about the hybrid data center described above.

BTW, plans apparently already are underway for Enterprise 2014. Looks like it will be Oct. 6-10 at the Venetian in Las Vegas. It should be quite an event given that IBM will be celebrating the 50th anniversary of the mainframe in 2014.

And there is much more on z hybrid computing and hybrid clouds. The zEnterprise has its own page on cloud computing here, and last month the zEnterprise zBC12 won a CRN Tech Innovator Award for the most innovative cloud solution. You can also click here to see how a dozen IBM customers used various IBM platforms to build hybrid clouds.

IBM has already used the zEnterprise to consolidate over 30,000 servers around the world for an 84% improvement in data center efficiency and a 50% reduction in power and cooling. This effectively freed $1 billion to spend on innovative new projects that drive business growth across the company. And IBM is about as hybrid a data center as you can find.

Follow DancingDinosaur on Twitter, @mainframeblog

Goodbye Itanium, zEnterprise Continues to Grow

November 8, 2013

The HP announcement earlier this week wasn’t specifically the death knell for Itanium-based systems, but it might just as well have been. Rather, HP disclosed plans to extend the HP NonStop architecture to the Intel x86 platform. With NonStop available on x86 servers, why would anyone even consider the Itanium platform?

Meanwhile, at an IBM analyst briefing at Enterprise 2013 and again this week, IBM rattled off growth figures for the zEnterprise: 56% MIPS growth and 6% revenue growth year-to-year, over 230 new z accounts since the introduction of the hybrid zEnterprise, and over 290 hybrid computing devices shipped including over 200 zBX cabinets.  Linux on z continues to penetrate the mainframe world with 80% of the top 100 mainframe enterprises having IFLs installed. But maybe the best sign of the vitality of the zEnterprise was the news that 33 new ISVs brought product to the z platform in 3Q2013.

Another sign of zEnterprise vitality: over 65,000 students have entered the Master the Mainframe competition in the last 8 years. In addition, over 1,000 universities are teaching curricula related to mainframe topics. Worried that you will not be able to find mainframe talent going forward? You probably never thought the mainframe would be cool.

Recruiters from Cigna, Fidelity, JP Morgan Chase, Baldor, Dillard’s, Walmart, and more have been actively recruiting at schools participating in the Academic Initiative. For example, a senior business leader for switching systems at Visa described the Academic Initiative as a critical success factor and a lifeline for the company’s future.

With regard to the Itanium platform, HP’s announcement is more about trying to salvage the NonStop operating system than to save the Itanium server business.  “Extending HP NonStop to an x86 server platform shows a deep level of investment in maintaining the NonStop technology for mission-critical workloads in financial markets, telecommunications and other industries. At the same time, it brings new levels of availability to the x86-based standardized data center infrastructure,” said Jean Bozman, IDC research VP in the HP announcement.

Certainly, for those organizations that require continuous operations on x86, the HP move will be a boon. Otherwise, high availability on x86 has always been something of a kluge. But don’t expect HP to get anything running overnight. This is just the latest step in a multi-year HP effort underway since 2011, and it will probably be another two years before everything gets ported and fully tested. HP promises to help customers with the migration.

DancingDinosaur’s advice to NonStop customers that are frustrated by the underwhelming performance of Itanium systems today: Jump to the zEnterprise, either zEC12 or zBC12. You are almost certain to qualify for one of the deeply discounted System z Solution Edition deals (includes hardware, software, middleware, and 3 years of maintenance).  And something like IBM’s Migration Factory can help you get there. If it has taken HP two years to get this far, you can probably be up and running on z long before they get the first lines of NonStop code ported to x86.

Meanwhile, the System z team hasn’t been twiddling their collective thumbs.  In addition to introducing the zBC12 in July (shipped in Sept.) and absorbing the CSL International acquisition, which should prove quite valuable in z cloud initiatives, there has been a new IBM Entry Cloud Configuration for SAP Solutions on zEnterprise, a version of IBM Cognos TM1 for financial planning, and improved enterprise key management capabilities based on the Crypto Analytics Tool and the Advanced Crypto Services Provider.

System z growth led the enterprise server pack in the Gartner and IDC quarterly tabulations. Ironically, HP did well too with worldwide server shipments growing by more than 5% in the third quarter, halting a slump of eight consecutive quarters of shipment declines, according to preliminary market data from Gartner. Still, DancingDinosaur doesn’t think anyone will miss Itanium.

Follow DancingDinosaur on Twitter, @mainframeblog

Technology Change is Coming for the zBX

November 1, 2013

The zBX hasn’t seen many big announcements this year. Maybe the most notable was a quiet one: the zBX will connect to the zBC12, the newest System z machine announced early in the summer. Buried deep in that July announcement was word that, starting in Sept. 2013, you could attach the IBM zBX Model 003 to the new machine. Machines older than the zEC12 need the zBX Model 002.

At Enterprise 2013, however, the zBX managed to grab a little of the spotlight in a session by Harv Emery titled IBM zEnterprise BladeCenter Extension Model 3 and Model 2 Deep Dive Update. OK, it’s not exactly a riveting title, but Emery’s 60 slides were packed with far more detail than can possibly fit here.

To summarize: a slew of software and firmware updates will be coming through the end of this year and into 2014. Similarly, starting next year, IBM will begin withdrawing older zBX hardware from marketing and eventually from support. This is standard IBM practice; what makes it notable is the realization that the zBX no longer is the new kid on the scene. PureSystems, in their various iterations, are the sexy newcomer. As of the end of last year somewhat over 200 z hybrid units (zBX cabinets) had been sold, along with considerably more blades. Again, PureSystems are IBM’s other hybrid platform.

Still, as Emery pointed out, new zBX functionality continues to roll out. This includes:

  • CPU management for x86 blades
  • Support for Windows Server 2012 and current LDP OS releases
  • GDPS automated site recovery for zBX
  • Ensemble Availability Manager for improved monitoring and reporting
  • Support for Layer 2 communications
  • An IBM statement of direction (SOD) on support for the next-generation DataPower Virtual Appliance XI52
  • Support for next-generation hardware technologies in the zBX
  • zBX firmware currency
  • A stand-alone zBX node to preserve the investment
  • Bolstered networking, including a new BNT Virtual Fabric 10 GbE Switch
  • A zBX integrated hypervisor for IBM System x blades, running KVM

Emery also did a little crystal balling about future capabilities, relying partly on recent IBM SODs. These include:

  • Support of zBX with the next generation server
  • New technology configuration extensions in the zBX
  • Continued CEC and zBX investment in virtualization and management capabilities for the hybrid computing environment
  • Enablement of Infrastructure as a Service (IaaS) for cloud
  • Unified Resource Manager improvements and extensions for guest mobility
  • More monitoring instrumentation
  • Autonomic management functions
  • Integration with the STG Portfolio
  • Continued efforts by zEnterprise and STG to leverage the Tivoli portfolio to deliver enterprise-wide management capabilities across all STG systems

DancingDinosaur periodically has been asked how to handle storage for the zBX and the blades it contains. Emery tried to address some of those questions. Certain blades, DataPower for example, now come with their own storage and don’t need any outside storage on the host z. Through the top-of-rack switch in the zBX you can connect to a distributed SAN.

Emery also noted the latest supported storage devices. Supported IBM storage products as of Sept. 2013 include: DS3400, 3500, 3950, 4100, 4200, 4700, 4800, 5020, 5100, 5300, 6000, 8100, 8300, 8700, 8800, SVC 2145, XIV, 2105, 2107, and Storwize V7000. Non-IBM storage is possible, but you or your OEM storage vendor will have to figure it out.

Finally, Emery made numerous references to Unified Resource Manager (or zManager, although it manages more than z) for the zBX and Flex System Manager for PureSystems.  Right now IBM tries to bridge the two systems with higher level management from Tivoli.  Another possibility, Emery hinted, is OpenStack to unify hybrid management. Sounds very intriguing, especially given IBM’s announced intention to make extensive use of OpenStack. Is there an interoperable OpenStack version of Unified Resource Manager and Flex System Manager in the works?

Follow DancingDinosaur on Twitter, @mainframeblog.

Enterprise 2013 Details System z and Power Technology and New Capabilities

October 25, 2013

IBM announced a lot of goodies for z and Power users at Enterprise 2013 wrapping up in Orlando today. There were no blockbuster announcements, like a new z machine—we’re probably 12-18 months away from that and even then the first will likely focus on Power8—but it brought a slew of announcements nonetheless. For a full rundown on what was announced click here.

Cloud and analytics—not surprisingly—loom large. For example, Hadoop and a variety of other capabilities have been newly cobbled together, integrated, optimized, and presented as new big data offerings or as new cloud solutions.  This was exemplified by a new Cognos offering for CFOs needing to create, analyze and manage sophisticated financial plans that can provide greater visibility into enterprise profitability or the lack thereof.

Another announcement featured a new IBM Entry Cloud Configuration for SAP on zEnterprise. This is a cloud-enablement offering combining high-performance technology and services to automate, standardize and accelerate day-to-day SAP operations for reduced operational costs and increased ROI. Services also were big at the conference.

Kicking off the event was a dive into data center economics by Steve Mills, Senior Vice President & Group Executive, IBM Software & Systems. Part of the challenge of optimizing IT economics, he noted, was that the IT environment is cumulative. Enterprises keep picking up more systems, hardware and software, as new needs arise but nothing goes away or gets rationalized in any meaningful way.

Between 2000 and 2010, Mills noted, servers had grown at a 6x rate while storage grew at a 69x rate. Virtual machines, meanwhile, were multiplying at the rate of 42% per year. Does anyone see a potential problem here?

Mills’ suggestion: virtualize and consolidate, and specifically, consolidate onto large servers. His argument goes like this: most workloads experience variance in demand. But when you consolidate workloads with variance on a virtualized server, the variance of the sum, relative to total demand, is less, thanks to statistical multiplexing (which fits workloads into the gaps created by the variances). Furthermore, the more workloads you consolidate, the smaller that relative variance becomes. His conclusion: bigger servers with capacity to run more workloads can be driven to higher average utilization levels without violating service level agreements, thereby reducing the cost per workload. Finally, the larger the shared processor pool, the more statistical benefit you get.
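Mills’ argument is easy to sanity-check with a toy simulation. The sketch below, in Python with made-up workload parameters, generates bursty, independent demands and shows that the peak-to-average ratio of the consolidated total falls as more workloads share the server, which is the statistical multiplexing effect he described:

```python
# Toy illustration of statistical multiplexing: as more independent, bursty
# workloads are consolidated, the peak of the total demand grows more slowly
# than its average, so a bigger shared server can run at higher average
# utilization without violating SLAs. Workload parameters are made up.

import random

random.seed(42)

def peak_to_average(n_workloads, samples=10_000, mean=10.0, sigma=5.0):
    """Return the peak/average ratio of total demand for n consolidated workloads."""
    totals = []
    for _ in range(samples):
        total = sum(max(0.0, random.gauss(mean, sigma)) for _ in range(n_workloads))
        totals.append(total)
    avg = sum(totals) / samples
    peak = sorted(totals)[int(samples * 0.99)]   # 99th percentile as the "peak"
    return peak / avg

for n in (1, 4, 16, 64):
    print(f"{n:3d} workloads: peak/average = {peak_to_average(n):.2f}")
# Typical output: the ratio falls from roughly 2.2 toward 1.1 as n grows,
# meaning less headroom must be reserved per workload on the bigger server.
```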

On the basis of statistical multiplexing, the zEnterprise and the Power 795 are ideal choices for this. Depending on your workloads, just load up the host server, a System z or a big Power box, with as many cores as you can afford and consolidate as many workloads as practical.

Mills’ other cost savings tips: use flash to avoid the cost and complexity of disk storage. Also, eliminate duplicate applications—the fewer you run, the lower the cost. In short, elimination is the clearest path to saving money in the data center.

To illustrate the point, Jim Tussing from Nationwide described how the company virtualized and consolidated 60% of its 10,500 servers onto a few mainframes and saved $46 million over five years. It also allowed the company to delay the need for an additional data center for four years.

See, if DancingDinosaur were an actual data center manager, it could have justified attendance at the entire conference based on the economic tips from just one of the opening keynotes and spent the rest of the conference playing golf. Of course, DancingDinosaur doesn’t play golf, so it sat in numerous program sessions instead, which you will hear more about in coming weeks.

You can follow DancingDinosaur on twitter, @mainframeblog

Social Business and Linux on System z at Enterprise 2013

October 17, 2013

The Enterprise 2013 conference next week in Orlando is sold out! However, you can still participate and learn from the sessions through Livestream, which starts Monday morning (8am ET) with two IBM senior VPs: Tom Rosamilia, followed by Steve Mills. On Tuesday Livestream sessions start at 10:30am ET. Check out the full Livestream schedule here.

Let’s expand on the social business topics to be covered at the conference. Building a Social Environment in an Enterprise Private Cloud looks at the advantages of building a social environment in an on-premise private cloud, exploiting System z where practical. The hybrid System z models seem particularly well suited for this, and the TCO should be quite favorable. Daily Business can Profit from Social Networks for System z looks at how to exploit social on the z to keep current with news and events of importance to the organization and its customers through Twitter and other social networks. Finally, Gaining Competitive Advantage with Social Business separates the social hype from the facts. The session keys in on utilizing social business relationships to help you achieve competitive advantages.

DancingDinosaur has long considered Linux on z the single most important thing IBM did to save the mainframe from a future as a niche product serving mainly big banks and financial services firms. Today, the mainframe is the center of a hybrid computing world that can do anything business strategists want to do—mobile, cloud, open systems, Linux, Windows. Linux, the key to that, has been slow to catch on but is steadily gaining traction. At Enterprise 2013 you can see, to paraphrase a Clint Eastwood movie title: the Good, the Great, and the Ugly of Linux on System z.

Linux on System z: Controlling the Proliferating Penguin presents Mike Riggs, Manager of Systems and Database Administration at the Supreme Court of Virginia, sharing his experiences leveraging the power of Linux on System z by utilizing WebSphere, DB2, Oracle, and Java applications in concert with the longstanding success of z/VM, z/VSE, CICS applications, and other platform systems. He will explain how a funds-limited judicial branch of a state government is leveraging all possible resources to manage, grow, and support statewide judicial application systems.

What’s New with Linux on System z provides an overview of Linux on System z. It will show Linux as a very active open source project and offer insight into what makes Linux so special. It also looks at both the latest and upcoming features of the Linux kernel and what these features can do for you.

From there, you can attend the session on Why Linux on System z Saves $$, which will help you build the business case for Linux on z. The presenter, Buzz Woeckener, Director of IT at Nationwide Insurance, will pepper you with facts, disprove some myths, and help you understand why Linux on System z is one of the best values in the marketplace today. DancingDinosaur has written on Nationwide’s Linux on z experience before; it is a great story.

Finally, here’s the ugly: Murphy’s Law Meets VM and Linux on System z. Murphy’s law observes that whatever can go wrong possibly (or probably, depending on your level of pessimism) will go wrong. This can also be the case in some unfortunate Linux on System z and z/VM proof of concepts or improperly configured production systems. Having been called into a number of these situations over the last couple of years, the speaker brings a lot of experience handling these problems. Where some sessions highlight successes, this one will present stories from the battlefield on what it took to get these projects back on track. It will show the mistakes and draw the lessons learned.

Plus there is networking, security, systems management, big data and analytics, development, and more. For those lucky enough to get space, you won’t be at a loss for what to do next. DancingDinosaur will be there Sunday through Thursday. If you see me, please feel welcome to introduce yourself.

Enterprise 2013—System z Storage, Hybrid Computing, Social and More

October 10, 2013

The session abstracts for the Enterprise 2013 System z program run 43 pages. Haven’t tallied the number of sessions offered, but there certainly are enough to keep you busy for the entire conference (Oct. 20-25, in Orlando, register now) and longer.

Just the storage-related sessions are wide-ranging, from DFSMS, which DancingDinosaur covered a few weeks back following the SHARE Boston event here, to the IBM Flash portfolio, System z Flash Express, dynamically provisioning Linux on z storage, capacity management, and more. For storage newcomers, there is even a two-part session on System z Storage Basics.

A storage session titled the Evolution of Space Management looks interesting. After the advent of System Managed Storage (SMS), the mainframe went decades without much change in the landscape of space management processing. Space management consisted of the standard three-tier hierarchy of primary storage (Level 0) and the two migration tiers, Migration Level 1 (disk) and Migration Level 2 (tape). This session examines recent advances in both tape and disk technologies that have dramatically changed that landscape and provided new opportunities for managing data on the z. Maybe they will add a level above primary called flash next year. The session will cover how the advances are evolving the space management hierarchy and what to consider when determining which solutions are best for your environment.

IBM has been going hog-wild with flash, the TMS acquisition playing no small part no doubt. Any number of sessions deal with flash storage. This one, IBM’s Flash Portfolio and Futures, seems particularly appealing. It takes a look at how IBM has acquired and improved upon flash technology over what amounts to eight generations of technology refinements. The session will look at how flash will play a major role across not only IBM’s storage products but IBM’s overall solution portfolio. Flash technology is changing the way companies are managing their data today and it is changing the way they understand and manage the economics of technology. This session also will cover how IBM plans to leverage flash in its roadmap moving forward.

Hybrid computing is another phenomenon that has swept over the z in recent years. For that reason this session looks especially interesting, Exploring the World of zEnterprise Hybrid: How Does It Work and What’s the Point? The IBM zEnterprise hybrid system introduces the Unified Resource Manager, allowing an IT shop to manage a collection of one or more zEnterprise nodes, including an optionally attached zBX loaded with blades for different platforms, as a single logical virtualized system through a single mainframe console. The mainframe can now act as the primary point of control through which data center personnel can deploy, configure, monitor, manage, and maintain the integrated System z and zBX blades based on heterogeneous architectures but in a unified manner. It amounts to a new world of blades and virtual servers with the z at the center of it.

Maybe one of the hardest things for traditional z data center managers to get their heads around is social business on the mainframe. But here it is: IBM DevOps Solution: Accelerating the Delivery of Multiplatform Applications looks at social business and mobile, along with big data and cloud technologies, as driving the demand for faster approaches to software delivery across all platforms, middleware, and devices. The ultimate goal is to push out more features in each release and get more releases out the door with confidence, while maintaining compliance and quality. To succeed, some cultural, process, and technology gaps must be addressed through tools from Rational.

IBM has even set itself up as a poster child for social business in another session, Social Business and Collaboration at IBM, which features the current deployment within IBM of its social business and collaboration environments. Major core components are currently deployed on System z. The session will look at what IBM is doing, how it does it, and the advantages and benefits it has experienced.

Next week, the last DancingDinosaur posting before Enterprise 2013 begins will look at some other sessions, including software defined everything and Linux on z.

When DancingDinosaur first started writing about the mainframe over 20 years ago it was a big, powerful (for the time), solid performer that handled a few core tasks, did them remarkably well, and still does so today. At that time even the mainframe’s most ardent supporters didn’t imagine the wide variety of things it does now as can be found at Enterprise 2013.

Please follow DancingDinosaur and its sister blogs on Twitter, @mainframeblog.

Enterprise 2013 Offers a Packed Program of System z, zEnterprise, Linux on z Sessions

October 3, 2013

IBM’s Enterprise 2013 conference in Orlando is coming up soon, starting Oct. 21. It will combine the System z and the Power Systems technical universities with an Executive Summit. DancingDinosaur will be there and just had a chance to look over the System z session catalog, all 43 pages packed with interesting System z programs. Here are a few that should be of particular interest.

BYOD – The Return of Terminals. DancingDinosaur touched on this just a couple of weeks ago here. The session will delve into what it sees as an IT revolution, where mobile devices start replacing PCs the way PCs replaced terminals. So why is this good news for the mainframe data center? Because it brings control of end user computing back to the mainframe data center. Other mobile System z sessions look at ways to connect mobile apps to the z, the use of Worklight with the z, and the basics of enterprise mobile computing.

z/OS Applications – Adapting at the Speed of Business. The session looks at how to respond to business people hammering on your door to make changes to production applications immediately. Typically these changes are small, more about changing the business behavior of the application than any real structural change, or maybe they are timed to your business cycle. In any case, the session examines ways to handle those changes with shorter turnaround while also establishing a common terminology between you and the business analysts. IBM has decision management technology that can tightly integrate with your existing COBOL and PL/I applications to handle those changes. It’s sort of IBM’s version of DevOps for the z, although IBM also has dedicated DevOps solutions. Anyway, the result can be more stable applications performing as well as or better than they do now, while delivering the behavior the business wants. Specifically, the session will show how to use IBM Operational Decision Manager to make your z/OS applications more responsive to the ever-changing demands of the business teams.

Moving from a Legacy Mainframe System to a Modern Environment—a case study. Actually, Enterprise 2013 appears to be packed with case studies. Here Fidelity Investments will discuss how it moved from a legacy z system to a modernized, agile z-based environment that supports the requirements of its customers. The session will focus on how Fidelity used Rational tools on z to build out and deploy the new environment.

IBM DevOps Solution: Collaborative Development to Spark Innovation and Integration among Teams—as it turns out, Enterprise 2013 features a number of sessions focused on DevOps, which combines app dev with an agile deployment approach. The basic idea is that application development cannot be sustained in disjointed silos. New mobile, social, big data, and analytics projects demand a development process that is fast, integrated, creative, and affordable. Furthermore, business needs change quickly, making it necessary to re-prioritize work and shift resources to different projects efficiently. With advanced, productive, and unified development environments from Rational and middleware from CICS, this session will show you how you can apply talent across boundaries and keep the focus on innovation and high quality code development and test.

A related session is IBM DevOps Solution: Accelerating the Delivery of Multiplatform Applications. Here mobile, social, big data, and cloud technologies are driving the demand for faster and more recurrent approaches to software delivery across all platforms, middleware, and devices. The ultimate goal is to push out significantly more features in each release and get more releases out the door with confidence, while maintaining compliance and quality. DevOps is hot; if your shop hasn’t tuned into DevOps yet, Enterprise 2013 will be a good place to get up to speed.

Moving CICS Applications into the Cloud—the cloud is going to be increasingly central in almost all you do going forward and CICS has to be there. This session introduces CICS TS 5.1 as new infrastructure to increase your service agility and move towards a service delivery platform for cloud computing. For agile service delivery, CICS resources are packaged together, hosted as applications on a platform, and managed dynamically with policies. The latest release of CICS IA (Interdependency Analyzer) allows you to gain far greater insight into your applications and their dependencies while fine-tuning application performance and identifying bottlenecks. And then there is the CICS concept of a platform. Platforms provide services and resources so that applications can be rapidly deployed based on their requirements, combined with policies that govern the behavior of applications and platforms, whether tasks run as part of a platform, as an application, or as types of operations within an application. Expect many CICS sessions on every topic imaginable as CICS emerges as one of the central components of IBM’s expanded idea of the mainframe.

The Enterprise 2013 program is rich with System z material. DancingDinosaur will take up more of it next week. In the meantime, please register for the conference and feel welcome to introduce yourself to me at the event. You’ll find me wherever analysts, bloggers, and journalists hang out. Also, feel welcome to follow me on Twitter, @mainframeblog.

Latest BMC Mainframe Survey Points to Bright System z Future

September 27, 2013

BMC Software released its 8th annual mainframe survey, and the results shouldn’t surprise any readers of DancingDinosaur. Get a copy of the results here. The company surveyed over 1,000 managers and executives at mainframe shops around the world, mostly BMC customers. Nor should you be surprised at how remarkably traditional the respondents’ attitudes about the mainframe are.

For example, of the new areas identified by IBM as hot—mobile, cloud, big data, social business—cloud, big data, and mobile barely registered and social was nowhere to be seen.  Cloud was listed as one of the top four priorities by 19% of the respondents. Big data was listed as one of the top priorities for the coming year by only 18% of the respondents, the same as mobile.  The only topic that was less of a priority was outsourcing at 15%.

So what were the main priorities? The top four:

  • IT cost reduction—85% of respondents
  • Application availability—66%
  • Business/IT alignment—50%
  • Application modernization—50%

Where the researchers did drill down into one of the new areas of focus, big data, the largest group of respondents (31%) reported identifying the business use case as their biggest challenge. Other challenges were the cost of transforming/loading mainframe data to a centralized data warehouse (24%), followed by the effort such a transformation required (20%). Another 11% noted the lack of ETL tools for business analytics. Ten percent cited lack of knowledge about mainframe data content—huh? That might have been the one thing DancingDinosaur found truly surprising, although without knowing the specific job titles or job descriptions it might not be so surprising after all.

When it came to big data, 28% of the respondents expected to move mainframe data off the mainframe for analytics. An almost equal number (27%) expected the mainframe to act as the big data analytic engine.  Another 12% reported federating data to an off platform analytics engine. Three percent reported Linux on z for hosting the unstructured data.

Moving data off the mainframe for big data analytics can be a slow and costly strategy. One of the benefits of doing big data on the System z or the hybrid zEnterprise/zBX is taking advantage of the proximity of the data. Moving petabytes or even terabytes of data is not a trivial undertaking. For all the hype it’s clear that big data as a business strategy is still in its infancy with much left to be learned.  It will be interesting to see what this survey turns up a few years from now.

Otherwise, the survey results are very supportive of those fighting the seemingly perpetual battle against the notion of the mainframe as an end-of-life technology. Almost all the respondents (93%) considered the mainframe a long-term business strategy, while almost half (49%) felt the mainframe will continue to grow and attract new workloads.

Some other tidbits from the survey:

  • 70% of respondents said the mainframe will have a key role in Big Data plans.
  • 76% of large shops expect MIPS capacity to grow as they modernize and add applications to address business needs. (This highlights the need for software that minimizes expensive MIPS consumption and exploits the mainframe’s cost-efficient specialty engines.)

No large shops—and only 7% of all respondents overall—have plans to eliminate their mainframe environment. Glad it’s not worse.

Lastly, there still is time to register for IBM’s Enterprise 2013 conference in Orlando. It will combine the System z and the Power Systems technical universities with an Executive Summit. The session programs already are out for the System z and Power Systems tracks. Check out the System z overview here and the Power Systems overview here. DancingDinosaur will be there. In the coming weeks this blog will look more closely at some intriguing sessions.

BTW, please follow DancingDinosaur at its new name on Twitter, @mainframeblog

