Posts Tagged ‘zBC12’

Vodafone Spain Picks IBM zSystem for Smarter Cities Initiative

March 19, 2015

The Vodafone initiative, as reported here previously, leverages the most advanced mobile communications technology, including citywide sensors and a Global M2M Platform that will enable the connection of thousands of sensors to the intelligent Vodafone Connected City system. The new cloud-based system will run on IBM z Systems under Linux. The Linux z Systems were selected for their high security, which protects cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale. To do something at scale you really do want the z System.

Courtesy of IBM: zSystem and Linux

For Vodafone this represents the beginning of what it refers to as a Smarter Cities services initiative. The effort targets local governments and city councils with populations ranging between 20,000 and 200,000 citizens. The services provided will address customers’ needs in the following key areas: urban vitality, public lighting, energy efficiency, waste management, and citizen communications.

In effect, Vodafone is becoming a SaaS provider by leveraging its new zSystem. Vodafone’s customers for this are the government groups that opt to participate. The company announced the effort at Mobile World Congress in Barcelona at the beginning of the month.

One of the initial participants will be Seville, the capital of the province of Andalucía, where a control and development center will be established by Vodafone. The telco will invest more than 243 million euros over two years on telecommunications infrastructure, encouraging the development of the technology sector and developing projects to create strategic growth in the region.

Initially, the center will focus on creating smart city solutions that can easily and efficiently be used by cities ranging from 20,000 to 150,000 residents; cities that otherwise may not have the funds to invest in smart city infrastructure projects on their own. This center is also expected to help make the Andalucía territory of Spain a leader in the development of Big Data and smart solutions.

IBM is delivering the full stack to Vodafone: a set of cloud services that include an enterprise zSystem Linux server (IBM zBC12), Storwize V7000 storage, IBM Intelligent Operations, an information services solution, and more. Vodafone opted for the z and Linux to enable cost-efficient, highly secure cloud services while also delivering the speed, availability, and efficiency required to drive mobile services at scale. IBM Intelligent Operations software will provide monitoring and management of city services. IBM’s MobileFirst platform will be used to create citizen-facing mobile applications, while IBM Information Server and Maximo asset management software round out the IBM stack.

Overall, IBM, the zSystem, and Linux brought a number of benefits to this initiative. Specifically, the zSystem proved the least expensive when running more than seven vertical services as Vodafone is planning. An example of such a vertical service is the public lighting of a city. This also is where scalability brings a big advantage. Here again, the zSystem running Linux delivers scalability along with greater security and regulatory compliance. Finally, another critical capability for Vodafone was the zSystem’s ability to isolate workloads.

In short, the zSystem’s security and regulatory compliance; reliability, resilience, and robustness; strong encryption and workload isolation; workload management and ability to meet SLAs; scalability; and high efficiency clinched the Vodafone deal.

This could prove a big win for IBM and the zSystem. Vodafone has mobile operations in 26 countries, partners with mobile networks in 54 more, and runs fixed broadband operations in 17 markets. As of the end of 2014, Vodafone had 444 million mobile customers and 11.8 million fixed broadband customers. Vodafone Spain’s 14,811,000 mobile customers and 2,776,000 broadband customers will certainly take maximum advantage of the zSystem’s scalability and reliability.

…as a follow-up to last week’s report on recent successes coming out of the OpenPOWER Foundation, that string continued this week at the OpenPOWER Inaugural Summit, where the foundation announced more than ten hardware solutions spanning systems, boards, cards, and a new microprocessor customized for the Chinese market. Built collaboratively by OpenPOWER members, the new solutions exploit the POWER architecture to provide more choice, customization, and performance to customers, including hyperscale data centers.

Among the products and prototypes OpenPOWER members revealed are:

  • Firestone, a prototype of a new high-performance server targeting exascale computing and projected to be 5-10x faster than today’s supercomputers. It incorporates technology from NVIDIA and Mellanox.
  • The first GPU-accelerated OpenPOWER developer platform, the Cirrascale RM4950, resulting from collaboration between NVIDIA, Tyan, and Cirrascale.
  • An open server specification and motherboard mock-up from Rackspace that combines OpenPOWER, Open Compute, and OpenStack and is designed to run OpenStack services.

Other member-developed new products leverage the Coherent Accelerator Processor Interface (CAPI), a hallmark feature built into the POWER architecture. DancingDinosaur initially covered CAPI here.

Reminder: it is time to register for IBM Edge2015 in Las Vegas May 10-15. Edge2015 combines all of IBM’s infrastructure products with both a technical track and an executive track.  You can be sure DancingDinosaur will be there. Watch for upcoming posts here that will highlight some of the more interesting sessions.

DancingDinosaur is Alan Radding, a veteran IT analyst and writer. Follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing on Technologywriter.com and here.

Meet the Power 795—the RISC Mainframe

December 16, 2013

The IBM Power 795 could be considered a RISC mainframe. A deep-dive session on the Power 795 at Enterprise 2013 in early October, presented by Patrick O’Rourke, didn’t call the machine a mainframe. But when he walked attendees through the specifications, features, capabilities, architecture, and design of the machine, it certainly looked like a RISC mainframe.

Start with the latest enhancements to the POWER7 chip:

  • Eight processor cores, each with:
      ◦ 12 execution units per core
      ◦ 4-way SMT per core – up to 4 threads per core
      ◦ 32 threads per chip
      ◦ L1: 32 KB I-cache / 32 KB D-cache
      ◦ L2: 256 KB per core
      ◦ L3: 32 MB shared on-chip eDRAM
  • Dual DDR3 memory controllers
      ◦ 100 GB/s memory bandwidth per chip
  • Scalability up to 32 sockets
      ◦ 360 GB/s SMP bandwidth per chip
      ◦ 20,000 coherent operations in flight

Built on POWER7 and slated to be upgraded to POWER8 by the end of 2014, the Power 795 boasts a number of new features:

  • New Memory Options
  • New 64GB DIMMs enable up to 16TB of memory
  • New hybrid I/O adapters will deliver Gen2 I/O connections
  • No-charge Elastic processor and memory days
  • PowerVM will enable up to 20 LPARs per core

And running at 4.2 GHz, the Power 795’s clock speed starts to approach that of the zEC12 at 5.5 GHz while matching the clock speed of the zBC12.

IBM has also built increased flexibility into the Power 795, starting with turbo mode, which allows users to turn cores on and off as they manage power consumption and performance. IBM also has enhanced the concept of Power pools, which allows users to group systems into compute clusters by setting up and moving processor and memory activations within a defined pool of systems, at the user’s convenience. With the Power 795, pool activations can be moved at any time by the user without contacting IBM, and the movement of the activations is instant, dynamic, and non-disruptive. Finally, there is no limit to the number of times activations can be moved. Enterprise pools can include the Power 795, 780, and 770, and systems with different clock speeds can coexist in the same pool. The activation assignment and movement is controlled by the HMC, which also determines the maximum number of systems in any given pool.
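To picture the pool mechanics, here is a minimal toy model of moving activations between systems in a pool. It is a conceptual sketch only: the class and method names are hypothetical, and the real operation is driven through the HMC, not an API like this. The sketch just illustrates the two constraints implied above: the pool-wide activation total is conserved, and no system can activate more cores than it physically has installed.

```python
# Conceptual toy only: how core activations might move within an enterprise pool.
# Names are invented for illustration; the real mechanism is driven through the HMC.

class PoolSystem:
    def __init__(self, name, installed_cores, activated_cores):
        self.name = name
        self.installed_cores = installed_cores    # physical cores in the box
        self.activated_cores = activated_cores    # cores currently activated

class EnterprisePool:
    def __init__(self, systems):
        self.systems = {s.name: s for s in systems}

    def move_activations(self, source, target, cores):
        """Reassign activations within the pool: the pool-wide total is conserved,
        and no system can activate more cores than it has installed."""
        src, dst = self.systems[source], self.systems[target]
        if cores > src.activated_cores:
            raise ValueError("source does not hold that many activations")
        if dst.activated_cores + cores > dst.installed_cores:
            raise ValueError("target lacks installed capacity for the move")
        src.activated_cores -= cores
        dst.activated_cores += cores

# Example: shift 16 core activations from a Power 795 to a Power 780 in the same pool
pool = EnterprisePool([PoolSystem("p795", 256, 192), PoolSystem("p780", 96, 32)])
pool.move_activations("p795", "p780", 16)
print({n: s.activated_cores for n, s in pool.systems.items()})
# {'p795': 176, 'p780': 48}
```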

The Power 795 provides three flavors of capacity on demand (CoD). One flavor is for organizations that know they will need the extra capacity, which can be turned on through easy activation over time. Another is intended for organizations that know they will need extra capacity at predictable times, such as the end of the quarter, and want to pay for the added capacity on a daily basis. Finally, there is a flavor for organizations that experience unpredictable short bursts of activity and prefer to pay for the additional capacity by the minute. Actually, there are more than the three basic flavors of CoD above, but these three will cover the needs of most organizations.
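To see why the billing granularity matters, here is a rough comparison of the daily and by-the-minute flavors. The rates and workloads below are invented purely for illustration and are not IBM CoD prices.

```python
# Hypothetical illustration only: daily-billed vs. minute-billed temporary capacity.
# The rates are invented for the example and are not IBM CoD prices.

DAY_RATE_PER_CORE = 100.0       # hypothetical $ per core-day
MINUTE_RATE_PER_CORE = 0.25     # hypothetical $ per core-minute

def daily_cost(cores, burst_minutes):
    # Daily billing charges whole days, even for a brief burst.
    days = -(-burst_minutes // (24 * 60))   # ceiling division
    return cores * days * DAY_RATE_PER_CORE

def per_minute_cost(cores, burst_minutes):
    return cores * burst_minutes * MINUTE_RATE_PER_CORE

# A 90-minute spike on 8 extra cores favors by-the-minute billing
print(daily_cost(8, 90))        # 800.0 -> pays for a full day
print(per_minute_cost(8, 90))   # 180.0 -> pays only for the burst

# A predictable quarter-end crunch of 3 full days on 8 cores favors daily billing
print(daily_cost(8, 3 * 24 * 60))       # 2400.0
print(per_minute_cost(8, 3 * 24 * 60))  # 8640.0
```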

And like a mainframe, the Power 795 comes with extensive hardware redundancy. OK, the Power 795 isn’t a mainframe. It doesn’t run z/OS and it doesn’t do hybrid computing. But if you don’t run z/OS workloads and you’re not planning on running hybrid workloads, yet still want the scalability, flexibility, reliability, and performance of a System z, the Power 795 might prove very interesting indeed. And when the POWER8 processor is added to the mix, the performance should go off the charts. This is a worthy candidate for enterprise systems consolidation.

The zEnterprise as a Hybrid Data Center

November 21, 2013

There is no doubt that the zEnterprise enables hybrid computing. Just attach a zBX to it and start plugging in Linux and x86 blades; presto, you’ve got hybrid computing.  You can manage this entire hybrid infrastructure via the Unified Resource Manager.

The zEnterprise also has a sister hybrid computing platform, IBM PureSystems. Here, too, you can add in System x and Linux or even Power and System i and do hybrid computing. You can also manage the hybrid environment through a single console, albeit a different console—the Flex System Manager—and manage this second IBM hybrid platform as a unified environment. DancingDinosaur has noted the irony of IBM having two different, incompatible hybrid systems; IBM has reassured this blogger several times that it is trying to converge the two. Whenever that happens, DancingDinosaur will be the first to report it.

The zEnterprise or even PureSystems as a hybrid computing platform, however, is not the same as a hybrid data center.  Apparently there is no definition of a hybrid data center despite all the talk about hybrid computing, hybrid clouds, and hybrid systems.  As best DancingDinosaur can piece it together, the hybrid data center is multiplatform like the zEnterprise, but it also is multi-location, often using co-location facilities or factory-built containerized data centers (IBM calls them Portable Modular Data Centers, PMDC). More often, however, hybrid data centers are associated with cloud computing as the third of the three flavors of cloud (private, public, hybrid).

Gartner recently described some architecture options for a hybrid data center. In one case you could have a zEnterprise acting as, say, a private cloud, using a co-location facility as a DMZ between the private cloud and a public cloud like Amazon. Not sure, however, that you would need the DMZ if your private cloud were running on the highly secure zEnterprise, but Gartner included it. Go figure.

Hybrid showed up in numerous Enterprise 2013 sessions this past October. You can catch some video highlights from it here. The conference made frequent mention of hybrid in numerous sessions, some noted in previous DancingDinosaur posts, such as Exploring the World of zEnterprise Hybrid: How Does It Work and What’s the Point? The session introduced the Unified Resource Manager and described how it would allow an IT shop to manage a collection of one or more zEnterprise nodes including any optionally attached zBX cabinets as a single logical virtualized system through a Hardware Management Console (HMC). In short, it was about providing a single point of control through which data center personnel can deploy, configure, monitor, manage and maintain the integrated System z and zBX blades based on heterogeneous architectures in a unified manner. But it wasn’t talking about the hybrid enterprise data center described in the previous paragraph.

Similarly, Application Performance Management and Capacity Planning for the IBM zEnterprise Hybrid Workload focused on extending the Unified Resource Manager to goal-oriented performance management for both traditional System z and BladeCenter applications. It was about applying WLM, RMF, and Platform Performance Management to cross-platform hybrid applications. Again, this really wasn’t about the hybrid data center described above.

BTW, plans apparently already are underway for Enterprise 2014. Looks like it will be Oct. 6-10 at the Venetian in Las Vegas. It should be quite an event given that IBM will be celebrating the 50th anniversary of the mainframe in 2014.

And there is much more on z hybrid computing and hybrid clouds. The zEnterprise has its own page on cloud computing here, and last month the zEnterprise zBC12 won the CRN Tech Innovator Award for the most innovative cloud solution. You can also click here to see how a dozen IBM customers used various IBM platforms to build hybrid clouds.

IBM has already used the zEnterprise to consolidate over 30,000 servers around the world for an 84% improvement in data center efficiency and a 50% reduction in power and cooling. This effectively freed $1 billion to spend on innovative new projects that drive business growth across the company. And IBM is about as hybrid a data center as you can find.

Follow DancingDinosaur on Twitter, @mainframeblog

Goodbye Itanium, zEnterprise Continues to Grow

November 8, 2013

The HP announcement earlier this week wasn’t specifically the death knell for Itanium-based systems, but it just as well might have been. Rather, HP disclosed plans to extend the HP NonStop architecture to the Intel x86 platform.  With NonStop to be available on x86 servers, why would anyone even consider the Itanium platform?

Meanwhile, at an IBM analyst briefing at Enterprise 2013 and again this week, IBM rattled off growth figures for the zEnterprise: 56% MIPS growth and 6% revenue growth year-to-year, over 230 new z accounts since the introduction of the hybrid zEnterprise, and over 290 hybrid computing devices shipped including over 200 zBX cabinets.  Linux on z continues to penetrate the mainframe world with 80% of the top 100 mainframe enterprises having IFLs installed. But maybe the best sign of the vitality of the zEnterprise was the news that 33 new ISVs brought product to the z platform in 3Q2013.

Another sign of zEnterprise vitality: over 65,000 students entered the Master the Mainframe competition in the last 8 years. In addition, over 1,000 universities are teaching curricula related to mainframe topics. Are you worried that you will not be able to find mainframe talent going forward? You probably never thought that the mainframe would be cool.

Recruiters from Cigna, Fidelity, JP Morgan Chase, Baldor, Dillard’s, Wal-Mart, and more have been actively recruiting at schools participating in the Academic Initiative. For example, a senior business leader for switching systems at Visa described the Academic Initiative as a critical success factor and a lifeline for the company’s future.

With regard to the Itanium platform, HP’s announcement is more about trying to salvage the NonStop operating system than to save the Itanium server business.  “Extending HP NonStop to an x86 server platform shows a deep level of investment in maintaining the NonStop technology for mission-critical workloads in financial markets, telecommunications and other industries. At the same time, it brings new levels of availability to the x86-based standardized data center infrastructure,” said Jean Bozman, IDC research VP in the HP announcement.

Certainly for those organizations that require continuous operations on x86 the HP move will be a boon. Otherwise, high availability on x86 has always been something of a kluge. But don’t expect HP  to get anything running overnight.  This is just the latest step in a multi-year HP effort underway since 2011, and it will probably be another two years before everything gets ported and fully tested. HP promises to help customers with migration.

DancingDinosaur’s advice to NonStop customers that are frustrated by the underwhelming performance of Itanium systems today: Jump to the zEnterprise, either zEC12 or zBC12. You are almost certain to qualify for one of the deeply discounted System z Solution Edition deals (includes hardware, software, middleware, and 3 years of maintenance).  And something like IBM’s Migration Factory can help you get there. If it has taken HP two years to get this far, you can probably be up and running on z long before they get the first lines of NonStop code ported to x86.

Meanwhile, the System z team hasn’t been twiddling their collective thumbs.  In addition to introducing the zBC12 in July (shipped in Sept.) and absorbing the CSL International acquisition, which should prove quite valuable in z cloud initiatives, there has been a new IBM Entry Cloud Configuration for SAP Solutions on zEnterprise, a version of IBM Cognos TM1 for financial planning, and improved enterprise key management capabilities based on the Crypto Analytics Tool and the Advanced Crypto Services Provider.

System z growth led the enterprise server pack in the Gartner and IDC quarterly tabulations. Ironically, HP did well too with worldwide server shipments growing by more than 5% in the third quarter, halting a slump of eight consecutive quarters of shipment declines, according to preliminary market data from Gartner. Still, DancingDinosaur doesn’t think anyone will miss Itanium.

Follow DancingDinosaur on Twitter, @mainframeblog

Technology Change is Coming for the zBX

November 1, 2013

The zBX hasn’t been subject to much in the way of big new announcements this year.  Maybe the most obvious was a quiet announcement that the zBX would connect to the zBC12, the newest System z machine announced early in the summer. Buried deeply in that July announcement was that starting in Sept. 2013 you could attach the IBM zBX Model 003 to the new machine. Machines older than the zEC12 would need the zBX Model 002.

At Enterprise 2013, however, the zBX managed to grab a little of the spotlight in a session by Harv Emery titled IBM zEnterprise BladeCenter Extension Model 3 and Model 2 Deep Dive Update. OK, it’s not exactly a riveting title, but Emery’s 60 slides were packed with far more detail than can possibly fit here.

To summarize:  a slew of software and firmware updates will be coming through the end of this year and into 2014. Similarly, starting next year and beyond, IBM will begin to stop marketing older zBX hardware and eventually stop supporting the older stuff.  This is standard IBM practice; what makes it surprising is the realization that the zBX no longer is the new kid on the scene. PureSystems in their various iterations are the sexy newcomer.  As of the end of last year somewhat over 200 z hybrid units (zBX cabinets) had been sold along with considerably more blades. Again, PureSystems are IBM’s other hybrid platform.

Still, as Emery pointed out, new zBX functionality continues to roll out. This includes:

  • CPU management for x86 blades
  • Support for Windows Server 2012 and current LDP OS releases
  • GDPS automated site recovery for zBX
  • Ensemble Availability Manager for improved monitoring and reporting
  • Support for Layer 2 communications
  • An IBM statement of direction (SOD) on support for next generation DataPower Virtual Appliance XI52
  • Support for next generation hardware technologies in the zBX
  • zBX firmware currency
  • A stand-alone zBX node to preserve the investment
  • Bolstered networking including a new BNT Virtual Fabric 10 GbE Switch
  • zBX integrated hypervisor for IBM System x blades running KVM

Emery also did a little crystal balling about future capabilities, relying partly on recent IBM SODs. These include:

  • Support of zBX with the next generation server
  • New technology configuration extensions in the zBX
  • Continued CEC and zBX investment in virtualization and management capabilities for the hybrid computing environment
  • Enablement of Infrastructure as a Service (IaaS) for cloud
  • Unified Resource Manager improvements and extensions for guest mobility
  • More monitoring instrumentation
  • Autonomic management functions
  • Integration with the STG Portfolio
  • Continued efforts by zEnterprise and STG to leverage the Tivoli portfolio to deliver enterprise-wide management capabilities across all STG systems

DancingDinosaur periodically has been asked questions about how to handle storage for the zBX and the blades it contains. Emery tried to address some of those. Certain blades, DataPower for example, now come with their own storage and don’t need any outside storage on the host z. Through the top-of-rack switch in the zBX you can connect to a distributed SAN.

Emery also noted the latest supported storage devices. Supported IBM storage products as of Sept. 2013 include: DS3400, 3500, 3950, 4100, 4200, 4700, 4800, 5020, 5100, 5300, 6000, 8100, 8300, 8700, 8800, SVC 2145, XIV, 2105, 2107, and Storwize V7000. Non-IBM storage is possible, but you or your OEM storage vendor will have to figure it out.

Finally, Emery made numerous references to Unified Resource Manager (or zManager, although it manages more than z) for the zBX and Flex System Manager for PureSystems.  Right now IBM tries to bridge the two systems with higher level management from Tivoli.  Another possibility, Emery hinted, is OpenStack to unify hybrid management. Sounds very intriguing, especially given IBM’s announced intention to make extensive use of OpenStack. Is there an interoperable OpenStack version of Unified Resource Manager and Flex System Manager in the works?

Follow DancingDinosaur on Twitter, @mainframeblog.

SHARE Puts Spotlight on Mainframe Storage, Compression, and More

August 16, 2013

The mainframe crowd gathered this week at SHARE Boston for a dive into the intricacies of the mainframe.  Storage, especially through the MVS Storage Management Project, grabbed a lot of attention. One of the project’s primary ongoing activities—identifying new requirements and communicating their priority to IBM—clearly bore fruit as seen in Barbara McDonald’s DFSMS Latest and Greatest session.

McDonald started out by noting that many of the enhancements, maybe all, in one way or another address the explosive growth and management of customer data, driven by the surging interest in big data. To start, DFSMS exploits zEnterprise Data Compression (zEDC) Express, low-cost data compression for z/OS.

Specifically, zEDC, combined with a new hardware feature of the zEC12/zBC12 called zEDC Express, offers compression acceleration designed for high-performance, low-latency compression with little additional overhead. According to IBM it is optimized for use with large sequential files, and it uses zlib, an industry-standard compression library. zEDC is complementary to existing System z compression technology: smaller records and files should still use z/OS hardware compression instructions to get the best benefit, while larger files will be directed to zEDC.
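Since zEDC works in the industry-standard zlib (DEFLATE) format, a quick software-only sketch of the kind of compression being offloaded might look like the snippet below. This is plain zlib on any platform, not the zEDC Express feature itself, and the sample data is invented; on z/OS the offload happens transparently once zEDC is enabled for eligible workloads.

```python
# Minimal software illustration of zlib (DEFLATE) compression, the same
# industry-standard format zEDC accelerates in hardware on the zEC12/zBC12.
# This is ordinary Python zlib, not the zEDC Express card.
import zlib

# Simulate a large sequential file: repetitive, log-like records compress well.
record = b"2013-08-16,SHARE Boston,DFSMS,zEDC Express,compression test\n"
large_sequential_data = record * 100_000        # roughly 6 MB of records

compressed = zlib.compress(large_sequential_data, level=6)
ratio = len(large_sequential_data) / len(compressed)

print(f"original:   {len(large_sequential_data):>10,} bytes")
print(f"compressed: {len(compressed):>10,} bytes")
print(f"ratio:      {ratio:.1f}x")
```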

Support for additional access methods, specifically BSAM and QSAM, is planned for the end of the first quarter of 2014. This should save disk space and improve effective channel and network bandwidth without incurring significant CPU overhead.

The latest enhancements to DFSMS, among other things, also will enable increased data storage capacity and scalability, handle cross platform data and storage, and ensure data availability at all levels of the storage hierarchy.  They also enhance business resiliency through improved point-in-time copy, fast replication, and continuous data mirroring functions.

Although DFSMS proved interesting at SHARE and included dozens of enhancements, many suggested at previous SHARE conferences, there actually was much more going on. For example, a session on Real-time Compression (RtC), another innovative compression technology from IBM, drew a good crowd.

The RtC session was run by Steve Kenniston, IBM’s business line executive for data protection. RtC differs from zEDC, starting with different compression algorithms: RtC uses LZ where zEDC uses zlib. RtC also leverages over 35 IBM patents to enable it to do real-time compression of active, production data with no impact on either application or storage performance. Kenniston also noted that RtC, the industry’s only heterogeneous storage compression technology, now handles both block and file data. Find his RtC presentation here.

Kenniston drilled into the how and why of RtC and its key features, such as compressing active data on the fly. Some of it comes down to the difference between fixed and variable compression. Fixed compression, which is what most competitors do, has to add changes to the data at the end, which leads EMC and NetApp to quip that with RtC you could end up with a compressed file bigger than the original due to the adds at the end.

Not so. RtC sits in the data stream in front of the storage and sees all the data. All the like data is compressed as it is written together.  The data then is put into known fixed segments. It amounts to a journaled file system inside the compressed file. RtC tracks where the compressed segments are put. When compressed data is deleted, RtC knows where the data was in the compressed file and can put new like data in the now empty space. At worst, the RtC compressed data temporarily becomes a single I/O larger than it started.
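For readers who want that idea in concrete terms, here is a deliberately simplified toy of the “journaled file system inside the compressed file” notion: a store that tracks where compressed segments live and reuses freed segment slots for new like data. The structure and names are hypothetical and are not IBM’s RtC implementation.

```python
# Conceptual toy only: tracking compressed segments and reusing freed slots.
# This is not IBM Real-time Compression; names and layout are invented.
import zlib

class CompressedStore:
    def __init__(self):
        self.segments = {}    # segment id -> compressed bytes, or None if freed
        self.index = {}       # logical object name -> segment id
        self.next_id = 0

    def write(self, name, data):
        # Reuse a freed segment slot if one exists, otherwise append a new one,
        # so deletes leave known holes that new like data can fill in place.
        free = next((sid for sid, seg in self.segments.items() if seg is None), None)
        sid = free if free is not None else self.next_id
        if free is None:
            self.next_id += 1
        self.segments[sid] = zlib.compress(data)
        self.index[name] = sid

    def delete(self, name):
        sid = self.index.pop(name)
        self.segments[sid] = None     # remembered as reusable space

    def read(self, name):
        return zlib.decompress(self.segments[self.index[name]])

store = CompressedStore()
store.write("rec-A", b"alpha " * 1000)
store.write("rec-B", b"beta " * 1000)
store.delete("rec-A")
store.write("rec-C", b"gamma " * 1000)    # lands in rec-A's freed segment
print(store.read("rec-C")[:11])           # b'gamma gamma'
```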

RtC also uses time-based optimization rather than location-based optimization as IBM’s competitors do.  Since it sits in the data stream and understands how the data was created RtC can achieve greater optimization.  And since it doesn’t have to do reads before writes it can get better compression ratios. In one example Kenniston showed RtC delivering up to 5x compression while maintaining or improving database transaction response time and overall business throughput.

In general, companies can expect a 30%-40% reduction in $/GB for common configurations.  In one case, 168 900GB SAS drives representing 100TB listed for $757,880. Using RtC, you needed only 67 SAS drives to get equivalent capacity. The cost, including the RtC software license and three years of maintenance, was $460,851. The $297,000 difference amounted to a 40% savings.
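The arithmetic behind that example checks out; a quick back-of-the-envelope calculation using only the figures quoted above:

```python
# Back-of-the-envelope check of the RtC savings example cited above.
baseline_drives, baseline_cost = 168, 757_880   # 168 x 900GB SAS drives, list price
rtc_drives, rtc_cost = 67, 460_851              # includes RtC license + 3 yrs maintenance

savings = baseline_cost - rtc_cost
savings_pct = savings / baseline_cost * 100
drive_reduction_pct = (baseline_drives - rtc_drives) / baseline_drives * 100

print(f"savings: ${savings:,} ({savings_pct:.1f}%)")    # $297,029 (39.2%), rounded to 40% above
print(f"drives eliminated: {drive_reduction_pct:.0f}%") # 60% fewer spindles
```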

There was more interesting stuff at SHARE, including how the rise of BYOD smartphones and tablets could return the mainframe data center to a new version of 3270 computing, sort of an interesting back-to-the-future scenario. Will take that up and more in another posting.

