Posts Tagged ‘Google’

IBM Introduces Cloud Private to Hybrid Clouds

November 10, 2017

When you have enough technologies lying around your basement, sometimes you can cobble a few pieces together, mix them with some sexy new stuff and, bingo, you have something that meets a serious need for a number of disparate customers. That's essentially what IBM did with Cloud Private, which it announced Nov. 1.

IBM staff test Cloud Private automation software

IBM intended Cloud Private to enable companies to create on-premises cloud capabilities similar to public clouds in order to accelerate app dev. Don't think of it as just old stuff; the new platform is built on the open source Kubernetes-based container architecture and supports both Docker containers and Cloud Foundry. This facilitates integration and portability of workloads, enabling them to move to almost any cloud environment, including, especially, the public IBM Cloud.
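Because Cloud Private is built on standard Kubernetes, stock Kubernetes tooling should work against it unchanged. As a minimal sketch, assuming a kubeconfig already pointed at the cluster and using the official Kubernetes Python client, deploying a containerized WebSphere Liberty image might look like this (the image tag, app name, and port are illustrative assumptions, not IBM's documented procedure):

    from kubernetes import client, config

    config.load_kube_config()  # kubeconfig pointed at the Cloud Private cluster

    # Hypothetical deployment of a container-optimized WebSphere Liberty app
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="liberty-demo"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "liberty-demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "liberty-demo"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="liberty",
                        image="websphere-liberty:latest",  # assumed image tag
                        ports=[client.V1ContainerPort(container_port=9080)],
                    ),
                ]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Nothing in that sketch is Cloud Private-specific; the same deployment could be applied to the public IBM Cloud or any other Kubernetes environment, which is the portability claim in a nutshell.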

IBM also announced container-optimized versions of core enterprise software, including IBM WebSphere Liberty, Db2, and MQ, which are widely used to run and manage the world's most business-critical applications and data. This makes it easier to share data and evolve applications as needed across the IBM Cloud, private and public clouds, and other cloud environments with a consistent developer, administrator, and user experience.

Cloud Private amounts to a new software platform, which relies on open source container technology to unlock billions of dollars in core data and applications incorporating legacy software like WebSphere and Db2. The purpose is to extend cloud-native tools across public and private clouds. For z data centers that have tons of valuable, reliable working systems years away from being retired, if ever, Cloud Private may be just what they need.

Almost all enterprise systems vendors are pursuing the same hybrid cloud enablement: HPE, Microsoft, Cisco (which is partnering with Google on this), and more. This is a clear indication that the cloud, and especially the hybrid cloud, is crossing the proverbial chasm. In years past IT managers and C-level executives didn't want anything to do with the cloud; the IT folks saw it as a threat to their on-premises data centers, and the C-suite was scared witless about security.

Those issues haven't gone away, although the advent of hybrid clouds has mitigated some of the fears in both groups. Similarly, the natural evolution of the cloud and advances in hybrid cloud computing make adoption more practical.

The private cloud, too, is growing. According to IBM, while public cloud adoption continues to grow at a rapid pace, organizations, especially in the regulated industries of finance and health care, are continuing to leverage private clouds as part of their journey to public cloud environments to quickly launch and update applications. This also is what is driving hybrid clouds. IBM projects companies will spend more than $50 billion globally starting in 2017 to create and evolve private clouds, with growth rates of 15 to 20 percent a year through 2020.
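As a quick back-of-the-envelope check on that projection (my arithmetic, not IBM's), compounding $50 billion at 15 to 20 percent a year works out as follows:

    # IBM projection: $50B+ in 2017, growing 15-20% a year through 2020
    base = 50e9
    for year in range(2017, 2021):
        low = base * 1.15 ** (year - 2017)
        high = base * 1.20 ** (year - 2017)
        print(f"{year}: ${low / 1e9:.0f}B - ${high / 1e9:.0f}B")
    # 2020 lands at roughly $76B - $86B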

The problem facing IBM and the other enterprise systems vendors scrambling for hybrid clouds is how to transition legacy systems into cloud-native systems. The hybrid cloud in effect acts as facilitating middleware. “Innovation and adoption of public cloud services has been constrained by the challenge of transitioning complex enterprise systems and applications into a true cloud-native environment,” said Arvind Krishna, Senior Vice President for IBM Hybrid Cloud and Director of IBM Research. IBM's response is Cloud Private, which brings rapid application development and modernization to existing IT infrastructure while combining it with the services of a public cloud platform.

Hertz adopted this approach. “Private cloud is a must for many enterprises such as ours working to reduce or eliminate their dependence on internal data centers,” said Tyler Best, Hertz Chief Information Officer.  A strategy consisting of public, private and hybrid cloud is essential for large enterprises to effectively make the transition from legacy systems to cloud.

IBM is serious about cloud as a strategic initiative. Although not as large as Microsoft Azure or Amazon Web Services (AWS) in the public cloud, IBM is a major provider of private cloud services, which, a recent report by Synergy Research found, makes the company the third-largest overall cloud provider.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

Open POWER-Open Compute-POWER9 at Open Compute Summit

March 16, 2017

Bryan Talik, President of the OpenPOWER Foundation, provides a detailed rundown on the action at the Open Compute Summit held last week in Santa Clara. After weeks of writing about cognitive computing, machine learning, blockchain, and even quantum computing, it is a nice shift to conventional computing platforms, which should still be viewed as strategic initiatives.

The OpenPOWER and Open Compute gospel was filling the air in Santa Clara. As reported, Andy Walsh, Xilinx Director of Strategic Market Development and OpenPOWER Foundation Board member, explained: “We very much support open standards and the broad innovation they foster. Open Compute and OpenPOWER are catalysts in enabling new data center capabilities in computing, storage, and networking.”

Added Adam Smith, CEO of Alpha Data:  “Open standards and communities lead to rapid innovation…We are proud to support the latest advances of OpenPOWER accelerator technology featuring Xilinx FPGAs.”

John Zannos of Canonical, the OpenPOWER Board Chair, chimed in: For 2017, the OpenPOWER Board approved four areas of focus that include machine learning/AI, database and analytics, cloud applications, and containers. The strategy for 2017 also includes plans to extend OpenPOWER's reach worldwide and promote technical innovations at various academic labs and in industry. Finally, the group plans to open additional application-oriented workgroups to further technical solutions that benefit specific application areas.

Not surprisingly, some members even see collaboration as the key to satisfying the performance demands the computing market craves. “The computing industry is at an inflection point between conventional processing and specialized processing,” according to Aaron Sullivan, distinguished engineer at Rackspace.

To satisfy this shift, Rackspace and Google announced an OCP-OpenPOWER server platform last year, codenamed Zaius and Barreleye G2.  It is based on POWER9. At the OCP Summit, both companies put on a public display of the two products.

This server platform promises to meet the performance, bandwidth, and power consumption demands of emerging applications that leverage machine learning, cognitive systems, real-time analytics, and big data platforms. The OCP players plan to continue their work alongside Google, OpenPOWER, OpenCAPI, and other Zaius project members.

The Zaius and Barreleye G2 server platforms promise to advance the performance, bandwidth, and power consumption of emerging applications that leverage the latest advanced technologies. Those technologies are none other than the strategic imperatives (cognitive, machine learning, real-time analytics) IBM has been repeating like a mantra for months.

Open Compute projects also were on display at the Summit. Specifically, as reported, Google and Rackspace published the Zaius specification to Open Compute in October 2016 and had engineers on hand to explain the specification process and give attendees a starting point for their own server designs.

Other Open Compute members reportedly were there as well. Inventec showed a POWER9 OpenPOWER server based on the Zaius server specification. Mellanox showcased ConnectX-5, its next generation networking adaptor that features 100Gb/s InfiniBand and Ethernet. This adaptor supports PCIe Gen4 and CAPI 2.0, providing higher performance and a coherent connection to the POWER9 processor compared with PCIe Gen3.

Others, reported by Talik, included Wistron and E4 Computing, which showcased their newly announced OCP-form factor POWER8 server. Featuring two POWER8 processors, four NVIDIA Tesla P100 GPUs with the NVLink interconnect, and liquid cooling, the new platform represents an ideal OCP-compliant HPC system.

Talik also reported that IBM, Xilinx, and Alpha Data showed their lineups of FPGA adaptors designed for both POWER8 and POWER9. Featuring PCIe Gen3 and CAPI 1.0 for POWER8, and PCIe Gen4, CAPI 2.0, and a 25Gb/s CAPI 3.0 link for POWER9, these new FPGAs bring acceleration to a whole new level. OpenPOWER member engineers were on hand to provide information regarding the CAPI SNAP development and programming framework as well as OpenCAPI.

Not to be left out, Talik reported that IBM showcased products it previously tested and demonstrated: POWER8-based OCP and OpenPOWER Barreleye servers running IBM's Spectrum Scale software, a full-featured global parallel file system with roots in HPC that is now widely adopted in commercial enterprises across all industries for data management at petabyte scale. Guess “compute platform” isn't quite the dirty phrase IBM has been implying for months.

DancingDinosaur is Alan Radding, a veteran information technology analyst, writer, and ghost-writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

IBM Fires a Shot at Intel with its Latest POWER Roadmap

June 17, 2016

In case you worry that IBM will abandon hardware in the pursuit of its strategic initiatives focusing on cloud, mobile, analytics, and more, stop worrying. With the announcement of its POWER Roadmap at the OpenPOWER Summit earlier this spring, it appears POWER will be around for years to come. But IBM is not abandoning the strategic initiatives either; the new roadmap promises to support new types of workloads, such as real-time analytics, Linux, and hyperscale data centers, along with the current POWER workloads.

Pictured above: POWER9 Architecture, courtesy of IBM

Specifically, IBM is offering a denser roadmap, one not tied to a single process technology and not even tied solely to IBM. It draws on innovations from a handful of OpenPOWER Foundation members as well as support from Google. The new roadmap also signals IBM's intention to make a serious run at Intel's near monopoly on enterprise server processors by offering comparable or better price, performance, and features.

Google, for example, reports porting many of its popular web services to run on Power systems; its toolchain has been updated to output code for x86, ARM, or Power architectures with the flip of a configuration flag. Google, which strives to be everything to everybody, now has a highly viable alternative to Intel in terms of performance and price with POWER. At the OpenPOWER Summit early in the spring, Google made it clear it plans to build scale-out server solutions based on OpenPower.

Don't even think, however, that Google is abandoning Intel; the majority of its systems remain Intel-based. Still, POWER and the OpenPOWER community will provide a directly competitive processing alternative. To underscore the situation, Google and Rackspace announced they were working together on POWER9 server blueprints for the Open Compute Project, designs that reportedly are compatible with the 48V Open Compute racks Google and Facebook, another hyperscale data center operator, already are working on.

Google represents another proof point that OpenPOWER is ready for hyperscale data centers. DancingDinosaur, however, is most interested in what is coming from OpenPOWER that is new and sexy for enterprise data centers, since that is where most DancingDinosaur readers are focused. Of course, they still need ever better performance and scalability too. In that regard OpenPOWER has much in the works for them.

For starters, POWER8 is currently delivered as a 12-core, 22nm processor. POWER9, expected in 2017, will be delivered as a 14nm processor with 24 cores and CAPI and NVLink accelerators. That is sure to deliver more performance with greater energy efficiency. By 2018, the IBM roadmap shows POWER8/9 as a 10nm, maybe even 7nm, processor based on the existing micro-architecture.

The real POWER future, arriving around 2020, will feature a new micro-architecture, sport new features and functions, and bring new technology. Expect much, if not most, of the new functionality to come from various OpenPOWER Foundation partners.

POWER9, only a year or so out, promises a wealth of improvements in speeds and feeds. Although intended to serve the traditional Power server market, it also expands analytics capabilities and brings new deployment models for hyperscale, cloud, and technical computing through scale-out deployment, in both clustered and multi-system formats. It will feature a shorter pipeline, improved branch execution, lower-latency on-die cache, and PCIe Gen4.

Expect a 3x bandwidth improvement with POWER9 over POWER8 and a 33% speed increase. POWER9 also will continue to speed hardware acceleration and support next-gen NVLink, improved coherency, enhanced CAPI, and a 25 Gb/s high-speed link. Although the 2-socket chip will remain, IBM suggests larger socket counts are coming. It will need that to compete with Intel.

As a data center manager, will a POWER9 machine change your data center dynamics? Maybe; you decide: a dual-socket POWER9 server with 32 DDR4 memory slots, two NVLink slots, three PCIe Gen4 x16 slots, and a total of 44 cores. That's a lot of computing power in one rack.

Now IBM just has to crank out similar advances for the next z System (a z14 maybe?) through the Open Mainframe Project.

DancingDinosaur is Alan Radding, a veteran information technology analyst and writer. Please follow DancingDinosaur on Twitter, @mainframeblog. See more of his IT writing at technologywriter.com and here.

 

Put the Mainframe at the Heart of the Internet of Things

August 4, 2014

Does the Internet of Things (IoT) sound familiar? It should. Remember massive networks of ATMs connecting back to the mainframe?

The mainframe is poised to take on the IoT challenge, writes Advanced Software Products Group, Inc. (ASPG), a company specializing in mainframe software, in an interesting document called The Future of the Mainframe. Part of that future is the IoT, which IBM refers to in a Redbook Point of View as the Interconnecting of Everything.

In that Redbook the IoT is defined as a network of Internet-enabled, real-world objects—things—ranging from nanotechnology objects to consumer electronics, home appliances, sensors of all kinds, embedded systems, and personal mobile devices. The IoT also will encompass enabling network and communication technologies, such as IPv6 (needed for its unique address capacity), web services, RFID, and 4G networks.

The IoT Redbook cites industry predictions of upwards of 50 billion connected devices by 2020, a number 10x that of all current Internet hosts, including connected mobile phones. Based on that, the Redbook authors note two primary IoT scalability issues:

  1. The sheer number of connected devices, mainly the number of concurrent connections (throughput) a system can support and the quality of service (QoS) that can be delivered. Here, the authors note, Internet scalability is a critical factor. Currently, most Internet-connected devices use IPv4, which is based on a 32-bit addressing scheme. Clearly, the industry has to speed the transition to IPv6, which implements a 128-bit addressing scheme that can support up to 2^128 addresses, or roughly 3.4 x 10^38 devices, although some tweaking of the IPv6 standard is being proposed for IoT. (A quick back-of-the-envelope check appears after this list.)
  2. The volume of generated data and the performance issues associated with data collection, processing, storage, query, and display. IoT systems need to handle both device and data scalability issues. From a data standpoint, this is big data on steroids.
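To put those address-space numbers in perspective, here is that back-of-the-envelope check in plain Python (the 50 billion device figure is the Redbook's projection):

    # IPv4: 32-bit addressing, the space the IoT has already outgrown
    ipv4 = 2 ** 32                    # 4,294,967,296 addresses
    # IPv6: 128-bit addressing, per the Redbook
    ipv6 = 2 ** 128                   # about 3.4 x 10^38 addresses

    print(f"IPv4: {ipv4:,}")          # IPv4: 4,294,967,296
    print(f"IPv6: {ipv6:.3e}")        # IPv6: 3.403e+38

    # Even 50 billion devices by 2020 barely dents the IPv6 space
    devices = 50_000_000_000
    print(f"Fraction used: {devices / ipv6:.1e}")   # about 1.5e-28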

As ASPG noted in its paper cited above, the mainframe is well suited to provide a central platform for IoT. The zEnterprise has the power to connect large dispersed networks, capture and process the mountains of data produced every minute, and provide the security and privacy companies and individuals demand. In addition, it can accept, process, and interpret all that data in a useful way. In short, it may be the only general commercial computing platform powerful enough today to crunch vast quantities of data very quickly and is already proven to perform millions of transactions per second and do it securely.

Even with a top-end zEC12 configured to the hilt and proven to handle maximum transactions per second, you are not quite ready to handle the IoT as it is currently being envisioned. This IoT vision is much more heterogeneous in every dimension than the massive reservation, POS, or ATM networks the mainframe has proven itself with.

At least one major piece is still needed: an industry-wide standard that defines how the various devices capture myriad information for a diverse set of applications involving numerous vendors, and that ensures everything can communicate and exchange information in a meaningful way. Not surprisingly, the industry already is working on it.

Actually, maybe too many groups are. The IEEE points to a number of standards, projects, and activities it is involved with that address the creation of what it considers a vibrant IoT. The Open Interconnect Consortium, consisting of a slew of tech-industry heavyweights like Intel, Broadcom, and Samsung, hopes to develop standards and certification for devices involved in the IoT. Another group, the AllSeen Alliance, is promoting an open standard called AllJoyn with the goal of enabling ubiquitously connected devices. Even Google is getting into the act by opening up its Nest acquisition so developers can connect their various home devices (thermostats, security alarm controllers, garage door openers, and such) via a home IoT.

This will likely shake out the way IT standards usually do with several competing groups fighting it out. Probably too early to start placing bets. But you can be sure IBM will be right there. The company already has put an IoT stake in the ground here (as if the z wasn’t enough).  Whatever eventually shakes out, System z shops should be right in the middle of the IoT action.

Expect this to be a subject of discussion at the upcoming IBM Enterprise 2014 conference, Oct. 6-10 in Las Vegas. Your blogger expects to be there. DancingDinosaur is Alan Radding. Follow him on Twitter, @mainframeblog, or at Technologywriter.com.

3 Big Takeaways at IBM POWER8 Intro

April 24, 2014

POWER8 did not disappoint. IBM unveiled its latest generation of systems built on its new POWER8 technology on Wednesday, April 23.

DancingDinosaur sees three important takeaways from this announcement:

First, the OpenPOWER Foundation. It was introduced months ago and almost immediately forgotten. DancingDinosaur covered it at the time here. It had a handful of small partners. Only one was significant, Google, and it was hard to imagine Google bringing out open source POWER servers. Now the Foundation has several dozen members, and while it still is not clear what Google is doing there, the Foundation clearly is gaining traction. You can expect more companies to join the Foundation in the coming weeks and months.

With the Foundation, IBM swears it is committed to a true open ecosystem, one where even competitors can license the technology and bring out their own systems. At some point don't be surprised to see white box Power systems priced below IBM's. More likely in the short term will be specialized Power appliances. What you get as a Foundation member is the POWER SoC design, bus specifications, reference designs, and open source firmware and hypervisor. It also includes access to little endian Linux, which will ease the migration of software to POWER. BTW, Google is listed as a member focusing on open source firmware and on the cloud and high performance computing.

Second, the POWER8 processor itself and the new family of systems. The processor, designed for big data, will run more concurrent queries, and run them up to 50x faster than x86 with 4x more threads per core than x86. Its I/O bandwidth is 5x that of POWER7. It can handle 1TB of memory with 4-6x more memory bandwidth and more than 3x more on-chip cache than an x86. The processor itself utilizes 22nm circuits and runs at 2.5-5 GHz.

POWER8 sports an eight-threaded processor; each of the 12 cores in the CPU can coordinate the processing of eight sets of instructions at a time, for a total of 96 threads. Each thread consists of a set of related instructions making up a discrete process within a program. By designating sections of an application that can run as separate threads and coordinating the results, a chip can accomplish more work than a single-threaded chip, IBM explains. By comparison, IBM reports Intel's Ivy Bridge E5 Xeon CPUs have dual-threaded cores, with up to eight cores, handling 16 threads at a time (compared to 96 with POWER8). Yes, there is some coordination overhead incurred as more threads are added. Still, the POWER8 chip should attract interest among white box manufacturers and users of large numbers of servers processing big data.
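The thread math is easy to sanity-check (a trivial sketch, using the figures above):

    # POWER8: 12 cores x 8 hardware threads per core (SMT8)
    power8_threads = 12 * 8       # 96 concurrent threads
    # Ivy Bridge E5 Xeon: up to 8 cores x 2 threads (Hyper-Threading)
    xeon_threads = 8 * 2          # 16 concurrent threads
    print(power8_threads, xeon_threads, power8_threads // xeon_threads)  # 96 16 6

That 6x thread advantage, not raw clock speed, is where much of the claimed big data throughput comes from.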

Third is CAPI, your newest acronym. If something is going to be a game-changer, this will be it; the key is to watch for adoption. The Coherent Accelerator Processor Interface (CAPI) sits directly on the POWER8 and works with the same memory addresses the processor uses, with pointers dereferenced the same as in the host application. CAPI, in effect, removes OS and device driver overhead by presenting an efficient, robust, durable interface. In the process, it offloads complexity.

CAPI can reduce the typical seven-step I/O model flow to three steps (shared memory/notify accelerator, acceleration, and shared memory completion). The advantages revolve around virtual addressing and data caching through shared memory and reduced latency for highly referenced data. It also enables an easier, more natural programming model with traditional thread-level programming and eliminates the need to restructure the application to accommodate long-latency I/O. Finally, it enables apps otherwise not possible, such as those requiring pointer chasing.
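Real CAPI programming goes through IBM's accelerator libraries and FPGA toolchains, but the three-step flow itself is essentially a shared-memory handshake. Here is a purely conceptual sketch, with ordinary Python shared memory standing in for the coherent accelerator; this is not actual CAPI code:

    from multiprocessing import Process
    from multiprocessing.shared_memory import SharedMemory

    def accelerator(name, n):
        # Step 2 (acceleration): work directly on the same buffer the host
        # sees; no driver-managed copy-in/copy-out
        shm = SharedMemory(name=name)
        shm.buf[:n] = bytes(b ^ 0xFF for b in bytes(shm.buf[:n]))
        shm.close()

    if __name__ == "__main__":
        data = b"highly referenced data"
        shm = SharedMemory(create=True, size=len(data))
        shm.buf[:len(data)] = data            # Step 1: shared memory / notify
        worker = Process(target=accelerator, args=(shm.name, len(data)))
        worker.start()
        worker.join()                          # Step 3: shared memory completion
        print(bytes(shm.buf[:len(data)]))
        shm.close()
        shm.unlink()

The point of the analogy: host and accelerator share one address space and one copy of the data, which is exactly the overhead CAPI's coherent attachment removes from the traditional I/O stack.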

It's too early to determine if CAPI is a game changer, but IBM has already started to benchmark some uses. For example, it ran NoSQL on POWER8 with CAPI and achieved a 5x cost reduction. When combined with IBM's TMS-based flash, it found it could:

  • Attack problem sets otherwise too big for the memory footprint
  • Deliver fast access to small chunks of data
  • Achieve high throughput for data or simplify object addressing through memory semantics.

CAPI brings programming efficiency and simplicity. It uses the PCIe physical interface for the easiest programming and the fastest, most direct I/O performance, and it enables better virtual addressing and data caching. Although intended for acceleration, it works well for I/O caching too and has been shown to deliver a 5x cost reduction with equivalent performance when attached to flash. In summary, CAPI enables you to regain infrastructure control and rein in costs to deliver services otherwise not feasible.

It will take time for CAPI to catch on. Developers will need to figure out where and how best to use it. But with CAPI as part of the OpenPOWER Foundation, expect to see work taking off in a variety of directions. At a pre-briefing a few weeks ago, DancingDinosaur was able to walk through some very interesting CAPI demos.

As for the new POWER8 systems lineup, IBM introduced six one- and two-socket systems, some Linux-only, others for all supported operating systems. The systems reportedly will start below $8,000.

You can follow Alan Radding/DancingDinosaur on Twitter: @mainframeblog. Also, please join me at IBM Edge2014, this May 19-23 at the Venetian in Las Vegas.  Find me in the bloggers lounge.

Open POWER Consortium Aims to Expand the POWER Ecosystem beyond IBM

August 7, 2013

With IBM's August 6 announcement of new POWER partners, including Google, IBM is aiming not only to expand the variety of POWER workloads but also to establish an alternative ecosystem to the Intel/x86 one that continues to dominate general corporate computing. Through the new Open POWER Consortium, IBM will make POWER hardware and software available for open development for the first time, as well as offer open-source POWER firmware, the software that controls basic chip functions. By doing this, IBM and the consortium can enable innovative customization in creating new styles of server hardware for a variety of computing workloads.

IBM has a long history of using open consortiums to grab a foothold in different markets, as it did with Eclipse (open software development tools), Linux (open portable operating system), KVM (open hypervisor and virtualization), and OpenStack (open cloud interoperability). In each case, IBM had proprietary technologies but could use the open source consortium strategy to expand market opportunities at the expense of entrenched proprietary competitors like Microsoft or VMware. The Open POWER Consortium opens a new front against Intel, which already is scrambling to fend off ARM-based systems and other lightweight processors.

The establishment of the Open POWER Consortium also reinforces IBM’s commitment to the POWER platform in the face of several poor quarters. The commitment to POWER has never really wavered, insists an IBM manager, despite what financial analysts might hint at. Even stronger evidence of that commitment to POWER is POWER8, which is on track for 2014 if not sooner, and POWER9, which is currently in development, he confirmed.

As part of its initial collaboration within the consortium, IBM reported that it and NVIDIA will integrate NVIDIA's CUDA GPU technology with POWER. CUDA is a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU); GPUs increasingly are being used to boost overall system performance, not just graphics performance. The powerful computing systems the two companies envision, based on NVIDIA GPUs and IBM's POWER CPUs, represent an example of the new kind of systems the open consortium can produce.

However, don't expect immediate results. The IBM manager told DancingDinosaur that the fruits of any collaboration won't start showing up until sometime next year. Even the Open POWER Consortium website has yet to post anything. The consortium is just forming up; IBM expects the public commitment of Google to attract other players, which IBM describes as the next generation of data-center innovators.

As for POWER users, this can only be a good thing. IBM is not reducing its commitment to the POWER roadmap, plus users will be able to enjoy whatever the new players bring to the POWER party, which could be considerable. In the meantime, the Open POWER Consortium welcomes any firm that wants to innovate on the POWER platform and participate in an open, collaborative effort.

An even more interesting question may be where else will IBM’s interest in open systems and open consortiums take it. IBM remains “very focused on open and it’s a safe bet that IBM will continue to support open technologies and groups that support that,” the IBM manager told DancingDinosaur.  IBM, however, has nothing to announce after the Open POWER Consortium. Hmm, might a z/OS open collaborative consortium someday be in the works?

SHARE will be in Boston next week. DancingDinosaur expects to be there and will report on the goings-on. Hope to see some of you there. There also are plans for a big IBM System z/Power conference, Enterprise Systems 2013, toward the end of October in Florida. Haven't seen many details yet, but will keep you posted as they come in.

Big Data as a Game Changing Technology at IBM Edge 2013

June 11, 2013

If you ever doubted that big data was going to become important, there should be no doubt anymore. Recent headlines about the government capturing and analyzing massive amounts of daily phone call data should convince you. That this report was shortly followed by more reports of the government tapping the big online data websites like Google, Yahoo, and such for even more data should alert you to three things:

1—There is a massive amount of data out there that can be collected and analyzed.

2—Companies are amassing incredible volumes of data in the normal course of serving people who readily and knowingly give their data to these organizations. (This blogger is one of those tens of millions.)

3—The tools and capabilities are mature enough for someone to sort through that data and connect the dots to deliver meaningful insights.

Particularly with regard to the last point, this blogger thought the industry was still five years away from generating meaningful results from that amount of data coming in at that velocity. Sure, marketers have been sorting and correlating large amounts of data for years, but it was mostly structured data and nothing near this volume. BTW, your blogger has been writing about big data for some time.

If the news reports weren’t enough it became clear at Edge 2013 that big data analytics is happening and companies like Constant Contact and many others are succeeding at it now. It also is clear that there is sufficient commercial off-the-shelf computing power from companies like IBM and analytics tools to sort through massive amounts of data and make sense of it fast.

Another interesting point came up in one of the many discussions touching on big data: every person's data footprint is as unique as a fingerprint or other biometrics. We all visit different websites, interact with social media, and use our credit and debit cards in highly individual ways. Again, marketers have sensed this at some level for years, but they haven't yet really honed it down to the actual individual on a mass scale, although there is no technical reason they couldn't.

Subsequent blogs will take up other topics from Edge 2013, such as software defined everything.

Although there were over a dozen sessions on System z topics, the mainframe did not have a big presence at the conference. However, Enterprise Systems 2013 was being promoted at IBM Edge. It will take place Oct. 21-25 in Orlando, FL, combining the System z and Power Systems Technical Universities with a new executive-focused Enterprise Systems event. Expect new announcements, peeks into trends and directions, over 500 expert technical sessions across 10 tracks, and a comprehensive solution center.

