The Machine from HPE has long been gestating, but a new prototype of the ‘world’s largest single-memory computer’ is meant to be far more than just the Sherlock to IBM’s Watson.
UPDATE: iTWire Enterprise writer Stephen Withers covered this story yesterday in greater detail, and frankly wrote a better article than the one below.
Please read Stephen's article, entitled "HPE's The Machine gets a step closer", instead.
---
Promoted in video ads ahead of the most recent movie in the rebooted Star Trek trilogy as a computer so advanced it would still be in use 200+ years from now in Star Trek’s fictional future, HPE has unveiled its newest prototype of ‘The Machine’, a computer the company dubs as ‘built for the era of Big Data.’
HPE states the new prototype “upends 60 years of innovation and demonstrates the potential for Memory-Driven Computing”, calling it “the world’s largest single-memory computer” (HPE defines a single-memory system as one with a single address space) and “the latest milestone in The Machine research project”.
The Machine is the largest R&D program in the history of HPE, which the company says “is aimed at delivering a new paradigm called Memory-Driven Computing – an architecture custom-built for the Big Data era.”
HPE CEO Meg Whitman said: “The secrets to the next great scientific breakthrough, industry-changing innovation, or life-altering technology hide in plain sight behind the mountains of data we create every day.
“To realise this promise, we can’t rely on the technologies of the past, we need a computer built for the Big Data era.”
There have been criticisms that HPE has now launched The Machine several times over without actually delivering a finished product to end-users. That said, when I travelled to the US at HPE’s invitation for its Discover 2016 conference, I shot plenty of video for readers to enjoy, and HPE stated that advances made while developing The Machine had already found their way into its more traditional servers.
The company states its new prototype, unveiled today, “contains 160 terabytes (TB) of memory, capable of simultaneously working with the data held in every book in the Library of Congress five times over – or approximately 160 million books.”
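The arithmetic behind that comparison is easy to sanity-check. Here is a quick check in Python; the one-megabyte-per-book figure is my assumption, a common rule of thumb for plain-text books:

```python
# Sanity check on HPE's Library of Congress comparison.
books = 160_000_000            # "approximately 160 million books"
bytes_per_book = 1_000_000     # ~1 MB of plain text per book (assumed)
total = books * bytes_per_book # = 1.6e14 bytes
print(total / 1e12, "TB")      # -> 160.0 TB, matching the prototype's memory
```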
HPE proudly boasts: “It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing.”
So, what are the “Scalability & Societal Implications” according to HPE?
We’re told that, based on the current prototype, the company “expects the architecture could easily scale to an Exabyte-scale single-memory system and, beyond that, to a nearly-limitless pool of memory – 4,096 yottabytes. For context, that is 250,000 times the entire digital universe today.”
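Those numbers hang together: dividing 4,096 yottabytes by 250,000 gives roughly 16 zettabytes, which lines up with IDC’s oft-quoted estimate of the size of the digital universe around 2016. A quick check:

```python
# Checking HPE's "250,000 times the entire digital universe" claim.
YB = 10**24                            # one yottabyte (decimal)
pool = 4096 * YB                       # the stated memory ceiling
digital_universe = pool / 250_000
print(digital_universe / 10**21, "ZB") # -> ~16.4 ZB, in line with IDC's
                                       # mid-2010s digital-universe estimates
```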
With that amount of memory, explains HPE, “it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google’s autonomous vehicles; and every data set from space exploration all at the same time – getting to answers and uncovering new opportunities at unprecedented speeds.”
Mark Potter, CTO at HPE and Director, Hewlett Packard Labs, and presumably no relation to Harry (a very different HP it must be noted), said: “We believe Memory-Driven Computing is the solution to move the technology industry forward in a way that can enable advancements across all aspects of society.
“The architecture we have unveiled can be applied to every computing category – from intelligent edge devices to supercomputers.”
HPE says that “Memory-Driven Computing puts memory, not the processor, at the centre of the computing architecture. By eliminating the inefficiencies of how memory, storage and processors interact in traditional systems today, Memory-Driven Computing reduces the time needed to process complex problems from days to hours, hours to minutes, minutes to seconds – to deliver real-time intelligence.”
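To make that idea concrete, here is a minimal Python sketch of load/store access to a large byte-addressable pool, emulated with an ordinary memory-mapped file. This is purely illustrative: the file name and sizes are mine, and it is not HPE’s actual programming model.

```python
import mmap

# Illustrative only: emulate a byte-addressable memory pool with a
# memory-mapped file. "pool.bin" and the 1 MiB size are stand-ins;
# The Machine's fabric-attached pool is 160 TB.
PATH, SIZE = "pool.bin", 1 << 20

with open(PATH, "wb") as f:
    f.truncate(SIZE)            # reserve the pool

with open(PATH, "r+b") as f:
    pool = mmap.mmap(f.fileno(), SIZE)
    # Data is manipulated in place with load/store semantics, rather than
    # being copied between storage, DRAM and CPU in separate steps.
    pool[0:5] = b"hello"
    assert pool[0:5] == b"hello"
    pool.close()
```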
So, what are the Technical Specifications of the new prototype?
The company says its new prototype “builds on the achievements of The Machine research program”, including:
- 160 TB of shared memory spread across 40 physical nodes, interconnected using a high-performance fabric protocol (sanity-checked in the sketch after this list);
- an optimised Linux-based operating system (OS) running on ThunderX2, Cavium’s flagship second-generation, dual-socket-capable ARMv8-A workload-optimised System on a Chip (SoC);
- photonics/optical communication links, including the new X1 photonics module, which are online and operational; and
- software programming tools designed to take advantage of abundant persistent memory.
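A quick back-of-the-envelope check on the first item above; the binary-terabyte interpretation of “TB” is my assumption:

```python
import math

# Back-of-the-envelope check on the published specs.
total_bytes = 160 * 2**40      # 160 TB shared pool (binary TB assumed)
nodes = 40
print(total_bytes / nodes / 2**40, "TB per node")         # -> 4.0
# A flat 160 TB single address space needs at least 48 address bits,
# since 2**47 is only 128 TB while 2**48 is 256 TB.
print(math.ceil(math.log2(total_bytes)), "address bits")  # -> 48
```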
Syed Ali, President & CEO of Cavium Inc, said: “Cavium shares HPE’s vision for Memory-Driven Computing and is proud to collaborate with HPE on The Machine program.
“HPE’s ground-breaking innovations in Memory-Driven Computing will enable a new compute paradigm for a variety of applications, including the next-generation data centre, cloud and high-performance computing.”
More information on Memory-Driven Computing and HPE’s The Machine research program is here.
Here's HPE's infographic: