The IBM Server for AI Enterprise pairs IBM's Power8 CPU with Nvidia's Tesla P100 GPUs.
IBM Server for AI Enterprise is designed for "tens of thousands of enterprise workloads," according to Nvidia vice-president of solutions architecture and engineering Marc Hamilton.
Available now, the server is a compact rackmount unit that uses NVLink to interconnect the GPUs and the Power8 CPU.
It includes the Nvidia GPU Deep Learning Toolkit and the IBM PowerAI Toolkit, supporting popular frameworks including Caffe, DL4J and TensorFlow.
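As a rough illustration of what "supporting popular frameworks" means in practice, the sketch below shows how a data scientist might confirm that TensorFlow can see the system's GPUs before training. It assumes a recent TensorFlow 2.x build; the exact API differs across the versions shipped with PowerAI over time.

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; on a system like this,
# each Tesla P100 attached over NVLink should appear here.
gpus = tf.config.list_physical_devices('GPU')
print(f"Visible GPUs: {len(gpus)}")
for gpu in gpus:
    print(gpu)

# Run a small matrix multiplication to confirm work is placed on a GPU.
with tf.device('/GPU:0'):
    a = tf.random.normal((4096, 4096))
    b = tf.random.normal((4096, 4096))
    c = tf.matmul(a, b)
print(c.shape)
```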
According to IBM, this hardware and software combination gives the server more than twice the performance of comparable four-GPU servers running AlexNet with Caffe. The same configuration can also outperform x86 systems fitted with eight Tesla M40 GPUs running AlexNet with BVLC Caffe, which IBM says makes it the world's fastest commercially available enterprise systems platform on two versions of a key deep learning framework.
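For readers who want to see roughly how such an AlexNet-on-Caffe measurement is taken, the sketch below times forward passes with pycaffe. The model path and iteration count are illustrative only; IBM's published comparisons rest on its own benchmark methodology, not this snippet.

```python
import time
import caffe

# Use the first GPU for the timing run.
caffe.set_mode_gpu()
caffe.set_device(0)

# Load the stock BVLC AlexNet network definition (randomly initialised
# weights are fine for a pure throughput measurement).
net = caffe.Net('models/bvlc_alexnet/deploy.prototxt', caffe.TEST)

# Time a batch of forward passes and report the average per iteration.
iterations = 50
start = time.time()
for _ in range(iterations):
    net.forward()
elapsed = time.time() - start
print(f"Average forward pass: {1000 * elapsed / iterations:.1f} ms")
```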
"PowerAI democratises deep learning and other advanced analytic technologies by giving enterprise data scientists and research scientists alike an easy to deploy platform to rapidly advance their journey on AI," said IBM OpenPower general manager Ken King.
"Coupled with our high performance computing servers built for AI, IBM provides what we believe is the best platform for enterprises building AI-based software, whether it's chatbots for customer engagement, or real-time analysis of social media data."
"Every industry wants AI," observed Hamilton.