At Hot Chips 2019, one of the semiconductor industry’s leading conferences on high-performance microprocessors and related integrated circuits, Intel revealed details of upcoming high-performance artificial intelligence (AI) accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O.
“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources,” said Naveen Rao, Intel vice president and general manager, Artificial Intelligence Products Group. “Data centres and the cloud need to have access to performant and scalable general purpose computing and specialised acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed – from hardware to software to applications.”
Why it’s important: Turning data into information and then into knowledge requires hardware architectures and complementary packaging, memory, storage and interconnect technologies that can evolve and support emerging and increasingly complex use cases and AI techniques. Dedicated accelerators like the Intel Nervana NNPs are built from the ground up, with a focus on AI to provide customers the right intelligence at the right time.
Built to train deep learning models at scale, the Intel Nervana NNP-T (Neural Network Processor for Training) pushes the boundaries of deep learning training. It prioritises two key real-world considerations: training a network as fast as possible, and doing so within a given power budget. The processor is designed with flexibility in mind, striking a balance among computing, communication and memory.
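The balance described above comes from how large-scale training works: each accelerator computes gradients on its own shard of data (compute), the gradients are averaged across devices (communication), and weights and activations must fit in on-chip and off-chip memory. As a hedged illustration only, the sketch below shows generic data-parallel SGD in plain Python/NumPy with a simulated all-reduce step; it is not Intel's NNP-T software stack, and all function names here are invented for the example.

```python
import numpy as np

def worker_gradient(w, X, y):
    """Local compute: mean-squared-error gradient on one worker's data shard."""
    pred = X @ w
    return 2.0 * X.T @ (pred - y) / len(y)

def allreduce_mean(grads):
    """Communication step: average gradients across all workers
    (stands in for a hardware all-reduce over the interconnect)."""
    return sum(grads) / len(grads)

def data_parallel_sgd(shards, w, lr=0.1, steps=100):
    """Each step: independent local compute on every shard, then one all-reduce."""
    for _ in range(steps):
        grads = [worker_gradient(w, X, y) for X, y in shards]
        w = w - lr * allreduce_mean(grads)
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(400, 2))
y = X @ true_w

# Split the dataset across four simulated workers.
shards = [(X[i::4], y[i::4]) for i in range(4)]
w = data_parallel_sgd(shards, np.zeros(2))
```

In real systems the all-reduce is the step whose cost grows with scale, which is why a training accelerator must budget silicon and power across compute, interconnect bandwidth and memory rather than maximising any one of them.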