MojoKid writes: NVIDIA CEO Jen-Hsun Huang just offered the first public unveiling of a product based on the company’s next-generation GPU architecture, codenamed Volta. NVIDIA announced its new Tesla V100 accelerator, designed for AI and machine learning applications, and at the heart of the Tesla V100 is NVIDIA’s Volta GV100 GPU. The chip features 21.1 billion transistors on a die measuring 815mm2 (compared to 15.3 billion transistors and 610mm2, respectively, for the previous-gen Pascal GP100). The GV100 is built on TSMC’s 12nm FinFET manufacturing process. It comprises 5,120 CUDA cores with a boost clock of 1455MHz, versus 3,584 CUDA cores for the GeForce GTX 1080 Ti and the previous-gen Tesla P100 AI accelerator, for example. The new Volta GPU delivers 15 TFLOPS of FP32 compute performance and 7.5 TFLOPS of FP64 compute performance. Also on board are 16MB of cache and 16GB of second-generation High Bandwidth Memory (HBM2) delivering 900GB/sec of bandwidth over a 4096-bit interface. The GV100 also has 640 dedicated Tensor cores for accelerating AI workloads. NVIDIA notes the dedicated Tensor cores allow for a 12x uplift in deep learning performance compared to Pascal, which relies solely on its CUDA cores. NVIDIA is targeting a Q3 2017 release for the Tesla V100 with Volta, but the timetable for a GeForce derivative family of consumer graphics cards has not been disclosed.
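The headline compute and bandwidth figures follow from the quoted specs by standard back-of-envelope arithmetic: peak FLOPS is cores × 2 (a fused multiply-add counts as two operations) × boost clock, and bandwidth is bus width × data rate. A minimal sketch checking them — the FP64 core count (half the FP32 count) and the HBM2 transfer rate (~1.75 GT/s) are assumptions for illustration, not figures from the announcement:

```python
# Sanity-check the quoted Tesla V100 numbers from the announced specs.
CUDA_CORES = 5120
BOOST_CLOCK_HZ = 1455e6  # 1455 MHz

# Peak FLOPS = cores * 2 ops/FMA * clock
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12          # ~14.9, quoted as 15
fp64_tflops = (CUDA_CORES // 2) * 2 * BOOST_CLOCK_HZ / 1e12   # ~7.4, quoted as 7.5
                                                              # (assumes FP64 rate = 1/2 FP32)

# Memory bandwidth = bus width (bytes) * data rate
BUS_WIDTH_BITS = 4096
DATA_RATE_GTPS = 1.75  # assumed HBM2 transfer rate
bandwidth_gbs = BUS_WIDTH_BITS / 8 * DATA_RATE_GTPS           # ~896, quoted as 900

print(f"FP32: {fp32_tflops:.1f} TFLOPS, FP64: {fp64_tflops:.1f} TFLOPS, "
      f"bandwidth: {bandwidth_gbs:.0f} GB/s")
```

Both compute figures land within rounding distance of the quoted 15 and 7.5 TFLOPS, which suggests the marketing numbers are simply cores × clock × 2, lightly rounded.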
Read more of this story at Slashdot.