Nvidia Reveals Next-Generation Graphics Processor
Nvidia chief executive Jen-Hsun Huang has unveiled his company’s latest graphics-processor architecture, dubbed Pascal, which he said will put a “supercomputer in the space of two credit cards.” The new design is significant because it could enable a new generation of mobile devices, desktop computers, and supercomputers.
The new architecture, which can be designed into all sorts of chips, will integrate both a central processing unit (CPU) and a graphics processing unit (GPU) on the same chip. Another interesting feature is its use of 3D memory, or memory cells stacked in three dimensions, which will give it 2.5 times the memory capacity and a four-fold improvement in energy efficiency, Huang said. The announcement was made during the company’s GPU Technology Conference in San Jose on Tuesday.
The device will use Nvidia’s high-speed NVLink technology, which is said to speed data communication by 5 to 12 times and bring big-data computing to mobile and desktop devices. The architecture will make its debut as a follow-on to Maxwell, which is used in today’s graphics chips and mobile processors.
The large memory capacity and NVLink will enable a feature that has been dubbed Unified Memory, which simplifies the programming of software applications. The architecture is named after the 17th-century scientist Blaise Pascal, who is credited with inventing the first mechanical calculator.
Huang went on to say that the processing power of Pascal, which is slated to arrive in 2016, will enable “machine learning,” or artificial intelligence that can recognize faces, patterns, and other objects.
“Machine learning experts call this object classification, and they program it using a neural net,” Huang said. “Programming this is an awfully large challenge.”
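To make the quote concrete: the “neural net” approach Huang refers to can be sketched in a few lines of Python. This is a toy illustration only, not Nvidia’s software; all names, sizes, and the XOR example are illustrative choices, and real object classifiers are vastly larger.

```python
# Toy sketch of a neural-net classifier: one hidden layer trained by
# gradient descent on the XOR pattern (a classic non-linearly-separable
# classification task). Purely illustrative; not Nvidia code.
import numpy as np

rng = np.random.default_rng(0)

# Four input "patterns" and their class labels (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random weights for a 2-input, 4-hidden-unit, 1-output network.
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute hidden activations and output probability.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: chain rule on the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

predictions = (out > 0.5).astype(int).ravel()
print(predictions)  # target labels are [0, 1, 1, 0]
```

The hard part Huang alludes to is not this loop itself but scaling it: training networks with millions of connections over millions of images, which is where GPU throughput matters.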
The human brain has been estimated to have over 100 billion neurons, with roughly 1,000 connections, or synapses, per neuron, and it can store an estimated 500 million images.
Huang pointed to a computer named Google Brain, which cost $5 million. It consumes 600 kilowatts and comprises over 1,000 CPU servers with 2,000 CPUs and 16,000 processing cores. Without it, he said, training to recognize images would take 5 million times longer.