NVIDIA's Parker system on a chip brings a powerful CPU to self-driving cars
NVIDIA debuted its Drive PX2 in-car supercomputer at CES in January, and the company says "80 carmakers, tier 1 suppliers and university research centers" are already using it. That platform pairs two Parker processors with two Pascal architecture-based GPUs to power deep learning applications.
Now the company is showing off the Parker system on a chip powering it. The 256-core processor boasts up to 1.5 teraflops of juice for "deep learning-based self-driving AI cockpit systems," according to a post on NVIDIA's blog, on top of the 24 trillion deep learning operations per second it can churn out. For a perhaps more familiar touchpoint, NVIDIA says Parker can also decode and encode 4K video streams at 60FPS -- no easy feat on its own.
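For a sense of what those numbers mean in practice, developers targeting a Parker-based board can inspect the onboard GPU through the standard CUDA runtime API. The sketch below only reports what the driver exposes; the 128-cores-per-SM figure used to estimate the core count is an assumption based on the Pascal architecture, not a value the API returns.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // properties of the first (integrated) GPU

    // Pascal-class parts carry 128 CUDA cores per SM -- an assumption here,
    // since the runtime only reports the multiprocessor count.
    const int coresPerSM = 128;

    std::printf("GPU:             %s\n", prop.name);
    std::printf("Compute cap.:    %d.%d\n", prop.major, prop.minor);
    std::printf("SM count:        %d\n", prop.multiProcessorCount);
    std::printf("Estimated cores: %d\n", prop.multiProcessorCount * coresPerSM);
    return 0;
}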
Parker delivers class-leading performance and energy efficiency, while supporting features important to the automotive market such as deep learning, hardware-level virtualization for tighter design integration, a hardware-based safety engine for reliable fault detection and error processing, and feature-rich IO ports for automotive integration.
Parker includes hardware-enabled virtualization that supports up to eight virtual machines. Virtualization enables carmakers to use a single Parker-based DRIVE PX 2 system to concurrently host multiple systems, such as in-vehicle infotainment systems, digital instrument clusters and driver assistance systems.
A new 256-core Pascal GPU in Parker delivers the performance needed to run advanced deep learning inference algorithms for self-driving capabilities. And it offers the raw graphics performance and features to power multiple high-resolution displays, such as cockpit instrument displays and in-vehicle infotainment panels.
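To make "deep learning inference" concrete: the workhorse operation behind it is little more than a large pile of fused multiply-adds followed by an activation function. Below is a minimal, illustrative CUDA kernel for a single fully connected layer with a ReLU activation; the layer sizes and names are made up for the example, and production stacks such as cuDNN and TensorRT do this far more efficiently -- this is not NVIDIA's actual inference path.

#include <cuda_runtime.h>

// One output neuron per thread: y[i] = relu(sum_j W[i][j] * x[j] + b[i]).
// Illustrative only; real inference engines use heavily optimized kernels.
__global__ void denseRelu(const float* W, const float* x, const float* b,
                          float* y, int inDim, int outDim) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= outDim) return;

    float acc = b[i];
    for (int j = 0; j < inDim; ++j)
        acc = fmaf(W[i * inDim + j], x[j], acc);  // fused multiply-add

    y[i] = acc > 0.0f ? acc : 0.0f;               // ReLU activation
}

// Launch example for a hypothetical 1024 -> 256 layer (device pointers assumed):
//   denseRelu<<<(256 + 127) / 128, 128>>>(dW, dx, db, dy, 1024, 256);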
The Denver 2.0 CPU is a seven-way superscalar processor that supports the ARMv8 instruction set and implements an improved dynamic code optimization algorithm, along with additional low-power retention states for better energy efficiency. The two Denver cores and the Cortex-A57 CPU complex are interconnected through a proprietary coherent interconnect fabric.
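One practical consequence of a heterogeneous CPU complex like this is that software tends to pin latency-critical threads to the higher-performance cores. Here is a host-side sketch using the standard Linux affinity API; which logical core IDs map to the Denver cores versus the Cortex-A57 cores depends on the board support package, so the ID below is a placeholder, not a documented value.

// Host-side C++ (Linux): pin the calling thread to a chosen CPU core.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // needed for CPU_SET/pthread_setaffinity_np on glibc
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static bool pinToCore(int coreId) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(coreId, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}

int main() {
    // Placeholder core ID: consult the board support package to learn which
    // logical CPUs are Denver 2 cores and which are Cortex-A57 cores.
    const int kAssumedDenverCore = 1;
    if (pinToCore(kAssumedDenverCore))
        std::printf("Pinned to core %d\n", kAssumedDenverCore);
    else
        std::printf("Failed to set affinity\n");
    return 0;
}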
However, Parker is significantly less beefy than NVIDIA's other deep learning initiative, the DGX-1 supercomputer it built for Elon Musk's OpenAI, which can hit 170 teraflops of performance. Still, this platform sounds more than capable of running high-end digital dashboards and keeping your future autonomous car shiny side up.
Source: NVIDIA