Visuals Get an AI Boost from NVIDIA's Lovelace GPU.

According to the company, TSMC will produce the first Lovelace-based GPUs using a custom "4N" technology node.

NVIDIA's next-gen Ada Lovelace architecture, which uses AI to make games look more realistic, is the driving force behind its new flagship series of graphics chips.

Jensen Huang, CEO of NVIDIA, introduced the Lovelace architecture that powers the company's newest GeForce RTX 40 GPUs during the GTC 2022 conference. The design honors Ada Lovelace, a 19th-century mathematician regarded as a forebear of computer science.

According to NVIDIA, the RTX 4090, the flagship GPU in the gaming family, offers twice the performance of its predecessor, which was built on the circa-2020 Ampere GPU architecture, at the same power consumption.

In addition to more than 16,000 CUDA cores, the Lovelace GPU features 76.3 billion transistors, around 2.7 times the count of its Ampere predecessor and nearly the same as the Hopper GPU for data centers.

These additions make it one of the most powerful graphics chips available at a time when competition from AMD (with its forthcoming RDNA 3 GPU architecture) and Intel (with its high-performance Arc GPUs) is heating up.

Inside the Lovelace Architecture:

TSMC will produce the chips using a custom "4N" technology node. This is a significant step up from NVIDIA's previous generation of gaming graphics processors, which Samsung Electronics manufactured on an 8-nm node.

According to the manufacturer, Lovelace-based graphics processors are twice as energy-efficient as their Ampere-based predecessors thanks to the updated process technology and enhanced underlying architecture.

The Lovelace architecture complements NVIDIA's Hopper design, which was revealed at the beginning of the year. Hopper will power the H100 GPU, which is designed for high-performance computing and artificial intelligence workloads. The Lovelace architecture, by contrast, is better suited to general-purpose, graphics-heavy workloads, such as building digital twins with NVIDIA's Omniverse software platform or designing games with physically accurate lighting and objects.

Digital twins are large-scale simulations (for example, of manufacturing floors or cars) that allow you to test and evaluate designs or processes in a risk-free environment before introducing them into the real world.

NVIDIA claims that the RTX 40 series of GPUs ushers in advancements across the board. For example, a new generation of streaming multiprocessors is said to be up to three times as fast as its predecessor, providing up to 90 TFLOPS to the shaders that modern games rely on to calculate the correct lighting, shading, and color throughout the rendering process.

NVIDIA's shader execution reordering is a crucial feature of the Lovelace architecture. It improves performance by dynamically regrouping shading workloads that would otherwise diverge, a technique NVIDIA compares to out-of-order execution in a central processing unit (CPU). According to NVIDIA, shader execution reordering can increase ray-tracing speed by up to three times and boost frame rates by up to 25 percent.
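The intuition behind reordering can be shown with a toy sketch (my own construction, not NVIDIA's implementation): GPUs execute threads in lockstep groups, so grouping ray hits by shader type before execution keeps each group on a single code path.

```python
# Toy illustration of shader execution reordering: sorting divergent
# ray-hit work by shader type before execution improves coherence.
def warp_divergence(work, warp_size=4):
    """Count distinct shader types per warp-sized group (1 = fully coherent)."""
    groups = [work[i:i + warp_size] for i in range(0, len(work), warp_size)]
    return [len(set(g)) for g in groups]

# Ray hits arrive in arbitrary order; each string names a different hit shader.
hits = ["glass", "metal", "glass", "skin", "metal", "skin", "glass", "metal"]

before = warp_divergence(hits)          # mixed shader types in every group
after = warp_divergence(sorted(hits))   # reordered: groups are far more coherent
print(before, after)
```

Fewer distinct shaders per group means less serialized execution, which is where the claimed ray-tracing speedup comes from.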

These chips also have updated ray-tracing (RT) cores with up to 200 TFLOPS, allowing for more lifelike rendering of shadows and reflections in real time.
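The work RT cores accelerate is, at its core, ray/geometry intersection testing. Here is the same kind of test done in software for a single ray and a sphere, using the standard quadratic-formula method (a generic textbook example, not NVIDIA's hardware algorithm, which operates on triangles and bounding-volume hierarchies):

```python
# Software ray/sphere intersection: the class of test RT cores run in hardware.
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Solve |origin + t*direction - center|^2 = radius^2 for t >= 0.
    Assumes direction is unit-length (so the quadratic's 'a' term is 1)."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    return disc >= 0 and (-b - np.sqrt(disc)) / 2.0 >= 0

origin = np.array([0.0, 0.0, 0.0])
ray = np.array([0.0, 0.0, 1.0])  # looking straight down +z
hit = ray_hits_sphere(origin, ray, np.array([0.0, 0.0, 5.0]), 1.0)   # hit
miss = ray_hits_sphere(origin, ray, np.array([3.0, 0.0, 5.0]), 1.0)  # miss
```

A real-time ray tracer performs millions of such tests per frame, which is why dedicated hardware for them matters.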

Graphics chips built on the Lovelace architecture also include new video encoders compatible with the AV1 codec.

AI-Generated Graphics:

NVIDIA's fourth-generation tensor cores are also part of the Lovelace architecture. These specialized units perform the "matrix multiply and accumulate" operations at the heart of machine learning.
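The fused "matrix multiply and accumulate" (MMA) step that tensor cores hardware-accelerate can be written out in plain NumPy as a sketch of the math, D = A x B + C:

```python
# One matrix-multiply-and-accumulate (MMA) operation in software.
# Tensor cores execute many small tiles of exactly this computation per clock.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4), dtype=np.float32)  # input tile
B = rng.standard_normal((4, 4), dtype=np.float32)  # weight tile
C = np.zeros((4, 4), dtype=np.float32)             # running accumulator

D = A @ B + C  # the fused multiply-accumulate at the heart of ML workloads
```

In a neural network, thousands of these tile-sized operations are chained together, with each D feeding the next accumulation, which is why dedicating silicon to them pays off.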

NVIDIA says the next-generation tensor cores are up to five times as fast as their predecessors when using its FP8 format for artificial intelligence tasks.
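The trade-off FP8 makes can be simulated in a few lines. This is a rough sketch of rounding to an E4M3-style format (4 exponent bits, 3 mantissa bits, maximum magnitude around 448); it deliberately ignores subnormals, NaN encoding, and other details of the real specification:

```python
# Simplified simulation of FP8 E4M3 rounding: values keep only ~3 bits of
# mantissa precision, and magnitudes are clipped to the format's max (~448).
import numpy as np

def fp8_e4m3_round(x):
    x = np.clip(np.asarray(x, dtype=np.float64), -448.0, 448.0)
    out = np.zeros_like(x)
    nz = x != 0
    exp = np.floor(np.log2(np.abs(x[nz])))  # power-of-two bucket of each value
    step = 2.0 ** (exp - 3)                 # spacing with 3 mantissa bits
    out[nz] = np.round(x[nz] / step) * step # round to the nearest representable
    return out

print(fp8_e4m3_round([0.1, 1.06, 100.3, 1000.0]))
```

Each FP8 value occupies a quarter of the memory of FP32, so the hardware can move and multiply four times as many of them per cycle, at the cost of the precision the rounding above throws away.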

The latest hardware includes the same upgraded inference-processing cores found in the Hopper GPU, so the two architectures share the same "transformer engine."

The tensor cores complement a novel hardware engine called the optical flow accelerator. It analyzes pairs of high-resolution frames and uses machine learning to predict the motion of 3D-rendered objects. This lets Lovelace generate every aspect of a scene, from particles and reflections to shadows and lighting, allowing for a smoother gameplay experience and higher frame rates without sacrificing detail.

One of the most advanced graphics technologies in Lovelace GPUs is NVIDIA's Deep Learning Super Sampling (DLSS), now in its third generation and made possible by the new tensor cores and hardware accelerators.

Massive computational power is required to render every pixel in games and virtual environments with natural materials, lighting, and dynamics. Instead of trying to render them all, the technique skips some pixels in a scene, then uses machine learning to fill them in, resulting in crisp, high-resolution visuals that run at frame rates beyond what the GPU could achieve by brute force.
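The render-fewer-pixels-then-fill idea can be sketched in a toy form (my own simplification: DLSS uses a trained neural network plus motion vectors for the fill step, not the naive nearest-neighbor fill shown here):

```python
# Toy upscaling: render a quarter of the pixels, then fill the skipped ones.
import numpy as np

def fill_skipped_pixels(low_res):
    """Nearest-neighbor fill: each rendered pixel stands in for a 2x2 block.
    DLSS replaces this step with a learned model for far sharper results."""
    return np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)

low = np.arange(4.0).reshape(2, 2)   # the quarter of pixels actually rendered
high = fill_skipped_pixels(low)      # full-resolution 4x4 output
```

Only a quarter of the pixels were ever shaded, so the expensive rendering work drops by roughly 4x; the quality of the final image then depends entirely on how smart the fill step is.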

With DLSS 3, artificial intelligence creates entirely new frames, not just new pixels, increasing frame rates by as much as four times above what they would be without DLSS. When the central processing unit (CPU) is the limiting factor in a game's performance, this technology can help.
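A heavily simplified sketch of motion-based frame generation (my own toy construction, nowhere near the learned pipeline DLSS 3 actually uses): given a rendered frame and per-pixel motion vectors like those from the optical flow accelerator, an in-between frame can be synthesized by sampling each pixel from halfway along its motion vector.

```python
# Toy frame generation: synthesize a mid-frame by warping along motion vectors.
import numpy as np

def generate_midframe(frame, flow):
    """Backward-warp: each mid-frame pixel samples the rendered frame at the
    position half a motion vector back. frame: (H, W); flow: (H, W, 2) ints."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(ys - flow[..., 0] // 2, 0, h - 1)  # half the row motion
    sx = np.clip(xs - flow[..., 1] // 2, 0, w - 1)  # half the column motion
    return frame[sy, sx]

frame = np.zeros((4, 8))
frame[1, 1] = 1.0                      # a bright object at column 1
flow = np.zeros((4, 8, 2), dtype=int)
flow[..., 1] = 4                       # scene pans 4 pixels right per frame
mid = generate_midframe(frame, flow)   # object lands halfway, at column 3
```

Because the synthesized frame needs no shading work from the GPU's render pipeline and no game logic from the CPU, inserting such frames raises the displayed frame rate even in CPU-limited games.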

The New Flagship Lineup:

NVIDIA claims that its flagship RTX 4090 is one of the most powerful graphics cards available thanks to its 16,384 CUDA cores, up from its predecessor's 10,752, and a base clock frequency more than 30% higher.

Thanks to these hardware advancements and the Ada Lovelace design, the GPU can render 4K-quality gaming at over 100 frames per second.

With 24 GB of high-speed GDDR6X memory from Micron Technology, the RTX 4090 consumes the same 450 W as its predecessor. The chip communicates with the rest of the system over PCIe Gen 4 lanes.

The RTX 4090, according to NVIDIA, is up to four times as powerful as the company's previous top-tier graphics processor, the RTX 3090, and can double its predecessor's speed at the same power consumption.

The semiconductor giant has also released a new mid-range graphics processor for the gaming community, the RTX 4080, which can be purchased with either 12 or 16 gigabytes of GDDR6X RAM.

NVIDIA claims that both configurations of the Lovelace-based GPU can display higher-quality graphics with more accurate lighting at faster frame rates than the current RTX 3090, even though neither is as advanced as the RTX 4090.

When the cards launch next month, the high-end RTX 4090 will cost $1,599, while the mid-range RTX 4080 will cost $899 with 12 GB of GDDR6X and $1,199 with 16 GB.
