The Last of Us Part 2 proves that 8 GB of VRAM can be enough, even at 4K with maximum settings, so why aren’t more games using the same clever asset-streaming trick?

When The Last of Us Part 1 arrived on PC two years ago, it was expected to set a new standard for console ports. The reality was far from that: the game launched buggy and unstable, hammering both CPU and GPU, and, most controversially, it tried to allocate more VRAM than many graphics cards physically had, igniting a heated debate over whether 8 GB of VRAM is still enough. Subsequent patches fixed much of this, but running the game at 4K on Ultra settings could still push VRAM usage beyond what the card had available.

(Image credit: Sony Interactive Entertainment)

As I began testing The Last of Us Part 2 Remastered, the first thing I checked was graphics memory allocation. Using Microsoft’s PIX on Windows, a developer tool for in-depth analysis of game performance, I ran the game on two rigs: one with an RTX 5080, the other with an RTX 3060 Ti. Both systems ran at 4K with maximum quality settings, including DLAA and frame generation.

The results were telling. The RTX 5080 averaged 9.77 GB of local memory usage alongside 1.59 GB of non-local memory, for a total of 11.36 GB. The RTX 3060 Ti came in at 6.81 GB local and 4.25 GB non-local, or 11.06 GB in total. (Local memory is the VRAM on the card itself; non-local memory is system RAM the GPU reaches over the PCIe bus, which is how the 8 GB RTX 3060 Ti stays within its own VRAM while the overall footprint tops 11 GB.) The small discrepancy between the two totals comes down to the different frame generation technologies each card uses, plus variations in the gameplay sequences captured, which will have pulled in slightly different assets.
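You don't need PIX to watch this split, either. DXGI exposes the same local and non-local figures to any application; the sketch below is a minimal, standalone example (my own illustration, not code from the game or from PIX) that reads both memory segment groups through IDXGIAdapter3::QueryVideoMemoryInfo.

```cpp
// Minimal sketch: query local (VRAM) and non-local (system RAM) GPU memory
// usage and budget via DXGI. Illustrative only; not code from the game or PIX.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

static void PrintSegment(IDXGIAdapter3* adapter,
                         DXGI_MEMORY_SEGMENT_GROUP group,
                         const char* label)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info{};
    if (SUCCEEDED(adapter->QueryVideoMemoryInfo(0, group, &info)))
    {
        std::printf("%-9s usage: %6.2f GB  budget: %6.2f GB\n", label,
                    info.CurrentUsage / (1024.0 * 1024.0 * 1024.0),
                    info.Budget / (1024.0 * 1024.0 * 1024.0));
    }
}

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))  // first adapter; a real
        return 1;                                     // tool would pick the GPU

    ComPtr<IDXGIAdapter3> adapter3;                   // needs Windows 10 or later
    if (FAILED(adapter.As(&adapter3)))
        return 1;

    PrintSegment(adapter3.Get(), DXGI_MEMORY_SEGMENT_GROUP_LOCAL,     "local");
    PrintSegment(adapter3.Get(), DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, "non-local");
    return 0;
}
```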

Improved Asset Management

What stands out in this data is how The Last of Us Part 2 keeps its footprint inside the VRAM each card actually has, letting the overflow spill into system memory rather than over-allocating the GPU. That contrasts sharply with its predecessor, which routinely over-committed graphics memory. The efficient handling here raises an obvious question: why aren't other AAA titles adopting similar strategies?
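Naughty Dog hasn't published details of its streamer, but the core idea behind staying under a fixed VRAM budget is a well-worn one: track which assets were used recently, and when a new asset would blow the budget, evict the least-recently-used ones first. The sketch below is a generic, deliberately simplified illustration of that pattern; the class and method names are invented for the example and don't reflect the game's actual code.

```cpp
// Generic sketch of a VRAM residency budget with LRU eviction.
// Purely illustrative; names and structure are invented for this example
// and do not reflect the actual streaming system in The Last of Us Part 2.
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>

class ResidencyBudget {
public:
    explicit ResidencyBudget(std::uint64_t budgetBytes) : budget_(budgetBytes) {}

    // Mark an asset as needed this frame, admitting it if it is not resident.
    void Touch(const std::string& assetId, std::uint64_t sizeBytes) {
        auto it = lookup_.find(assetId);
        if (it != lookup_.end()) {
            // Already resident: move it to the front of the LRU list.
            lru_.splice(lru_.begin(), lru_, it->second);
            return;
        }
        // Evict least-recently-used assets until the new one fits.
        while (!lru_.empty() && used_ + sizeBytes > budget_) {
            const Entry& victim = lru_.back();
            used_ -= victim.size;
            lookup_.erase(victim.id);
            lru_.pop_back();        // real code would free GPU memory here
        }
        lru_.push_front(Entry{assetId, sizeBytes});
        lookup_[assetId] = lru_.begin();
        used_ += sizeBytes;         // real code would kick off an async load
    }

    std::uint64_t UsedBytes() const { return used_; }

private:
    struct Entry { std::string id; std::uint64_t size; };
    std::uint64_t budget_;
    std::uint64_t used_ = 0;
    std::list<Entry> lru_;          // front = most recently used
    std::unordered_map<std::string, std::list<Entry>::iterator> lookup_;
};
```

In a real engine the budget figure would come from a query like the DXGI one above, eviction would typically operate on individual mip levels rather than whole assets, and loading and freeing would be asynchronous, but the bookkeeping is essentially the same.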

Another noteworthy aspect of TLOU2 is its substantial CPU workload. Testing on an older Core i7-9700K paired with a Radeon RX 5700 XT showed CPU utilization pinned at 100% across all cores, and even the far more powerful Ryzen 7 9800X3D carried a significant load. That suggests TLOU2 spins up a large number of worker threads to handle tasks in parallel, including DirectStorage operations.
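The game's source isn't public, so this is informed speculation, but the classic pattern that produces exactly this profile is a pool of worker threads draining a queue of decompression and upload jobs while the I/O layer (DirectStorage, in this case) keeps that queue fed. Here is a minimal, hypothetical sketch of such a pool using only the C++ standard library; the class name and structure are my own invention for illustration.

```cpp
// Hypothetical sketch of a worker pool draining asset-streaming jobs.
// Illustrative only; not the threading model actually used by TLOU2.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class StreamingWorkerPool {
public:
    explicit StreamingWorkerPool(unsigned threadCount) {
        for (unsigned i = 0; i < threadCount; ++i)
            workers_.emplace_back([this] { Run(); });
    }

    ~StreamingWorkerPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }

    // Called by the I/O layer (e.g. when a DirectStorage read completes)
    // to hand decompression/upload work to the pool.
    void Submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void Run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stopping_ || !jobs_.empty(); });
                if (stopping_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // decompress a block, build a mip, fill an upload heap...
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stopping_ = false;
};

// Usage sketch (hypothetical):
//   StreamingWorkerPool pool(std::thread::hardware_concurrency());
//   pool.Submit([] { /* decompress and upload one asset block */ });
```

Sizing the pool to the hardware thread count and keeping the queue stuffed with small decompression jobs is one straightforward way an engine ends up lighting every core of an eight-core chip like the i7-9700K.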

(Image credit: Microsoft)

The performance metrics suggest that TLOU2's asset management goes a long way toward answering the ongoing debate about VRAM limitations. 8 GB may not be universally sufficient, but the game demonstrates that with careful streaming and residency management, performance can remain stable even on cards with less memory. That matters all the more as ray tracing and AI features continue to push memory requirements upward.

As the gaming landscape moves on, developers should take cues from implementations like the one in The Last of Us Part 2. With so many players still on 8 GB graphics cards, effective asset management will be vital if future titles are to deliver high-quality experiences without overwhelming the hardware.
