‘DLSS fundamentally makes no sense’ says Minecraft creator Notch, but actually it kinda does


Notch’s Critique of Nvidia’s DLSS Technology

In a recent exchange on social media platform X, Markus Persson, widely known as Notch, the creator of the iconic game Minecraft, expressed his concerns regarding Nvidia’s Deep Learning Super Sampling (DLSS) technology. His remarks have sparked a lively discussion within the gaming community, particularly around the complexities and implications of this advanced graphics rendering technique.

Notch’s primary contention revolves around the fundamental mechanics of DLSS, which he describes as perplexing. He stated, “DLSS fundamentally makes no sense. Because the graphics card is too slow to run the game at reasonable speeds, you use THE SAME HARDWARE to run a neural network to generate frames in between the existing ones.” This statement, while provocative, actually describes DLSS Frame Generation, just one component of the DLSS suite, rather than its core super-resolution upscaling, and it glosses over the fact that the two workloads do not run on the same silicon.

Responses from the community have been swift and insightful. One user pointed out, “It’s not the same hardware; it’s the hardware specialized and optimized to run neural networks.” Another chimed in, explaining that the load is distributed differently within the graphics pipeline, leading to improved performance overall. These clarifications highlight the specialized nature of the technology, which utilizes dedicated Tensor Cores designed for machine learning tasks.
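To make the interpolation idea concrete, here is a deliberately naive sketch of generating an in-between frame from two existing ones. This is an assumption-laden toy: real DLSS Frame Generation uses motion vectors, an optical-flow field, and a neural network running on dedicated Tensor Cores, not a simple pixel blend. The function name and the linear-blend approach are illustrative only.

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Naive linear blend between two rendered frames.

    Real frame generation estimates motion and hallucinates detail
    with a neural network; this toy blend only illustrates the idea
    of synthesizing an intermediate frame from two existing ones.
    """
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    mid = (1.0 - t) * a + t * b
    return mid.clip(0, 255).astype(np.uint8)

# Two tiny 2x2 grayscale "frames" standing in for rendered output.
frame1 = np.zeros((2, 2), dtype=np.uint8)        # all black
frame2 = np.full((2, 2), 200, dtype=np.uint8)    # mid-gray
mid_frame = interpolate_frame(frame1, frame2)
print(mid_frame[0, 0])  # 100, halfway between 0 and 200
```

Even this crude blend shows why interpolation is cheap relative to rendering: producing the intermediate frame touches each pixel once, with no geometry, shading, or lighting work, which is precisely the asymmetry dedicated hardware is built to exploit.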

Some commentators took a broader view, suggesting that the industry might benefit from a renewed focus on raw raster performance rather than solely on machine learning enhancements. However, the landscape of graphics rendering has evolved significantly, especially with the introduction of ray tracing and increasingly sophisticated game engines. As Bryan Catanzaro, Nvidia’s vice president of Applied Deep Learning Research, noted in 2023, “Moore’s Law is dead. We don’t know as a civilization how to keep turning the crank on traditional ways of doing things. We have to be smarter.”

Catanzaro emphasized the need for a more intelligent approach to graphics rendering, arguing that brute force methods are inefficient. He stated, “You fundamentally realize you have to be more intelligent about the graphics rendering process. Brute force—let’s re-render every frame 120 times a second at 2160p output—that is wasteful because we know that there are a lot of correlations in the output of any rendering process.” This perspective underscores the potential for AI-driven techniques to enhance image quality while optimizing resource use.
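Catanzaro’s point about correlations can be illustrated with a small sketch. In the toy example below, only a small patch of a frame changes between one frame and the next, mimicking how little typically varies from frame to frame in a running game; the frame size and the changed region are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 64x64 "frame"; the next frame changes only an 8x8 patch,
# standing in for the small per-frame deltas in real rendered output.
frame_a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[:8, :8] = 255  # only this patch changes

# Fraction of pixels identical between consecutive frames.
unchanged = np.mean(frame_a == frame_b)
print(f"{unchanged:.2%} of pixels identical")
```

In this toy case at least 98.4% of pixels carry over unchanged, which is the kind of redundancy Catanzaro argues makes brute-force re-rendering of every pixel, 120 times a second at 2160p, wasteful compared with AI-assisted reuse.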

Our own expert, Nick, is currently testing DLSS and Frame Generation in depth. He shared an image illustrating the GPU’s workload when rendering a single frame versus the additional processing required for AI-based frame interpolation.

While DLSS and Nvidia’s AI enhancements have their proponents, they are not universally embraced. The technology does come with its share of caveats, particularly when it comes to performance on lower-spec graphics cards or when developers overly rely on it. Nevertheless, the underlying principles of DLSS and its potential to transform gaming graphics remain a topic of significant interest and debate within the industry.
