AMD’s recent launch of the Ryzen 9000 desktop processors has sparked a wave of discussion, particularly around the gap between the performance independent reviewers measured and the company’s initial marketing claims. The situation has prompted both AMD and the reviewing community to dig into the underlying causes of these differences.
Understanding the Performance Discrepancies
In a recent community post, AMD acknowledged a variety of factors that contributed to the contrasting benchmark results. These included differences in the Windows operating mode used during testing, Virtualization-Based Security (VBS) settings, the configuration of the competing Intel systems, and the specific games selected for benchmarking. David McAfee, who heads AMD’s client channel segment, elaborated on these points during a special edition of The Full Nerd podcast.
Central to the confusion was the methodology AMD employed in its own testing. The company ran its automated test framework in Super Admin mode, which exposed optimizations that were not active in the standard user mode reviewers used. This oversight became evident as McAfee explained:
“The ‘Zen 5’ architecture incorporates a wider branch prediction capacity than prior ‘Zen’ generations. Our automated test methodology was run in ‘Admin’ mode which produced results that reflect branch prediction code optimizations not present in the version of Windows reviewers used to test Ryzen 9000 Series.”
As a result, PC users can expect to gain these performance improvements when Windows 11’s annual feature update, known as “24H2,” rolls out later this year; the update brings the branch prediction optimizations in line with the Windows build AMD tested on.
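Because several of these variables (elevation level, Windows build, VBS state) are easy to overlook, one practical habit is to log the test environment alongside benchmark results. Below is a minimal sketch of such a check in Python, assuming a Windows test machine; it relies on the shell32 IsUserAnAdmin call, platform.version(), and the Win32_DeviceGuard WMI class queried through PowerShell, and it is an illustration of the idea rather than anything AMD or reviewers have published.

```python
import ctypes
import platform
import subprocess

def is_elevated() -> bool:
    """True if this process is running with administrator rights (Windows only)."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        return False  # ctypes.windll is unavailable outside Windows

def windows_build() -> str:
    """Windows version string, e.g. '10.0.26100' once the 24H2 feature update is installed."""
    return platform.version()

def vbs_status() -> str:
    """Virtualization-Based Security state, read from the Win32_DeviceGuard WMI class."""
    cmd = [
        "powershell", "-NoProfile", "-Command",
        "(Get-CimInstance -Namespace root\\Microsoft\\Windows\\DeviceGuard "
        "-ClassName Win32_DeviceGuard).VirtualizationBasedSecurityStatus",
    ]
    code = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
    return {"0": "disabled", "1": "enabled, not running", "2": "running"}.get(code, f"unknown ({code})")

if __name__ == "__main__":
    print("Elevated (admin):", is_elevated())
    print("Windows build:   ", windows_build())
    print("VBS status:      ", vbs_status())
```

Recording these three values with every run makes it obvious in hindsight whether two sets of numbers were gathered under comparable conditions.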
The Impact of Testing Environments
McAfee emphasized that running games in Super Admin mode is not practical for end users. When AMD’s testing framework was originally built, the performance difference between Super Admin mode and standard user mode was negligible, so the choice went unquestioned in previous generations. As games evolved, however, that gap widened, leading to the current discrepancies.
“Historically, when our automation framework was built, the difference between performance in Super Admin mode and what you’d consider user mode when testing was negligible. That has changed over time, and quite honestly, it was a change we were blind to.”
This blind spot contributed to the marketing numbers that AMD initially presented for the Ryzen 9000 series, which did not fully account for the performance delta between the two modes.
Game Selection and Benchmarking Techniques
Further complicating the matter is the selection of games and even specific scenes within those games for benchmarking. McAfee noted that the balance of a game’s reliance on CPU versus GPU can lead to significant variations in performance results. This nuance highlights the importance of not only the games chosen but also how they are tested.
“Even within an individual game, the balance of the system where portions of a game which lean heavier on the CPU versus lean heavier on the GPU result in massive differences in relative performance between product A and product B.”
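A quick way to see why this matters is to compare the same two chips in a CPU-bound scene and a GPU-bound one. The frame rates below are invented purely for illustration (they are not AMD or reviewer measurements), but they show how the apparent lead of “product A” over “product B” can swing with scene choice:

```python
# Invented frame rates for two hypothetical CPUs, "A" and "B" (not measured data).
scenes = {
    # scene: (fps with CPU A, fps with CPU B)
    "CPU-bound crowd scene": (210.0, 180.0),   # simulation and draw calls limit the frame rate
    "GPU-bound scenic vista": (122.0, 120.0),  # the graphics card limits both systems
}

for scene, (fps_a, fps_b) in scenes.items():
    lead_pct = (fps_a / fps_b - 1.0) * 100.0
    print(f"{scene}: A leads B by {lead_pct:+.1f}%")

# With these numbers, the same two chips look roughly 17% apart in the first
# scene and under 2% apart in the second, purely from where the benchmark runs.
```

The same pair of processors can therefore look dominant or indistinguishable depending on which slice of a game the benchmark pass captures.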
During the podcast, McAfee clarified that AMD’s approach to selecting benchmarks was not simply an attempt to showcase favorable results; the company was genuinely surprised by some of the results reviewers obtained this generation.
Clarifying Misunderstandings
When asked whether AMD was placing blame on reviewers for the discrepancies, McAfee firmly stated that this was not the case. He emphasized that the differences stemmed from AMD’s own testing decisions rather than any shortcomings in the review process.
“At the end of the day, there were a series of decisions that AMD made, that differed from how reviewers were testing, that led to different conclusions. This is not saying that reviewers did anything wrong.”
As the conversation unfolded, it became clear that AMD is committed to refining its testing methodologies and ensuring better alignment with real-world user experiences moving forward. The insights shared during the podcast provide a valuable perspective on the complexities of performance benchmarking in the ever-evolving landscape of PC hardware.