There’s a tendency for testers to dismiss “synthetic” benchmarks as having no value whatsoever, but that attitude is misplaced. Synthetics earned their bad name in the 1990s, when they were the only game in town for testing hardware. Hardware makers soon started to optimize for them, and on occasion those optimizations would actually hurt performance in real games and applications.

The 1990s are long behind us, though, and benchmarks and the benchmarking community have matured to the point that synthetics can offer very useful metrics when measuring the performance of a single component or system. At the same time, real-world benchmarks aren’t untouchable. If a developer receives funding or engineering support from a hardware maker to optimize a game or app, is that really neutral? One argument says it doesn’t matter: if “cheating” improves performance, users benefit. Except that only the users of that particular piece of hardware benefit.

In the end, it’s probably more important to understand the nuances of each benchmark and how to apply them when testing hardware. SiSoft Sandra, for example, is a popular synthetic benchmark with a slew of tests for various components. We use it for memory bandwidth testing, for which it is invaluable, as long as the results are put in the right context. A doubling of main system memory bandwidth, for example, doesn’t mean you get a doubling of performance in games and apps. Of course, the same caveats apply to real-world benchmarks, too.