Has Intel Really Beaten ARM?
EE Times member Jim McGregor debunked recent AnTuTu benchmark results and a recent ABI Research report claiming, "Intel apps processor outperforms Nvidia, Qualcomm, and Samsung."
New AnTuTu benchmark results and a recent ABI Research report claim that Intel surpassed the entire ARM ecosystem in mobile processors for the high-end smartphone segment.
In response to the report, EE Times member Jim McGregor investigated further and compiled a variety of benchmark information from tech reviewers, benchmarking organizations and other industry resources. In particular, he looked at processors from Samsung, Intel and Qualcomm and, in effect, debunked the entire report, showing that ARM-based processors came out on top. At the same time, he pointed to the nuances and traps of processor benchmarking in general.
"Evaluating current mobile processors is challenging because these processors, known as systems-on-chips (SoCs), are complex systems of heterogeneous processing elements combined with memory, I/O, high-speed networks, communications modems and a host of other dedicated system functions," a forum user wrote.
"Integration of the processors into mobile devices further complicates any evaluation because the overall performance and efficiency of these processors is impacted by the other system components. As a result, the industry turns to benchmarks to compare processors and devices. Unfortunately, mobile benchmarks are plagued by many issues and also fall short of providing an accurate evaluation."
Despite what seemed a fairly comprehensive conclusion, the EE Times community took McGregor's analysis, and benchmarking in general, to task, with an emphasis on power consumption:
"...this analysis kinda sidestep[s] the issue of power consumption. It was not the processor's computational speed that was in question. It was that Intel CPU had more or less the same performance at HALF the current drain/power."
"Well, long calls affecting battery life is much more a function of the RF chipset efficiency and software control of transmit levels, etc. I don't see how it would fit into a comparison of digital SoCs."
"Also, what OS was running on each platform to carry out these tests, since a highly optimized OS can make these benchmark tests show amazing performance on a slow processor vs. poor results on a badly ported OS running on a considerably faster processor."
"The ABI article is clearly more about the current draw than about raw performance. So, while I agree that they could have done a better job by averaging multiple benchmarks, I think the point of the article is that Intel seems to have finally conquered what analysts have considered its 'Achilles' heel': power consumption."
"The RAM scores seem highly unusual. Is there some kind of "cheating" going on with AnTuTu?"
On the topic of compilers:
"What's wrong with Intel getting ahead using better compiler technology?"
"Nothing, if we're talking about making real applications run faster. But that's not what we're talking about here.
What we're talking about here is the compiler removing portions of the benchmark, contrary to the intent of the benchmark. As a result, the benchmark results become meaningless."
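To make that dead-code concern concrete, here is a minimal C sketch (an assumed illustration, not AnTuTu's actual source) of how an optimizer can hollow out a timing loop whose result is never observed:

```c
/* Minimal sketch of the dead-code-elimination trap the commenters
 * describe: the loop computes a value that is never used, so an
 * aggressive optimizer may legally delete the loop entirely, and
 * the "benchmark" then reports a near-zero time. */
#include <stdio.h>
#include <time.h>

int main(void) {
    enum { N = 100000000 };

    clock_t start = clock();
    unsigned sum = 0;
    for (unsigned i = 0; i < N; i++)
        sum += i * 2654435761u;   /* "work" whose result is never read */
    clock_t end = clock();

    /* Because `sum` is dead, the elapsed time below may measure
     * nothing at all. Real benchmark suites guard against this by
     * making the result observable, e.g. printing `sum` or storing
     * it through a volatile sink before `end` is sampled. */
    printf("elapsed: %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}
```

Compiled at a high optimization level, the loop can vanish entirely, which is why a compiler-driven score jump deserves scrutiny rather than celebration.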
The discussion continues to heat up. If anything is clear, it is that every benchmark should be questioned, none should be relied on exclusively, and the recent headlines were more sensational than truthful.