New Machine Learning Inference Benchmarks Assess Performance Across a Wide Range of AI Applications
A consortium of more than 40 leading companies and university researchers has introduced MLPerf Inference v0.5, the first industry-standard machine learning benchmark suite for measuring system performance and power efficiency.
The benchmark suite covers models applicable to a wide range of applications, including autonomous driving and natural language processing, across a variety of form factors: smartphones, PCs, edge servers, and cloud computing platforms in the data center. MLPerf Inference v0.5 uses a combination of models and data sets to ensure that the results are relevant to real-world applications.
By measuring inference, this benchmark suite provides valuable information on how quickly a trained neural network can process new data to produce useful results. MLPerf previously released the companion Training v0.5 benchmark suite, which led to 29 results measuring the performance of cutting-edge systems for training deep neural networks.
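To make that notion of inference speed concrete, here is a minimal sketch that times repeated forward passes of a pretrained model and reports latency and throughput. The model, input shape, and iteration counts are illustrative assumptions, not the benchmark's actual workloads or measurement rules.

```python
# Minimal sketch: timing single-image inference with a pretrained model.
# ResNet-50 and the run counts are illustrative assumptions, not the
# MLPerf reference workloads or measurement rules.
import time

import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)  # one ImageNet-sized input

with torch.no_grad():
    for _ in range(10):  # warm-up passes, excluded from timing
        model(x)
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        model(x)
    elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / n:.1f} ms")
print(f"throughput:   {n / elapsed:.1f} images/s")
```

The actual suite drives systems with its Load Generator under several query scenarios rather than a fixed loop like this one.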
MLPerf Inference v0.5 consists of five benchmarks, focused on three common ML tasks:
- Image Classification - predicting a “label” for a given image from the ImageNet dataset, such as identifying items in a photo (see the classification sketch after this list).
- Object Detection - picking out an object using a bounding box within an image from the MS-COCO dataset, commonly used in robotics, automation, and automotive.
- Machine Translation - translating sentences between English and German using the WMT English-German benchmark, similar to auto-translate features in widely used chat and email applications.
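To illustrate the image classification task, the following sketch predicts a class label for a single photo using a pretrained model with standard ImageNet preprocessing. The file name and model choice are hypothetical placeholders, not the benchmark's designated reference models.

```python
# Minimal sketch of the image-classification task: predict a label
# for one image. "photo.jpg" and MobileNetV2 are illustrative
# placeholders; the benchmark's reference models are defined at mlperf.org.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.mobilenet_v2(pretrained=True).eval()
image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logits = model(image)

print("predicted ImageNet class index:", logits.argmax(dim=1).item())
```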
MLPerf provides benchmark reference implementations that define the problem, model, and quality target, and provide instructions to run the code. The reference implementations are available in ONNX, PyTorch, and TensorFlow frameworks. The mlperf.org website provides a complete specification with guidelines on the reference code and will track future results.
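As a sketch of what running one of these models might look like with the ONNX toolchain, the snippet below loads an ONNX model with ONNX Runtime and runs a single query. The file name and input shape are placeholders; the authoritative models and run instructions are those published at mlperf.org.

```python
# Minimal sketch: running an ONNX model with ONNX Runtime.
# "model.onnx" and the input shape are placeholders; consult the
# mlperf.org reference implementations for the real models and steps.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example input

outputs = session.run(None, {input_name: x})
print("output shape:", outputs[0].shape)
```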
The inference benchmarks were created over the last 11 months through the contributions and leadership of members, including representatives from: Arm, Cadence, Centaur Technology, Dividiti, Facebook, General Motors, Google, Habana Labs, Harvard University, Intel, MediaTek, Microsoft, Myrtle, Nvidia, Real World Insights, University of Illinois at Urbana-Champaign, University of Toronto, and Xilinx.