Fork of MLPerf Training
Apache-2.0 License
This is a repository of reference implementations for the MLPerf benchmark. These implementations are valid as starting points for benchmark implementations but are not fully optimized and are not intended to be used for "real" performance measurements of software frameworks or hardware.
This release is very much an "alpha" release -- it could be improved in many ways. The benchmark suite is still being developed and refined; see the Suggestions section below to learn how to contribute.
We anticipate a significant round of updates at the end of May based on input from users.
We provide reference implementations for each of the 7 benchmarks in the MLPerf suite.
Each reference implementation provides the following:

* Code that implements the model in at least one framework.
* A Dockerfile which can be used to run the benchmark in a container.
* A script which downloads the appropriate dataset.
* A script which runs and times training the model.
* Documentation on the dataset, model, and machine setup.
These benchmarks have been tested on the following machine configuration:

* 16 CPUs and one NVIDIA P100 GPU.
* Ubuntu 16.04, including Docker with NVIDIA GPU support.
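Before running anything, it can help to confirm that Docker can actually see the GPU. A minimal sanity check, assuming nvidia-docker is installed (the image is the stock NVIDIA CUDA image; this check is not part of the benchmark scripts themselves):

```sh
# Should list the P100 in the device table if the NVIDIA runtime is working.
nvidia-docker run --rm nvidia/cuda nvidia-smi
```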
Generally, a benchmark can be run with the following steps:
1. Download the dataset using `./download_dataset.sh`. This should be run outside of Docker, on your host machine, and from the directory it is in (it may make assumptions about the CWD).
2. Run `./verify_dataset.sh` to ensure the dataset was successfully downloaded.

Each benchmark will run until the target quality is reached and then stop, printing timing results. A minimal sketch of this workflow appears below.
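The following is a rough sketch of the generic workflow, not a definitive recipe: the benchmark directory name is illustrative, and the exact Docker build and run commands are documented with each individual benchmark.

```sh
# Illustrative directory name; substitute the benchmark you want to run.
cd some_benchmark

# Step 1: download the dataset on the host, from the benchmark's own directory.
./download_dataset.sh

# Step 2: confirm the dataset downloaded successfully.
./verify_dataset.sh

# The remaining steps (building and running the Docker image) are
# benchmark-specific; see the documentation in each benchmark directory.
```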
Some of these benchmarks are rather slow and can take a long time to run on the reference hardware (i.e., 16 CPUs and one P100). We expect to see significant performance improvements with more hardware and optimized implementations.
Suggestions

We are still in the early stages of developing MLPerf and we are looking for areas to improve, partners, and contributors. If you have recommendations for new benchmarks, or otherwise would like to be involved in the process, please reach out to [email protected]. For technical bugs or support, email [email protected].