Develop tensorflow models with or without a GPU accelerator using the same Docker image. 🥳
Use

```shell
docker build -t universal-tensorflow-image .
```

to build the image,

```shell
docker run --rm --runtime=nvidia universal-tensorflow-image python3 test_tensorflow.py
```

to test tensorflow using the GPU, and

```shell
docker run --rm universal-tensorflow-image python3 test_tensorflow.py
```

to test tensorflow using the CPU.

Note that the only difference between the CPU and GPU run is the runtime.
Docker images for tensorflow commonly come in two flavours: `cpu.Dockerfile` and `gpu.Dockerfile`. Maintaining two images in parallel is cumbersome and increases the risk of discrepancies between runtime environments.
The `Dockerfile` provides a universal setup so you can use the same image irrespective of whether you're using a GPU. It installs `tensorflow-gpu` and symlinks the required CUDA library stubs to the location where tensorflow expects to find them. That means `tensorflow-gpu` can be used even if you start a container from the image without the nvidia runtime. When you do use the nvidia runtime, the stubs are overwritten by the real libraries and you can access the GPU seamlessly.
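The stub-symlink approach could be sketched roughly as follows. This is a hypothetical illustration, not the repository's actual `Dockerfile`: the base image, package versions, and stub/symlink paths are assumptions.

```dockerfile
# Hypothetical sketch of the stub-symlink technique described above.
# Base image and paths are assumptions, not the repository's actual Dockerfile.
FROM nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# tensorflow-gpu dynamically links against libcuda.so.1, which is normally
# injected into the container by the nvidia runtime at startup.
RUN pip3 install tensorflow-gpu

# Symlink the CUDA driver stub so that importing tensorflow also works
# WITHOUT the nvidia runtime. When the container runs with
# --runtime=nvidia, the real driver library takes precedence.
RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/lib/libcuda.so.1 \
    && ldconfig
```

Without the stub, importing `tensorflow-gpu` in a plain `docker run` would fail at load time because `libcuda.so.1` is absent; with it, the import succeeds and tensorflow simply reports no visible GPUs.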
You may want to stick with two separate images if image size matters to you: the universal tensorflow image is the same size as the GPU image, while the CPU image is substantially smaller.