Docker Volume Plugin for Google Cloud Storage
APACHE-2.0 License
An easy-to-use, cross-platform, and highly optimized Docker Volume Plugin for mounting Google Cloud Storage buckets.
## Installation

`gcsfs` is distributed on Docker Hub, allowing a seamless install:

```console
$ docker plugin install ofekmeister/gcsfs
```
## Usage

You will also need at least one service account key.
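For reference, a service account key file is a JSON document of roughly the following shape (the values below are placeholders, abbreviated from the standard format Google generates):

```json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "abc123",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "my-sa@my-project.iam.gserviceaccount.com",
  "client_id": "1234567890",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```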
Create a volume with the key contents:
```console
$ docker volume create -d ofekmeister/gcsfs -o key="$(cat service_account_key_file)" <BUCKET_NAME>
```
or via `docker-compose`:

```yaml
version: '3.4'

volumes:
  mybucket:
    name: <BUCKET_NAME>
    driver: ofekmeister/gcsfs
    driver_opts:
      key: ${KEY_CONTENTS_IN_ENV_VAR}
```
Then create a container that uses the volume:
```console
$ docker run -v <BUCKET_NAME>:/data --rm -d --name gcsfs-test alpine tail -f /dev/null
```
or via `docker-compose`:

```yaml
services:
  test:
    container_name: gcsfs-test
    image: alpine
    entrypoint: ['tail', '-f', '/dev/null']
    volumes:
      - mybucket:/data
```
At this point you should be able to access your bucket:
```console
$ docker exec gcsfs-test ls /data
```
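If the service account has write access, a quick round trip through the mount confirms the bucket is writable (container name taken from the example above; the file name is arbitrary):

```console
$ docker exec gcsfs-test sh -c 'echo hello > /data/hello.txt'
$ docker exec gcsfs-test cat /data/hello.txt
hello
```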
Alternatively, you can mount a directory of service account keys and reference the file name.
First disable the plugin:

```console
$ docker plugin disable ofekmeister/gcsfs
```

then set the `keys.source` option:

```console
$ docker plugin set ofekmeister/gcsfs keys.source=/path/to/keys
```
If you don't yet have the plugin, this can also be done during the installation:

```console
$ docker plugin install ofekmeister/gcsfs keys.source=/path/to/keys
```
Note: On Windows you'll need to use `host_mnt` paths, e.g. `C:\path\to\keys` would become `/host_mnt/c/path/to/keys`.
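The translation is mechanical (drive letter lowercased, backslashes flipped to forward slashes); a small POSIX shell sketch, with `winpath` as a hypothetical example path:

```shell
# Convert a Windows path to the /host_mnt form used by Docker Desktop.
winpath='C:\path\to\keys'                                            # example input
drive=$(printf '%s' "$winpath" | cut -c1 | tr '[:upper:]' '[:lower:]')  # "C" -> "c"
rest=$(printf '%s' "$winpath" | cut -c3- | tr '\\' '/')                 # "\path\to\keys" -> "/path/to/keys"
converted="/host_mnt/${drive}${rest}"
echo "$converted"   # → /host_mnt/c/path/to/keys
```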
Assuming there is a file named `credentials.json` in `/path/to/keys`, you can now create a volume by doing:

```console
$ docker volume create -d ofekmeister/gcsfs -o key=credentials.json <BUCKET_NAME>
```
or via `docker-compose`:

```yaml
version: '3.4'

volumes:
  mybucket:
    name: <BUCKET_NAME>
    driver: ofekmeister/gcsfs
    driver_opts:
      key: credentials.json
```
## Options

- `key` - The file name of the key in the `keys.source` directory, or else the raw key contents if no such file exists.
- `bucket` - The Google Cloud Storage bucket to use. If this is not specified, the volume name is assumed to be the desired bucket.
- `flags` - Extra flags to pass to the underlying gcsfuse mount, e.g. `-o flags="--limit-ops-per-sec=10 --only-dir=some/nested/folder"`.
- `debug` - A timeout (in seconds) used only for testing. This will attempt to mount the bucket, wait for logs, then un-mount and print debug info.

## Permissions

In order to access anything stored in Google Cloud Storage, you will need service accounts with appropriate IAM roles. Read more about them here. If writes are needed, you will usually select `roles/storage.admin` scoped to the desired buckets.
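One way to grant `roles/storage.admin` on a single bucket (rather than project-wide) is `gsutil iam ch`; the email and bucket below are placeholders:

```console
$ gsutil iam ch serviceAccount:<EMAIL>:roles/storage.admin gs://<BUCKET_NAME>
```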
The easiest way to create service account keys, if you don't yet have any, is to run:

```console
$ gcloud iam service-accounts list
```

to find the email of a desired service account, then run:

```console
$ gcloud iam service-accounts keys create <FILE_NAME>.json --iam-account <EMAIL>
```

to create a key file.
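If you already have a key file and just need the account's email (e.g. to pass as `--iam-account`), it is recorded in the key itself under `client_email`; a sketch using a minimal stand-in key file:

```shell
# Create a minimal stand-in key file (real keys contain more fields):
cat > sample-key.json <<'EOF'
{"type": "service_account", "client_email": "my-sa@my-project.iam.gserviceaccount.com"}
EOF
# Read the email back out; jq would work just as well:
email=$(python3 -c "import json; print(json.load(open('sample-key.json'))['client_email'])")
echo "$email"   # → my-sa@my-project.iam.gserviceaccount.com
```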
Tip: If you have a service account with write access that you want to share with containers that should only be able to read, you can append the standard `:ro` to the volume mount to avoid creating a new read-only service account.
## License

`gcsfs` is distributed under the terms of both the MIT License and the Apache License, Version 2.0, at your option.
See csi-gcs.