Data Volumes

Mounting a host directory can be useful for testing. For example, you can mount source code inside a container, then edit the source on the host and see the effect on the application in real time. The host directory must be specified as an absolute path; if it doesn't exist, the Docker Engine daemon creates it for you automatically.

$ docker run -v /Users/<path>:/<container path> ...

If the container path already exists inside the container's image, the mounted host directory overlays but does not remove the pre-existing content. Once the mount is removed, the original content is accessible again. This is consistent with the expected behavior of the mount command.
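A quick sketch of that overlay behavior, using a hypothetical host directory /Users/src and a hypothetical image myimage whose /app directory already contains files:

```shell
# Hypothetical example: myimage ships files in /app.
# While the bind mount is in place, only the host files are visible:
$ docker run --rm -v /Users/src:/app myimage ls /app
# Run without the mount and the image's original /app content reappears:
$ docker run --rm myimage ls /app
```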

Creating and mounting a data volume container

If you have persistent data that you want to share between containers, or want to use from non-persistent containers, it's best to create a named data volume container and then mount the data from it.

$ docker create -v /dbdata --name dbstore aleccunningham/postgres /bin/true
$ docker run -d --volumes-from dbstore --name db1 aleccunningham/postgres
$ docker run -d --volumes-from dbstore --name db2 aleccunningham/postgres

In this case, if the postgres image contains a directory called /dbdata, then mounting the volumes from the dbstore container hides the /dbdata files from the postgres image. The result is that only the files from the dbstore container are visible.
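You can confirm the sharing from the running containers; a brief sketch, using the container names created above:

```shell
# Both db1 and db2 see the same files, served from the dbstore volume:
$ docker exec db1 ls /dbdata
$ docker exec db2 ls /dbdata
# A write through one container is visible from the other, because the
# volume is shared rather than copied:
$ docker exec db1 touch /dbdata/marker
$ docker exec db2 ls /dbdata/marker
```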

Backup and restore using Docker

$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

Here you’ve launched a new container and mounted the volume from the dbstore container. You’ve then mounted a local host directory as /backup. Finally, you’ve passed a command that uses tar to back up the contents of the dbdata volume to a backup.tar file inside the /backup directory. When the command completes and the container stops, you’re left with a backup of your dbdata volume.

You could then restore the backup to the same container, or to another one you’ve made elsewhere. Create a new container and un-tar the backup file into its data volume:
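The restore command below assumes a container named dbstore2 with a /dbdata volume already exists; one way to create it (a sketch, reusing the same image as above):

```shell
# Create a fresh container with an empty /dbdata volume to restore into
$ docker run -v /dbdata --name dbstore2 aleccunningham/postgres /bin/true
```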

$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"


To run migrations, use docker-compose run after the containers are already linked and running:

$ docker-compose -f <compose file> run web python manage.py makemigrations
$ docker-compose -f <compose file> run web python manage.py migrate

Included are a few shell scripts for Postgres. To create a backup of the database, run:

$ docker-compose -f <compose file> run db backup

To see a list of backups:

$ docker-compose -f <compose file> run db list-backups

And, to restore to a specific backup:

$ docker-compose -f <compose file> run db restore filename.sql

To run any of these commands in production, omit the -f option so that the default docker-compose.yml file is used.
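For example, assuming the default docker-compose.yml describes the production stack, the production form of the backup command simply drops the file argument:

```shell
# Uses the default docker-compose.yml in the current directory
$ docker-compose run db backup
```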

If you would like to copy the files on the Postgres container to your host system, use docker cp:

# find the container id using docker ps
$ docker cp <containerId>:/backups /host/path/target