Compose has commands for managing the whole lifecycle of your application:
- Start, stop services
- View the status of running services
- Stream the log output of running services
- Run a one-off command on a service
To bring up the dev environment initially:
$ docker-compose -f docker-compose.dev.yml up -d
Or, to rebuild and then bring up the containers:
$ docker-compose -f docker-compose.dev.yml up --build -d
Note: The -f flag specifies a custom compose file, while the -d flag brings the containers up in detached mode. Detached mode lets you run other commands in the same shell (you may want to run without -d to see Docker's log output, and open another terminal tab to run commands).
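If you did start with -d, you can still stream the logs from another terminal (the web service name is the one used elsewhere in this repo):

```shell
$ docker-compose -f docker-compose.dev.yml logs -f web
```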
The dev environment runs the server using runserver_plus (although you can always disable that in docker-compose and use the native runserver). It uses the same PostgreSQL database that production uses. Also, start-dev.sh runs migrations prior to starting the server; modify that shell script as you wish to specify the dev tools you'd like to use.
NOTE: Using the --build flag creates a new image each time. Run inv rm_images to clean up; that command is also run by a cron job once a day. Regardless, try to minimize use of --build; most code changes are reloaded through the server. Changes to requirements, some setting definitions, and migrations do require a rebuild.
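The daily cleanup mentioned above can be wired up with a crontab entry along these lines (the schedule and project path here are hypothetical):

```shell
# run the invoke cleanup task every day at 03:00; project path is hypothetical
0 3 * * * cd /path/to/project && inv rm_images
```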
You can bring the development environment down using invoke:
$ invoke dev_stop
To use breakpoints, insert one like normal using ipdb:
import ipdb; ipdb.set_trace()
When bringing up the containers, use the --service-ports flag to ensure the service's ports are mapped correctly; the ipdb shell then starts when the breakpoint is hit (for example, when you're running tests).
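Putting that together, a test run that honors breakpoints might look like this (service and compose file names are the ones used elsewhere in this repo):

```shell
$ docker-compose -f docker-compose.dev.yml run --service-ports web python manage.py test
```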
If you start a service configured with links, the run command first checks to see if the linked service is running and starts the service if it is stopped. Once all the linked services are running, the run executes the command you passed it. So, for example, you could run:
$ docker-compose run db psql -h db -U postgres
Which opens an interactive PostgreSQL shell for the linked db container. Following the same pattern, the following starts a Django shell session. Due to limitations regarding where you can start a shell, use shell_plus when appropriate:
$ docker-compose run web python manage.py shell
Open a bash shell:
$ docker-compose run web /bin/bash
Add the --service-ports flag to map the service's ports to the host.
Running tests is simple. When bringing the containers up, you can specify more than one docker-compose file with the -f flag. So, the following command brings up the dev environment, with the test file's settings overriding the dev file's where they conflict:
$ docker-compose -f docker-compose.dev.yml -f docker-compose.test.yml up
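The contents of docker-compose.test.yml aren't shown here, but an override file in this style typically only replaces the keys it needs to; later -f files win over earlier ones. A hypothetical sketch:

```yaml
# docker-compose.test.yml (hypothetical): override only the web command;
# everything else is inherited from docker-compose.dev.yml
services:
  web:
    command: python manage.py test
```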
Find a better way to interactively run tests!
Mounting a host directory can be useful for testing. For example, you can mount source code inside a container, then change the source code and see its effect on the application in real time. The directory on the host must be specified as an absolute path, and if the directory doesn't exist, the Docker Engine daemon automatically creates it for you.
$ docker run -v /Users/<path>:/<container path> ...
If the path already exists inside the container's image, the mount overlays it: the image's original contents at that path are hidden while the mount is in place.
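The same bind mount in docker-compose form is a volumes entry on the service. A hedged sketch (the host path is hypothetical, and /code is just a common mount point for Django projects):

```yaml
services:
  web:
    volumes:
      # mount the project source for live reloading; host path is hypothetical
      - /Users/you/simpleblog:/code
```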
Creating and mounting a data volume container
If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it’s best to create a named Data Volume Container, and then to mount the data from it.
$ docker create -v /dbdata --name dbstore aleccunningham/postgres /bin/true
$ docker run -d --volumes-from dbstore --name db1 aleccunningham/postgres
$ docker run -d --volumes-from dbstore --name db2 aleccunningham/postgres
In this case, if the postgres image contains a directory called /dbdata, then mounting the volumes from the dbstore container hides the /dbdata files from the postgres image; only the files from the dbstore container are visible.
Backup, restore using Docker
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
Here you've launched a new container and mounted the volume from the dbstore container. You've then mounted a local host directory as /backup. Finally, you've passed a command that uses tar to back up the contents of the dbdata volume to a backup.tar file inside the /backup directory. When the command completes and the container stops, you're left with a backup of the dbdata volume.
You could then restore it to the same container, or to another one you've made elsewhere. Create a new container:
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
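The --strip 1 flag drops the leading dbdata/ path component when extracting, so the files land directly in the target volume. The same tar round-trip can be sketched locally, outside Docker (paths here are throwaway examples; --strip-components=1 is the long spelling of --strip 1):

```shell
# create a throwaway "volume" with some data
mkdir -p /tmp/dbdata-demo/dbdata
echo "hello" > /tmp/dbdata-demo/dbdata/data.txt

# back it up, as the dbstore backup container does
tar cvf /tmp/dbdata-demo/backup.tar -C /tmp/dbdata-demo dbdata

# restore into a fresh directory, stripping the leading dbdata/ component
mkdir -p /tmp/dbdata-demo/restore
tar xvf /tmp/dbdata-demo/backup.tar -C /tmp/dbdata-demo/restore --strip-components=1

# the file is restored at the top level, not under dbdata/
cat /tmp/dbdata-demo/restore/data.txt
```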
To run migrations, use docker-compose run once the containers are linked and running:
$ docker-compose -f docker-compose.dev.yml run web python manage.py makemigrations
$ docker-compose -f docker-compose.dev.yml run web python manage.py migrate
Included are a few shell scripts for Postgres. To create a backup of the database, run:
$ docker-compose -f docker-compose.dev.yml run db backup
To see a list of backups:
$ docker-compose -f docker-compose.dev.yml run db list-backups
And, to restore to a specific backup:
$ docker-compose -f docker-compose.dev.yml run db restore filename.sql
Run any of those commands in prod by omitting the -f flag; docker-compose then falls back to the default compose file.
If you would like to copy the files on the Postgres container to your host system, use docker cp:
$ docker cp <containerId>:/backups /host/path/target # find the id using docker ps
Working with containers
Pushing a specific application's image
In this repository, the Django app being served is called simpleblog; its Dockerfile lives in the project's folder. To speed up the docker-compose flow, it's important to build and push an updated project image:
- Make changes to the Django project
- cd into the project directory that houses the Dockerfile
- Run docker build -t aleccunningham/code .
- Run docker push aleccunningham/code
That will update the image on Docker Hub. Now, in our web container definition, we can replace build: ./td/ with image: aleccunningham/code; each version is cached on your machine and updates when a new version is pushed. You can do this with multiple projects in the same parent directory, as long as each has its own Dockerfile. If needed, you can use the -f flag on docker build to specify a custom Dockerfile.
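In the compose file, the swap described above is just replacing the build key with image (the ./td/ path is the one being replaced; the service layout is a sketch):

```yaml
services:
  web:
    # was: build: ./td/
    image: aleccunningham/code
```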
Working on a specific container
Use the --no-deps flag to start just one individual container when running a command. For example:
$ docker-compose run --no-deps web python manage.py shell
Which would open a Django shell without bringing up any of the other containers.
There are multiple ways to boot the services externally. All, however, are basically the same: prep a barebones server for Docker and then run docker-compose. For production, software like Ansible is easiest, especially when load balancing or using swarms. In development, a service like hypersh works really well: in just a few minutes you can assign a public IP address to the service(s) and bring them online (underneath the magic are AWS servers). The best part of hyper is its similarity to Docker's CLI. Assign a public IP using hyper fip, then bring the service up:
$ hyper fip allocate 1
184.108.40.206
Then, add a fip option to your docker-compose file:
services:
  web:
    image: wordpress:latest
    fip: 220.127.116.11
    links:
      - db:mysql
    depends_on:
      - db
    ports:
      - "8080:80"
Now run hyper compose up and visit your web service at the allocated IP.
If you would like to serve your Django project through Gunicorn and Nginx, replicating a prod environment, just run the up command with the normal compose file:
$ docker-compose up -d
This will use the Nginx config file located in /etc/nginx, and builds from the Dockerfile located there. All other commands can be used as in the dev environment, just omitting docker-compose.dev.yml, as it will fall back to the prod file.
Note, this does not run migrations on start.
To restore the production database to a local Postgres database, open a bash shell and run the following:
$ createdb NAME_OF_DATABASE
$ psql NAME_OF_DATABASE < NAME_OF_BACKUP_FILE
Ansible supports Docker and can read docker-compose files and, with some help, build the full stack. Alternatively, you can create a playbook that configures a more traditional server for Docker. A simple example is below:
- name: Make sure apt-transport-https is installed
  apt:
    pkg: apt-transport-https
    state: installed

- name: Add Docker repository key
  apt_key:
    id: 36A1D7869245C8950F966E92D8576A8BA88D21E9
    keyserver: hkp://keyserver.ubuntu.com:80
    state: present

- name: Add Docker repository and update apt cache
  apt_repository:
    repo: deb http://get.docker.io/ubuntu docker main
    update_cache: yes
    state: present

- name: Install lxc-docker
  apt:
    pkg: lxc-docker
    state: installed

- name: Install pip
  apt:
    pkg: python-pip
    state: installed

- name: Install docker-py
  pip:
    name: docker-py

- name: Make sure docker is running
  service:
    name: docker
    state: started

Current configuration inspired by @jcalazan on GitHub.
Further exploration could use Ansible to create swarm clusters from bare servers, install Docker, and then deploy using your docker-compose file.