Startup-Init


We can use autodock as a daemon for Docker automation. Start by running the autodock daemon:

$ docker run -d --name autodock prologic/autodock

Followed by linking the plugin:

$ docker run -d --link autodock prologic/autodock-cron

autodock listens for Docker events and discovers containers as they are created. The autodock-cron plugin specifically watches for newly created containers that have a CRON environment variable, schedules a job based on the cron expression supplied, and re-runs the container each time the schedule triggers.
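The discovery step can be approximated by hand with `docker events`; the filter below illustrates the kind of events autodock watches for, and is not autodock's actual implementation:

```shell
# Watch the Docker event stream for newly created containers
# (roughly the events autodock subscribes to internally).
docker events \
  --filter 'type=container' \
  --filter 'event=create' \
  --format '{{.ID}} {{.Actor.Attributes.name}}'
```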

Start a “Hello” Busybox Container:

$ docker run --name hello -e CRON="*/1 * * * *" busybox sh -c 'echo Hello'

Now autodock-cron schedules a timer that re-runs this container every minute until the container is removed. After about three minutes you should see the following in the logs:

$ docker logs hello
Hello
Hello
Hello

You can also use hypersh to run cron jobs against containers you’ve brought online via the hyper service. This feature is in beta; see the hypersh documentation for details.

Serve assets with S3 and CloudFront

Amazon makes it straightforward to communicate with its AWS services. In our case we use the django-storages-redux and boto packages to connect to the S3 bucket and to read from the CloudFront domain. The environment variables are as follows:

  • DJANGO_AWS_ACCESS_KEY_ID
  • DJANGO_AWS_SECRET_ACCESS_KEY
  • DJANGO_AWS_STORAGE_BUCKET_NAME
  • DJANGO_CLOUDFRONT_DOMAIN
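
In a Docker-based deployment these values are typically passed to the Django container at run time. A sketch of such an invocation, where the image name and all values are placeholders:

```shell
# Supply the AWS/CloudFront settings to the Django container.
# "myapp_django" and every value shown are placeholders.
docker run -d \
  -e DJANGO_AWS_ACCESS_KEY_ID=AKIA... \
  -e DJANGO_AWS_SECRET_ACCESS_KEY=secret \
  -e DJANGO_AWS_STORAGE_BUCKET_NAME=my-bucket \
  -e DJANGO_CLOUDFRONT_DOMAIN=d1234.cloudfront.net \
  myapp_django
```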

Static file storage is also automatically set to the configured S3 bucket, and when you bring up the production environment, docker-entrypoint.sh runs collectstatic, ensuring the most up-to-date files are uploaded to the bucket and served from the CDN.
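A minimal sketch of the relevant part of such an entrypoint script, assuming a standard Django project layout (the actual docker-entrypoint.sh in your project may differ):

```shell
#!/bin/sh
# docker-entrypoint.sh (sketch): collect static assets into the
# configured S3 bucket, then hand off to the container's main command.
set -e

python manage.py collectstatic --noinput

# Run whatever command the container was started with (e.g. gunicorn).
exec "$@"
```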