Docker allows users to push and pull images from a registry. The primary, and default, registry is Docker Hub. While this is great for most things, there are times when you don’t want to push your image to a public location.
With Docker Registry 2.0 you can keep your images private. 2.0 supports storing these images on local storage or with a cloud storage provider like Azure or Amazon. For the purposes of this tutorial I’m going to go ahead and use my S3 account. This keeps us from relying on persistent disks.
I will be hosting my Docker Registry at ProfitBricks. First, ensure you have a virtual machine running Ubuntu and Docker. You can follow my tutorial here on how to build that out.
Docker Registry 2.0
A core component of this release is a new implementation for storing and distributing Docker images, which speeds up image distribution, a common pain point. The API has also been enhanced with some interesting features:
- Image Verification
- Resumable Push
- Resumable Pull
- Layer Upload De-duplication
You can read more on the API here. There are some great features for making builds easier.
Getting the Bits
Since we don’t care to run the default registry image provided as a container on Docker Hub, we will build from source. Shell into your host and
cd /tmp then:
git clone https://github.com/docker/distribution.git
This will pull the repo down locally. All of your next set of work will be done from within this repo.
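If you prefer a reproducible build, you can pin to a release tag instead of building whatever happens to be on the default branch. The tag name below is just an example; run git tag first to see what releases actually exist in the repo:

```shell
cd /tmp/distribution
# List the release tags available in the repo
git tag
# Check out a specific release (v2.0.0 is an example tag; pick one from the list above)
git checkout v2.0.0
```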
Generate TLS Certificates for the Registry
Before we configure and install Docker Registry 2.0, we need some TLS certs to keep it all secure. Change into the distribution/ directory and create a directory named certs.
Next, generate a self-signed TLS certificate and key:
openssl req \
  -newkey rsa:2048 -nodes -keyout certs/spc.key \
  -x509 -days 365 -out certs/spc.crt
These will be copied into the container when we build it.
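To double-check what you just generated, openssl can print the certificate’s subject and validity window before you bake it into the image:

```shell
# Inspect the self-signed certificate; the dates should cover the next 365 days
openssl x509 -in certs/spc.crt -noout -subject -dates
```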
Installing and Configuring Docker Registry 2.0
Still within
/tmp/distribution, use your favorite text editor to open
cmd/registry/config.yml. We need to tweak the default file so that it reflects our AWS credentials and sets up TLS.
Go ahead and strip out anything in the configuration that isn’t presently needed. Take the approach of introducing configuration changes as you need them, rather than assuming you’ll need something and leaving a default config in place.
Since I’m not using Redis for my cache (I’ll cover that in a future tutorial), I have slimmed my
storage section down to just S3. I don’t want to keep this on the local file system, as we’ve adopted the portability pattern: I can now migrate my registry across providers who support Docker and not have to deal with migrating persistent data.
I will scale my configuration down to this:
version: 0.1
loglevel: debug
storage:
  s3:
    accesskey: awsaccesskey
    secretkey: awssecretkey
    region: us-west-2
    bucket: bucketname
    encrypt: true
    secure: true
    v4auth: true
    chunksize: 5242880
    rootdirectory: /
http:
  addr: 0.0.0.0:5000
  debug:
    addr: 0.0.0.0:5001
  tls:
    certificate: /go/src/github.com/docker/distribution/certs/spc.crt
    key: /go/src/github.com/docker/distribution/certs/spc.key
notifications:
  endpoints:
    - name: local-8082
      url: http://localhost:5003/callback
      headers:
        Authorization: [Bearer <an example token>]
      timeout: 1s
      threshold: 10
      backoff: 1s
      disabled: true
    - name: local-8083
      url: http://localhost:8083/callback
      timeout: 1s
      threshold: 10
      backoff: 1s
      disabled: true
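One note on credentials: since config.yml gets copied into the image at build time, you may not want real AWS keys baked into it. The registry is documented as accepting environment variables of the form REGISTRY_SECTION_KEY as configuration overrides (confirm this against the configuration docs for the version you checked out), which would let you keep placeholders in the file and supply real values at run time. A hedged sketch of what that run command could look like:

```shell
# Sketch: override the S3 credentials from the environment instead of
# hard-coding them in config.yml. The REGISTRY_* variable names assume the
# environment-override convention from the distribution configuration docs.
docker run -d --name registry -p 5000:5000 \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=awsaccesskey \
  -e REGISTRY_STORAGE_S3_SECRETKEY=awssecretkey \
  registry
```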
Save it, then head back to the root
/tmp/distribution directory and run:
docker build --rm=true -t registry .
Remember the dot at the end. You should now see the container image building. The
--rm=true flag instructs Docker to remove intermediate containers after a successful build. This keeps things a bit more tidy.
Once it has built, you should now be able to get a list of images and see it there:
[email protected]:/mnt/registry/distribution# docker images
REPOSITORY   TAG      IMAGE ID       CREATED              VIRTUAL SIZE
registry     latest   de0810a02c5e   About a minute ago   542.7 MB
Now you can simply run the following command to start it up:
docker run -d -i --name registry -p 5000:5000 registry
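Once the container is up, the v2 API gives you a quick way to confirm the registry is answering. A GET on /v2/ should come back with an empty JSON body and a 200 status; -k tells curl to accept our self-signed certificate:

```shell
# A healthy v2 registry should return {} for this endpoint;
# -i prints the response headers as well so you can see the status line
curl -k -i https://localhost:5000/v2/
```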
Working with the Registry
You can do some basic testing with the registry by tagging and pushing an image. You should see the root directory of your S3 bucket begin to populate with data.
Just as a basic test, I use the hello-world image that Docker uses:
docker run hello-world
docker tag hello-world:latest localhost:5000/hello-mine:latest
docker push localhost:5000/hello-mine:latest
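If the push succeeds, you can complete the round trip by removing the local tag, pulling it back, and asking the v2 API which tags the registry now holds. One caveat: with a self-signed certificate the Docker daemon may refuse to talk to the registry until you trust the cert, for example by copying spc.crt to /etc/docker/certs.d/localhost:5000/ca.crt and restarting Docker:

```shell
# Remove the local copy, then pull it back from our registry
docker rmi localhost:5000/hello-mine:latest
docker pull localhost:5000/hello-mine:latest

# Ask the v2 API which tags exist for the repository (-k accepts the self-signed cert)
curl -k https://localhost:5000/v2/hello-mine/tags/list
```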