Jerome is a senior engineer at Docker, where he helps others to containerize all the things. In another life he built and operated Xen clouds when EC2 was just the name of a plane, developed a GIS to deploy fiber interconnects through the French subway, managed commando deployments of large-scale video streaming systems in bandwidth-constrained environments such as conference centers, operated and scaled the dotCloud PaaS, and performed various other feats of technical wizardry. When annoyed, he threatens to replace things with a very small shell script.
You have installed Docker, you know how to run containers, you have written Dockerfiles to build container images for your applications (or parts of your applications), and perhaps you are even using Compose to describe your application stack as an assemblage of multiple containers.
But how do you go to production? What modifications are necessary in your code to allow it to run on a cluster? (Spoiler alert: very few, if any.) How does one set up such a cluster, anyway? Then how can we use it to deploy and scale applications with high availability requirements? What about logging, metrics, and other production-related requirements?
In this workshop, we will answer those questions using tools from the Docker ecosystem, with a strong focus on the native orchestration capabilities available since Docker Engine 1.12, aka "Swarm Mode."
The whole workshop will use "real-world" demo applications with web frontends, web services, background workers, and stateful data stores, in order to cover a wide gamut of use cases.
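To give a flavor of what such a stack description looks like, here is a minimal sketch of a Compose file (version 3 format, as used with `docker stack deploy` in Swarm Mode) for an application of that shape. The image names `myapp/frontend` and `myapp/worker` are hypothetical placeholders, not the actual workshop demo images:

```yaml
version: "3"
services:
  frontend:                   # web frontend, hypothetical image
    image: myapp/frontend
    ports:
      - "80:80"
    deploy:
      replicas: 3             # run 3 load-balanced copies across the cluster
  worker:                     # background worker, hypothetical image
    image: myapp/worker
    deploy:
      replicas: 2
  redis:                      # stateful data store
    image: redis:alpine
```

Such a file can be deployed to a Swarm cluster with `docker stack deploy -c docker-compose.yml myapp`; the `deploy` sections are ignored by plain `docker-compose up` and only take effect in Swarm Mode.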
Come with your laptop! You don't need to install anything before the workshop, as long as you have a web browser and an SSH client.