Ray is a Developer Advocate for the Google Cloud Platform. Ray gained extensive hands-on experience in cross-industry enterprise systems integration delivery and management during his time at Accenture, where he managed full-stack application development, DevOps, and ITOps. Ray specialized in middleware, big data, and PaaS products during his time at Red Hat while contributing to open source projects, such as Infinispan. Aside from technology, Ray enjoys traveling and adventures.
Follow Ray on Twitter @saturnism
Kubernetes is a powerful, open source container orchestration / cluster management tool created by Google. It draws upon all the lessons learned from nearly a decade of using containers at Google. Kubernetes handles a number of failure scenarios gracefully, from a crashed process to the failure of a cluster node! We'll show this with a real Raspberry Pi computing cluster running Kubernetes - and play real-life chaos monkey by pulling the plugs!
In this session, we'll not only look at container orchestration with Kubernetes, but also demonstrate its failure handling by pulling the plugs on random nodes of a Raspberry Pi computing cluster:
- Overview of Kubernetes
- Process resource isolation to prevent a runaway process from affecting others
- Using a replication controller to ensure a crashed process is restarted
- Who wants to pull a network or power plug from a computing cluster?
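To make the replication-controller idea concrete, here is a minimal sketch of a manifest that keeps three replicas running (the name, labels, and image are illustrative, not from the session):

```shell
# Write a minimal replication controller manifest. With a live cluster,
# `kubectl create -f nginx-rc.yaml` would keep 3 replicas running,
# restarting any pod that crashes or is lost with its node.
cat > nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3          # desired count; Kubernetes reconciles toward this
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
grep 'replicas:' nginx-rc.yaml
```

Deleting a pod (or unplugging its node) drops the observed count below `replicas: 3`, and the controller schedules a replacement elsewhere.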
gRPC is a high performance, open source, general RPC framework that puts mobile and HTTP/2 first. gRPC is based on many years of Google's experience in building distributed systems - it is designed to be low-latency, bandwidth- and CPU-efficient, to create massively distributed systems that span data centers, as well as to power mobile apps, real-time communications, IoT devices, and APIs. It's also interoperable across multiple languages.
But beyond the fact that it's more efficient than REST, we'll look into how to use gRPC's streaming API, where you can establish server-side streaming, client-side streaming, and bidirectional streaming! This allows developers to build sophisticated real-time applications with ease.
In addition to learning gRPC and HTTP/2 concepts through code and demonstrations, we'll also dive deep into integration with existing build systems such as Maven and Gradle, as well as frameworks such as Spring Boot and RxJava:
- Configuring projects to generate gRPC stub code
- Using Protobuf3 to define services
- Creating synchronous and asynchronous services, with streaming
- Load balancing
- Interceptors
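As a sketch of what a Protobuf3 service definition looks like, here is a hypothetical echo service (names are illustrative) covering all four gRPC call styles; stub code would then be generated by protoc with the grpc plugin for your language:

```shell
# Write a Protobuf3 service definition showing the four gRPC call styles.
cat > echo.proto <<'EOF'
syntax = "proto3";

// Four kinds of RPCs: unary, server-streaming, client-streaming, bidirectional
service EchoService {
  rpc Echo (EchoRequest) returns (EchoResponse);                      // unary
  rpc ServerStream (EchoRequest) returns (stream EchoResponse);       // server-side streaming
  rpc ClientStream (stream EchoRequest) returns (EchoResponse);       // client-side streaming
  rpc BidiStream (stream EchoRequest) returns (stream EchoResponse);  // bidirectional streaming
}

message EchoRequest  { string message = 1; }
message EchoResponse { string message = 1; }
EOF
grep -c '^  rpc' echo.proto
```

The `stream` keyword on a request or response type is all it takes to switch a method from unary to streaming; the generated stubs expose the corresponding synchronous or asynchronous APIs.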
Ray lived in a 180-sqft (~18m2) micro-studio near New York. It had no kitchen, but it did come with free (albeit slow) Wi-Fi. Ray is a Developer Advocate at Google and travels a lot; he often needs to build Docker images on airplanes, in coffee shops, or in hotels with unpredictable Wi-Fi. Learn how Ray adapted to working with large Docker images over slow Wi-Fi by utilizing Docker Machine in the cloud, along with a bunch of tips and tricks!
This strategy has also saved numerous demos - Ray once demoed Docker containers and Kubernetes while tethered to a colleague's phone connection.
In this session, you'll learn about:
- Docker Machine in the cloud
- How to reduce your local bandwidth needs when building an image
- How to use Docker ONBUILD
- Sharing data between a local laptop and a remote Docker machine
- Strategies with ENTRYPOINT and STDIN/STDOUT
- Working with Kubernetes
- Reducing layer sizes
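One bandwidth-saving trick from the list above can be sketched with ONBUILD: a builder image whose heavy steps are deferred until a child image builds, so the laptop only uploads a small build context to the remote Docker engine. This is a hypothetical builder Dockerfile, not the exact one from the session:

```shell
# A hypothetical Maven builder image. The big base layers already live on
# the remote Docker Machine; ONBUILD instructions run only when a child
# image is built FROM this one, so only the (small) project context needs
# to travel over slow Wi-Fi.
cat > Dockerfile.builder <<'EOF'
FROM maven:3-jdk-8
# Deferred steps: executed during the child image's build, on the remote engine
ONBUILD COPY . /usr/src/app
ONBUILD WORKDIR /usr/src/app
ONBUILD RUN mvn -q package
EOF
grep -c '^ONBUILD' Dockerfile.builder
```

A project's own Dockerfile can then be as short as `FROM my-builder`, which triggers the three deferred instructions.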
Today's technology is moving fast toward using containers and managing fleets of containers. This session gives hands-on experience creating containers with Docker and deploying a fleet of containerized Java microservices into Kubernetes. You'll get to:
- Build a Java microservice
- Build a Docker container
- Deploy the container into a private container registry
- Deploy a fleet of containerized microservices
- Learn service discovery
- Perform rolling updates, canary deployments, and rollbacks
In addition, we will also explore advanced features such as:
- Secrets - securely provide your application with credentials and configuration
- Daemon sets - run the same workload across all of the cluster nodes
- PetSets - run stateful applications such as Cassandra or ZooKeeper
- Persistent volumes / claims - store persistent data using volume mounts in the pods
- Health checks - check whether your application is alive and ready to serve traffic
- Autoscaling - automatic horizontal pod scaling based on the CPU utilization metric
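To illustrate the health checks item, here is a pod-spec fragment sketch with both probe kinds (the container name, image, paths, and port are hypothetical):

```shell
# Write a pod-spec fragment showing liveness and readiness probes.
cat > probes.yaml <<'EOF'
containers:
- name: my-service
  image: gcr.io/my-project/my-service:1.0
  livenessProbe:            # is the process still alive? restart it if not
    httpGet: { path: /healthz, port: 8080 }
    initialDelaySeconds: 15
  readinessProbe:           # is it ready? only then route traffic to it
    httpGet: { path: /ready, port: 8080 }
EOF
grep -c 'Probe:' probes.yaml
```

A failing liveness probe gets the container restarted; a failing readiness probe merely removes the pod from service endpoints until it recovers.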
The lab can be self-paced - choose your own adventure depending on how familiar you are with Kubernetes.