Deploying Applications on Kubernetes

Deploying and managing highly-available and fault-tolerant applications at scale


Kubernetes is a container scheduler, and quite a lot more. We can use it to deploy our services, to roll out new releases without downtime, and to scale those services up or down. It is portable: it can run on a public or private cloud, on-premises, or in a hybrid environment. Kubernetes, in a way, makes your infrastructure vendor-agnostic. We can move a Kubernetes cluster from one hosting vendor to another with almost no changes to the deployment and management processes. Kubernetes can be easily extended to serve nearly any need. We can choose which modules we'll use, and we can develop additional features ourselves and plug them in.

If we choose to use Kubernetes, we decide to relinquish control. Kubernetes decides where to run workloads and how to reach the state we specify. Ceding that control allows Kubernetes to place replicas of a service on the most appropriate servers, to restart them when needed, and to scale them. We can say that self-healing is a feature included in its design from the start. Self-adaptation is coming as well; at the time of this writing it is still in its infancy, but it will soon be an integral part of the system.

Zero-downtime deployments, fault tolerance, high availability, scaling, scheduling, and self-healing should be more than enough to see the value in Kubernetes. Yet, that is only a fraction of what it provides. We can use it to mount volumes for stateful applications. It allows us to store confidential information as secrets. We can use it to validate the health of our services. It can load balance requests and monitor resources. It provides service discovery and easy access to logs. And so on and so forth. The list of what Kubernetes does is long and rapidly increasing. Together with Docker, it is becoming a platform that envelops the whole software development and deployment lifecycle.
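To make a couple of those capabilities concrete, here is a sketch of how a container spec can declare a health check and pull a value from a secret. Everything in it (the names, the image, the secret key) is an illustrative assumption, not material from the course:

```yaml
# Illustrative sketch only; all names, the image, and the secret key are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: health-demo
spec:
  containers:
    - name: app
      image: nginx:alpine
      env:
        - name: API_TOKEN            # injected from a Secret instead of hard-coded
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: api-token
      livenessProbe:                 # Kubernetes restarts the container if this fails
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
```

The point is the declarative style: we state the desired checks and sources, and Kubernetes enforces them.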

This will be a very fast-paced show & tell type of workshop. The objective is to get introduced to some of the major Kubernetes concepts and resources that will serve as a base for more detailed learning and practice.

This course is curated from a live session presentation.

What you will learn from this course

  • Building Docker Images
  • What Is A Container Scheduler
  • Running A Kubernetes Cluster Locally
  • Creating Pods
  • Scaling Pods With ReplicaSets
  • Using Services To Enable Communication Between Pods
  • Deploying Releases With Zero Downtime
  • Using Ingress To Forward Traffic
  • Using Volumes To Access Host's File System
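
As a preview of the resources listed above, the simplest of them, a Pod, is described declaratively in YAML. The manifest below is an illustrative sketch; the names and image are assumptions, not taken from the course:

```yaml
# Minimal Pod manifest; names and image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:alpine
      ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f pod.yaml` asks the cluster to converge on that state; ReplicaSets, Services, and Ingress build on the same declarative pattern.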

Course Curriculum

Deploying Applications on Kubernetes

What's included?

1 Video
Viktor Farcic
Independent Consultant

About the instructor

Viktor Farcic is a senior DevOps consultant at CloudBees, a member of the Docker Captains group, and an author. 

He codes using a plethora of languages, starting with Pascal (yes, he is old), Basic (before it got the Visual prefix), ASP (before it got the .NET suffix), C, C++, Perl, Python, ASP.NET, Visual Basic, C#, JavaScript, Java, Scala, and so on. He has never worked with Fortran. His current favorite is Go. Viktor's big passions are Microservices, Continuous Deployment, and Test-Driven Development (TDD). He often speaks at community gatherings and conferences.

Viktor wrote Test-Driven Java Development, published by Packt Publishing, and The DevOps 2.0 Toolkit. His random thoughts and tutorials can be found on his blog.

Secure your place today - seat numbers are strictly limited to 200

Sign up now!