Deploy a Multi-node Kubernetes Cluster on CentOS 7

October 29, 2016


Kubernetes is an open source container management system that allows the deployment, orchestration, and scaling of container applications and micro-services across multiple hosts. This tutorial will describe the installation and configuration of a multi-node Kubernetes cluster on CentOS 7.

Understanding the basic Kubernetes concepts and multi-node deployment architecture will make the installation and management much easier. It is suggested that the Kubernetes overview document be reviewed before continuing forward.

A single master host will manage the cluster and run several core Kubernetes services.

  • API Server – The REST API endpoint for managing most aspects of the Kubernetes cluster.
  • Replication Controller – Ensures the number of specified pod replicas are always running by starting or shutting down pods.
  • Scheduler – Selects a suitable host where new pods will reside.
  • etcd – A distributed key value store where Kubernetes stores information about itself, pods, services, etc.
  • Flannel – A network overlay that will allow containers to communicate across multiple hosts.

The minion hosts will run the following services to manage containers and their network.

  • Kubelet – Host level pod management; determines the state of pod containers based on the pod manifest received from the Kubernetes master.
  • Proxy – Manages the container network (IP addresses and ports) based on the network service manifests received from the Kubernetes master.
  • Docker – An API and framework built around Linux containers that allows for the easy management of containers and their images.
  • Flannel – A network overlay that will allow containers to communicate across multiple hosts.

Note: Flannel, or another network overlay service, is required to run on the minions when there is more than one minion host. This allows the containers which are typically on their own internal subnet to communicate across multiple hosts. As the Kubernetes master is not typically running containers, the Flannel service is not required to run on the master.


Prerequisites

  • CentOS 7 (possibly Red Hat Enterprise Linux 7)
  • Kubernetes 0.15.0
  • Docker 1.5.0

Note: The versions specified were the latest available while drafting this tutorial. The instructions may work with later versions, but the configuration could vary as Kubernetes is rapidly evolving.

  • Three virtual servers:
    • Kubernetes master
      • Hostname: kube-master
      • Private IP:
    • Kubernetes minion #1
      • Hostname: kube-minion1
      • Private IP:
      • Public IP:
    • Kubernetes minion #2
      • Hostname: kube-minion2
      • Private IP:
      • Public IP:

Note: The public IP addresses are completely optional and depend on whether the pods will be exposed publicly.

Here is a screenshot of the infrastructure layout used in this tutorial:

Configure All Kubernetes Hosts

The following Configure Kubernetes Hosts steps should be performed on all of the hosts including the master and minion hosts.

Add the Package Repository

Kubernetes is currently under active development and there are frequent changes to the code base. The latest binaries can be compiled from source, but the CentOS Community Build System usually has current binaries and simplifies the install onto Enterprise Linux distributions.

The Community Build System YUM repository, virt7-testing, will need to be added to all Kubernetes hosts.

cat << EOF > /etc/yum.repos.d/virt7-testing.repo
[virt7-testing]
name=virt7-testing
baseurl=http://cbs.centos.org/repos/virt7-testing/x86_64/os/
gpgcheck=0
EOF

Install Required Packages

Now the Kubernetes package and dependencies can be installed. Again, these packages should be installed on all hosts including the master and minion hosts.

yum -y install docker docker-logrotate kubernetes etcd flanneld

Setup Hostname Resolution

Using hostname resolution will help clarify the relationship between all the hosts. Add the following mapping to the /etc/hosts file on every host, substituting each server's private IP address for the placeholders:

<master-private-ip>    kube-master
<minion1-private-ip>   kube-minion1
<minion2-private-ip>   kube-minion2

Common Kubernetes Configuration

Edit the /etc/kubernetes/config file and set the KUBE_MASTER value to the API server URL, which will ultimately reside on the master host. The config file should look similar to the following:

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://kube-master:8080"
Configure the Flannel Service

Edit /etc/sysconfig/flanneld and specify the etcd URL and configuration key location. Here is an example of the flanneld configuration file:

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://kube-master:4001"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/flannel/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Configure the Master

The following Configure the Master setup will take place on the master host only.

Configure the API Server

The API server configuration file handles the API service binding, specifies the location of the etcd service, and defines the IP address range for services. Edit /etc/kubernetes/apiserver to match the following example:

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet_port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://kube-master:4001"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

Configure the Controller Manager

The Controller Manager configuration file provides the list of minion nodes where the containers will run. Edit /etc/kubernetes/controller-manager so that it matches the following example:

# Comma separated list of minions
KUBELET_ADDRESSES="--machines=kube-minion1,kube-minion2"

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""

Enable and Start the Master Services

The master node services can now be enabled on host boot and started.

for service in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl enable $service
    systemctl restart $service
    systemctl status $service
done

Load the Flannel Settings

Create a flannel-config.json file that will define the Flannel settings. The subnet specified in the Flannel settings should match that of the API server KUBE_SERVICE_ADDRESSES value. This is the subnet that the containers will use.

cat << EOF > ./flannel-config.json
{
    "Network": "",
    "SubnetLen": 24,
    "SubnetMin": "",
    "SubnetMax": "",
    "Backend": {
        "Type": "vxlan",
        "VNI": 1
    }
}
EOF
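To see what the SubnetLen setting does, the sketch below carves per-host /24 subnets out of a container network; the 10.20.0.0/16 range is purely an assumed example value for the elided Network field, since any private range can be used:

```python
import ipaddress

# Assumed example: a flannel "Network" of 10.20.0.0/16 with "SubnetLen": 24
# means each host is leased one /24 for its containers.
network = ipaddress.ip_network("10.20.0.0/16")
per_host_subnets = list(network.subnets(new_prefix=24))

print(len(per_host_subnets))   # 256 host subnets available
print(per_host_subnets[0])     # 10.20.0.0/24 goes to the first host
print(per_host_subnets[1])     # 10.20.1.0/24 goes to the next
```

With a /16 network and a subnet length of 24, flannel can hand out up to 256 host subnets of 254 usable container addresses each.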

The Flannel settings can now be loaded into etcd using curl.

curl -L http://kube-master:4001/v2/keys/flannel/network/config -XPUT --data-urlencode value@./flannel-config.json

Configure the Minion Nodes

Finally, the following Configure the Minion Nodes steps should be performed on the minion hosts only.

Configure the Kubelet Service

The Kubelet configuration file handles the IP binding of the kubelet service, the optional kubelet hostname, and the location of the Kubernetes API server. The /etc/kubernetes/kubelet configuration file should match the following example:

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME=""

# location of the api-server
KUBELET_API_SERVER="--api_servers=http://kube-master:8080"

# Add your own!
KUBELET_ARGS=""

Enable and Start the Minion Services

The minion services can now be enabled on host boot and started.

for service in kube-proxy kubelet docker flanneld; do
    systemctl enable $service
    systemctl restart $service
    systemctl status $service
done

Verify Success

Log into the Kubernetes master and verify the minion hosts appear as Ready using the kubectl command. The results should be similar to the following example:

# kubectl get nodes
NAME           LABELS    STATUS
kube-minion1   <none>    Ready
kube-minion2   <none>    Ready

The Kubernetes cluster is now ready for pods and services.
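As a quick smoke test, a simple pod can be created from a manifest. The fragment below is a sketch assuming the v1beta3 API of Kubernetes releases from this era; the nginx-test name and nginx image are example values, not part of this tutorial's infrastructure:

```json
{
    "kind": "Pod",
    "apiVersion": "v1beta3",
    "metadata": {
        "name": "nginx-test"
    },
    "spec": {
        "containers": [
            {
                "name": "nginx",
                "image": "nginx",
                "ports": [
                    { "containerPort": 80 }
                ]
            }
        ]
    }
}
```

Save this as nginx-pod.json on the master, create it with kubectl create -f nginx-pod.json, and watch kubectl get pods until the pod reports Running on one of the minions.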
