In this section, we will walk you through how to configure multiple node pools for a Konvoy cluster. Node pools allow the cluster administrator to use different configurations for different sets of worker nodes in a heterogeneous environment.
Configure multiple node pools for an on-premise cluster
We will use a concrete example to walk you through how to configure multiple node pools for an on-premise Konvoy cluster.
Assume that the cluster administrator wants a dedicated host for the monitoring pipeline (i.e., Prometheus), because it is critical to the entire cluster and should not be interfered with by any other pods.
Since this is an on-premise deployment, you need to specify the Ansible inventory file (i.e.,
inventory.yaml) manually like the following:
control-plane:
  hosts:
    10.0.194.142:
      ansible_host: 10.0.194.142
    10.0.198.130:
      ansible_host: 10.0.198.130
    10.0.200.148:
      ansible_host: 10.0.200.148
node:
  hosts:
    10.0.130.168:
      ansible_host: 10.0.130.168
      node_pool: worker
    10.0.133.221:
      ansible_host: 10.0.133.221
      node_pool: worker
    10.0.139.120:
      ansible_host: 10.0.139.120
      node_pool: worker
    10.0.131.62:
      ansible_host: 10.0.131.62
      node_pool: monitoring
all:
  vars:
    version: v1beta1
    order: sorted
    ansible_user: "centos"
    ansible_port: 22
Notice that in the node section, each host has a string attribute called node_pool. This field specifies the node pool to which a host belongs. In this case, node 10.0.131.62 belongs to node pool monitoring, and the rest of the nodes belong to node pool worker. Since the monitoring node pool is dedicated to the monitoring pipeline, you need to taint the nodes in that node pool so that regular pods will not be scheduled on those hosts.
You may also want to add some special labels to the nodes in the node pool so that users can use node selectors to schedule pods on those nodes.
To configure the taints and labels, edit the cluster configuration file (i.e.,
cluster.yaml) like the following:
kind: ClusterConfiguration
version: konvoy.mesosphere.io/v1alpha1
spec:
  nodePools:
  - name: worker
  - name: control-plane
  - name: monitoring
    labels:
    - key: dedicated
      value: monitoring
    taints:
    - key: dedicated
      value: monitoring
      effect: NoExecute
The above configuration adds the label dedicated: monitoring and applies the dedicated: monitoring taint (with effect NoExecute) to all nodes in the monitoring node pool.
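With the label and taint in place, a user workload can opt in to the monitoring nodes by combining a matching toleration with a node selector. The following pod spec is a minimal sketch; the pod name and image are hypothetical, not part of the Konvoy configuration:

```yaml
# Hypothetical example pod: tolerates the dedicated=monitoring:NoExecute taint
# and selects nodes labeled dedicated: monitoring.
apiVersion: v1
kind: Pod
metadata:
  name: example-monitoring-pod   # hypothetical name
spec:
  containers:
  - name: shell
    image: busybox               # hypothetical image
    command: ["sleep", "3600"]
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "monitoring"
    effect: "NoExecute"
  nodeSelector:
    dedicated: "monitoring"
```

A pod needs both pieces: the toleration allows it onto the tainted nodes, and the node selector ensures it lands only on them.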
Note that every node pool specified in the inventory file (i.e., inventory.yaml) must have a corresponding entry in the spec.nodePools section of cluster.yaml; otherwise, validation will fail.
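For example, the following hypothetical combination would fail validation, because the inventory references a node pool named logging that is not declared under spec.nodePools:

```yaml
# inventory.yaml (excerpt) -- hypothetical host assigned to node pool "logging"
node:
  hosts:
    10.0.140.10:
      ansible_host: 10.0.140.10
      node_pool: logging

# cluster.yaml (excerpt) -- no "logging" entry here, so validation fails
spec:
  nodePools:
  - name: worker
  - name: control-plane
  - name: monitoring
```

Adding `- name: logging` to spec.nodePools would resolve the mismatch.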
Then, configure the Prometheus addon like the following so that it is scheduled on the dedicated node (i.e., the monitoring node pool).
- name: prometheus
  enabled: true
  values: |
    prometheus:
      prometheusSpec:
        tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "monitoring"
          effect: "NoExecute"
        nodeSelector:
          dedicated: "monitoring"
Once all configurations are done, run konvoy up to install the cluster. The labels and taints will be applied to the corresponding nodes, and Prometheus will be scheduled on the dedicated node for monitoring purposes.
Configure multiple node pools for an AWS cluster
Configuring node pools for an AWS deployment is mostly the same as for an on-premise deployment, except that you do not need to configure the inventory file manually; it is generated automatically by the AWS provisioner.
You can simply add node pools to the
ClusterProvisioner configuration like the following:
kind: ClusterProvisioner
apiVersion: konvoy.mesosphere.io/v1alpha1
spec:
  nodePools:
  - name: worker
    count: 3
    machine:
      rootVolumeSize: 80
      rootVolumeType: gp2
      imagefsVolumeEnabled: true
      imagefsVolumeSize: 160
      imagefsVolumeType: gp2
      type: m4.4xlarge
  - name: control-plane
    controlPlane: true
    count: 3
    machine:
      rootVolumeSize: 80
      rootVolumeType: gp2
      imagefsVolumeEnabled: true
      imagefsVolumeSize: 160
      imagefsVolumeType: gp2
      type: i3.xlarge
  - name: monitoring
    count: 1
    machine:
      rootVolumeSize: 80
      rootVolumeType: gp2
      type: m4.4xlarge
The rest of the configuration will be the same as that in the on-premise case.