This section guides you through the basic steps to prepare your environment and install Konvoy on AWS. Before you begin, verify that you have the following:
- The aws command line utility
- Docker Desktop version 18.09.2 or newer
- kubectl v1.15.2 or newer (for interacting with the running cluster)
- A valid AWS account with credentials configured
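The tool prerequisites above can be checked with a short shell loop. This is an illustrative sketch only — it checks that each CLI is on the PATH; the minimum versions (Docker 18.09.2, kubectl v1.15.2) still need to be confirmed manually:

```shell
# Sketch: verify the prerequisite CLIs are on PATH before installing Konvoy.
# This checks presence only; versions (e.g. docker --version, kubectl version
# --client) still need to be confirmed manually against the list above.
missing=""
for tool in aws docker kubectl; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -n "$missing" ]; then
  echo "missing prerequisite tools:$missing"
else
  echo "all prerequisite tools found"
fi
```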
You need to be authorized to create the following resources in the AWS account:
- EC2 Instances
- Elastic Load Balancer (ELB)
- Internet Gateway
- NAT Gateway
- Elastic Block Storage (EBS) Volumes
- Security Groups
- Route Tables
- IAM Roles
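You can confirm that credentials are configured, and identify the IAM principal that needs the permissions listed above, with the aws CLI. A hedged sketch:

```shell
# Sketch: confirm AWS credentials are configured and show the principal that
# must be authorized to create the resources listed above. Guarded so it
# degrades gracefully if the aws CLI is absent or no credentials are found.
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity || echo "no valid AWS credentials configured"
else
  echo "aws CLI not installed; see the prerequisites above"
fi
```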
After verifying your prerequisites, you can create an AWS Kubernetes cluster by running the konvoy up command.
This command creates your Amazon EC2 instances, installs Kubernetes, and installs default add-ons to support your Kubernetes cluster.
The konvoy up command does the following:
- Provisions three t3.large EC2 instances as Kubernetes master nodes
- Provisions four t3.xlarge EC2 instances as Kubernetes worker nodes
- Deploys all of the following default add-ons:
- AWS EBS CSI driver
- Elasticsearch (including Elasticsearch Exporter)
- Fluent Bit
- Prometheus operator (including Grafana, AlertManager and Prometheus Adapter)
- Kubernetes dashboard
- Operations portal
- Dex identity service
- Dex Kubernetes client authenticator
- Traefik forward authorization proxy
The default configuration options are recommended for a small cluster (about 10 worker nodes).
Modifying the cluster name
By default, the cluster name is the name of the folder from which you run the konvoy up command.
The cluster name is used to tag the provisioned infrastructure and as the context name when applying the kubeconfig file.
To customize the cluster name, run the following command:
konvoy up --cluster-name <YOUR_SPECIFIED_NAME>
Control plane and worker nodes
Control plane nodes are the nodes where the Kubernetes Control Plane components will be installed.
The Control Plane contains various components, including the kube-apiserver (which you interact with through kubectl) and the kube-controller-manager. For more detail, refer to the Concepts section.
Having three control plane nodes makes the cluster “highly available” to protect against failures.
Worker nodes run your containers in Kubernetes pods.
The default addons help you manage your Kubernetes cluster by providing monitoring (Prometheus), logging (Elasticsearch), dashboards (Kubernetes Dashboard), storage (AWS CSI Driver), ingress (Traefik) and other services.
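Once your kubectl credentials point at the cluster, you can inspect the addon workloads with kubectl. A sketch — the exact namespaces the addons run in vary between Konvoy versions, so this simply lists pods across all namespaces:

```shell
# Sketch: list all workloads, including the default addon pods, across
# namespaces. Guarded so it degrades gracefully without kubectl or a
# reachable cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods --all-namespaces || echo "cluster not reachable yet"
else
  echo "kubectl not installed; see the prerequisites above"
fi
```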
Viewing installation operations
As noted above, you start the cluster installation by running the konvoy up command.
As the konvoy up command runs, it displays output describing the operations being performed.
The first set of messages you see is the output generated by Terraform as it provisions your nodes.
After the nodes are provisioned, Ansible connects to the EC2 instances and installs Kubernetes in steps called tasks and playbooks. Near the end of the output, addons are installed.
Viewing cluster operations
You can access user interfaces to monitor your cluster through the Operations Portal.
After you run the konvoy up command, if the installation is successful, the command output displays the information you need to access the Operations Portal.
For example, you should see information similar to this:
```
Kubernetes cluster and addons deployed successfully!

Run `konvoy apply kubeconfig` to update kubectl credentials.

Navigate to the URL below to access various services running in the cluster.
  https://lb_addr-12345.us-west-2.elb.amazonaws.com/ops/landing

And login using the credentials below.
  Username: AUTO_GENERATED_USERNAME
  Password: SOME_AUTO_GENERATED_PASSWORD_12345

The dashboard and services may take a few minutes to be accessible.
```
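As the success message instructs, update your kubectl credentials before using the cluster. A sketch, run from the directory where you ran konvoy up, guarded so it degrades gracefully where the tools are not installed:

```shell
# Sketch: apply the generated kubeconfig, then confirm the nodes are Ready.
# With the defaults, expect three master and four worker nodes.
if command -v konvoy >/dev/null 2>&1; then
  konvoy apply kubeconfig
  kubectl get nodes
else
  echo "konvoy not found on PATH"
fi
```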
Checking the files installed
After konvoy up completes its setup operations, the following files are generated:
- cluster.yaml - defines the Konvoy configuration for the cluster; this is where you customize your cluster provisioning configuration and your addons.
- admin.conf - a kubeconfig file, which contains the credentials needed to connect to the kube-apiserver of your cluster through kubectl.
- inventory.yaml - an Ansible inventory file.
- state folder - contains Terraform files, including a state file.
- cluster-name-ssh.pub - stores the public SSH key used to connect to the EC2 instances.
- runs folder - contains logging information.
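Because admin.conf is a standard kubeconfig file, you can also point kubectl at it directly instead of running konvoy apply kubeconfig. A sketch, run from the cluster directory:

```shell
# Sketch: use the generated admin.conf directly via the KUBECONFIG variable.
# Guarded so it degrades gracefully where kubectl or the cluster is absent.
export KUBECONFIG="$PWD/admin.conf"
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes || echo "cluster not reachable"
else
  echo "kubectl not installed; see the prerequisites above"
fi
```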