Install on AWS

Prepare for and install Konvoy on AWS

This section guides you through the basic steps to prepare your environment and install Konvoy on AWS.

Prerequisites

  • The aws command line utility
  • Docker Desktop version 18.09.2 or newer
  • kubectl v1.15.2 or newer (for interacting with the running cluster)
  • A valid AWS account with credentials configured. You need to be authorized to create the following resources in the AWS account:
    • EC2 Instances
    • VPC
    • Subnets
    • Elastic Load Balancer (ELB)
    • Internet Gateway
    • NAT Gateway
    • Elastic Block Storage (EBS) Volumes
    • Security Groups
    • Route Tables
    • IAM Roles
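
You can verify these prerequisites before you begin by checking the tool versions and confirming which AWS account your credentials resolve to. For example:

  # Confirm the required command line tools are installed
  aws --version
  docker --version
  kubectl version --client

  # Confirm the AWS account and identity that Konvoy will use
  aws sts get-caller-identity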

Installation

After verifying your prerequisites, you can create an AWS Kubernetes cluster by running konvoy up. This command creates your Amazon EC2 instances, installs Kubernetes, and installs default add-ons to support your Kubernetes cluster.

Specifically, the konvoy up command does the following:

  • Provisions three t3.large EC2 instances as Kubernetes master nodes
  • Provisions four t3.xlarge EC2 instances as Kubernetes worker nodes
  • Deploys all of the following default add-ons:
    • Calico
    • CoreDNS
    • Helm
    • AWS EBS CSI driver
    • Elasticsearch (including Elasticsearch Exporter)
    • Fluent Bit
    • Kibana
    • Prometheus operator (including Grafana, AlertManager and Prometheus Adapter)
    • Traefik
    • Kubernetes dashboard
    • Operations portal
    • Velero
    • Dex identity service
    • Dex Kubernetes client authenticator
    • Traefik forward authorization proxy
    • Kommander

The default configuration options are recommended for a small cluster (about 10 worker nodes).
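
For example, running the command with no additional options from your project directory creates the cluster described above:

  # Provision AWS infrastructure, install Kubernetes, and deploy the default add-ons
  konvoy up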

Modifying the cluster name

By default, the cluster name is the name of the folder from which you run the konvoy command. The cluster name is used to tag the provisioned infrastructure and as the context name when applying the kubeconfig file. To customize the cluster name, run the following command:

konvoy up --cluster-name <YOUR_SPECIFIED_NAME>

Control plane and worker nodes

Control plane nodes are the nodes where the Kubernetes control plane components are installed. The control plane includes etcd, kube-apiserver (which you interact with through kubectl), kube-scheduler, and kube-controller-manager. See also the Concepts section. Having three control plane nodes makes the cluster highly available, protecting it against the failure of any single control plane node. Worker nodes run your containers in Kubernetes pods.
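
For example, once the cluster is running and you have applied the kubeconfig file (see Viewing cluster operations below), you can list the nodes and their roles with kubectl:

  # Expect three control plane nodes and four worker nodes with the default configuration
  kubectl get nodes -o wide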

Default add-ons

The default add-ons help you manage your Kubernetes cluster by providing monitoring (Prometheus), logging (Elasticsearch, Fluent Bit, and Kibana), dashboards (Kubernetes Dashboard), storage (AWS EBS CSI driver), ingress (Traefik), and other services.
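
To confirm that the add-ons are running after installation, you can list the pods in the cluster. The namespaces the add-ons are deployed into can vary between Konvoy versions, so this sketch simply lists pods in every namespace:

  # Show all pods, including the add-on workloads deployed by konvoy up
  kubectl get pods --all-namespaces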

Viewing installation operations

As noted above, you start the cluster installation by running the konvoy up command. As the command runs, it prints output describing the operations being performed. The first set of messages is the output generated by Terraform as it provisions your nodes.

After the nodes are provisioned, Ansible connects to the EC2 instances and installs Kubernetes through a series of steps organized into tasks and playbooks. Near the end of the output, the add-ons are installed.

Viewing cluster operations

You can access user interfaces for monitoring your cluster through the Operations Portal. If the installation is successful, the konvoy up command output displays the information you need to access the portal.

For example, you should see information similar to this:

Kubernetes cluster and addons deployed successfully!

Run `konvoy apply kubeconfig` to update kubectl credentials.

Navigate to the URL below to access various services running in the cluster.
  https://lb_addr-12345.us-west-2.elb.amazonaws.com/ops/landing
And login using the credentials below.
  Username: AUTO_GENERATED_USERNAME
  Password: SOME_AUTO_GENERATED_PASSWORD_12345

The dashboard and services may take a few minutes to be accessible.
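
For example, after a successful run you can follow the instructions in the output to update your kubectl credentials and confirm that the cluster responds:

  # Merge the new cluster's credentials into your kubeconfig
  konvoy apply kubeconfig

  # Confirm that kubectl can reach the cluster
  kubectl cluster-info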

Checking the files installed

When the konvoy up command completes its setup operations, the following files are generated: