Prerequisites

The following CLI tools are required:

  - eksctl
  - aws (used by eksctl)
  - helm
  - kubectl
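
To confirm the tools are installed and on your PATH, a quick check such as the following will do; the exact output varies by tool version.

# report the version of each required CLI tool
eksctl version
aws --version
helm version
kubectl version --client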

You will need sufficient AWS privileges to create and assign IAM policies and roles.
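
If you want to confirm which AWS identity and account your CLI is currently using before you start, you can run:

aws sts get-caller-identity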

Creating an EKS Cluster

You will also need an EKS cluster. We recommend creating a dedicated 4 node EKS cluster, which will allow you to install Sextant for Sawtooth and then use it to deploy a Sawtooth network running PBFT consensus.

If you are not familiar with how to create an EKS cluster, instructions can be found in EKS Cluster Basics.
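
As an illustration only, a dedicated 4 node cluster can be created with a single eksctl command along the lines below; the cluster name, region and instance type are placeholders, so treat EKS Cluster Basics as the authoritative reference for the recommended configuration.

# example only - substitute your own cluster name, region and instance type
eksctl create cluster --name sextant-eval --region us-east-1 --nodes 4 --node-type m5.large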

Configuring your EKS Cluster

Sextant for Sawtooth runs under the default service account. However, since it is an AWS Marketplace metered product, certain IAM privileges need to be assigned to this service account for it to operate correctly.

The instructions for doing this can be found in EKS AWS Marketplace.

NOTE it is only necessary to configure your cluster once.
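
Once you have worked through the EKS AWS Marketplace instructions you can sanity-check the default service account. Assuming those instructions use IAM Roles for Service Accounts, you would expect to see an eks.amazonaws.com/role-arn annotation in the output.

kubectl describe serviceaccount default -n default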

Deploying Sextant for Sawtooth

Sextant for Sawtooth is packaged as a Kubernetes Operator and deployed using Helm.

Pre-flight checks

If you haven't done so already, you will need to add the local Helm chart repos.

  1. Add the local helm repo sextant:
helm repo add sextant https://btp-charts-stable.s3.amazonaws.com/charts/
  2. Add the local helm repo bitnami:
helm repo add bitnami https://charts.bitnami.com/bitnami
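
To confirm that both repos were added and that the sextant-sfs chart is visible, you can run:

helm repo list
helm search repo sextant-sfs --versions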

Basic Deployment

To quickly spin up Sextant for Sawtooth without a persistent backing store, e.g. for evaluation purposes, run the following commands.

helm repo update
helm install eval sextant/sextant-sfs --version 2.0.10

NOTE that when the helm install command completes it provides you with instructions on how to obtain your initial login for Sextant for Sawtooth and how to set up a basic connection to it using port forwarding. You can always recover this information using helm status eval.

NAME: eval
LAST DEPLOYED: Sun Mar  1 18:20:51 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the initial Sextant application username and password by running this command
  kubectl describe pod/eval-sextant-sfs|grep INITIAL_
2. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods -l "app=eval-sextant-sfs" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

NOTE that the deployment initiated by helm is not instantaneous, so run watch -n 5 kubectl get all and wait until pod/eval-sextant-sfs is running before trying to connect to your Sextant for Sawtooth instance.
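
If you prefer to block until the pod is ready rather than watching, kubectl wait can do the same thing using the label selector shown in the NOTES above (it will report an error if the pod object has not been created yet).

kubectl wait --for=condition=Ready pod -l app=eval-sextant-sfs --timeout=300s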

NOTE that you can also establish a persistent connection to your Sextant for Sawtooth instance by following the instructions in Persistent Connection below.

Cleaning up

When you've finished your evaluation, it is straightforward to delete your Sextant for Sawtooth instance.

helm delete eval

Since this deployment has no persistent backing store, deleting it will remove all the data associated with your instance. If you want this data to persist even if Sextant for Sawtooth is deleted and then reinstalled, you need to do an advanced deployment as detailed below.

Advanced Deployment

To spin up Sextant for Sawtooth with a persistent backing store, e.g. for test purposes, you will first need to create a separate PostgreSQL database and then supply its connection details.

Our recommended approach is to create a values.yaml file containing the following postgres specification.

sextant:
  database:
    type: "postgres"
    user: "postgres"
    password: "postgres"
    db: "sextant"
    port: "5432"
    host: "sextant-pg-postgresql.default.svc.cluster.local"
postgresql:
  username: postgres
  password: postgres
  database: sextant
postgresqlUsername: postgres
postgresqlPassword: postgres
postgresqlDatabase: sextant
persistence:
  enabled: false

Assuming that this file is in your current directory, you can then create a postgres database using the Bitnami chart.

helm repo update
helm install -f values.yaml sextant-pg bitnami/postgresql

You can ignore the notes provided by Bitnami. Once this database is available, you can use it as the persistent backing store for Sextant for Sawtooth by running the helm install command below.

NOTE that the database takes a while to set up, so run watch -n 5 kubectl get all until you can see that pod/sextant-pg-postgresql-0 is running. Once it is running, you can install Sextant for Sawtooth.
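
As an alternative to watching, once the pod object exists you can block until the database pod reports ready.

kubectl wait --for=condition=Ready pod/sextant-pg-postgresql-0 --timeout=300s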

helm repo update
helm install -f values.yaml test sextant/sextant-sfs --version 2.0.10

NOTE that when the helm install command completes it provides you with instructions on how to set up a basic connection to it using port forwarding. You can always recover this information using helm status test.

NAME: test
LAST DEPLOYED: Sun Mar  1 18:25:48 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the initial Sextant application username and password by running this command
  kubectl describe pod/test-sextant-sfs|grep INITIAL_
2. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods -l "app=test-sextant-sfs" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

IMPORTANT NOTE Be sure to store the initial admin credentials the first time you do an advanced deployment, since these are persisted in the database. If you delete and then re-install Sextant for Sawtooth, the initial credentials reported by the re-installed version will not work; use the original credentials instead.
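
One simple way to capture these credentials is to reuse the command from the NOTES above and write its output to a local file; the filename here is just an example.

kubectl describe pod/test-sextant-sfs | grep INITIAL_ > sextant-initial-credentials.txt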

NOTE that, as with the basic deployment, you can also establish a persistent connection to your Sextant for Sawtooth instance by following the instructions in Persistent Connection below.

Cleaning up

When you've finished your testing, it is straightforward to delete your Sextant for Sawtooth instance.

helm delete test

However, this time the data is persisted, which means that you can spin up another instance using the same values.yaml file and continue where you left off, assuming the database state has not changed in the meantime.
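
For example, as long as the sextant-pg database release is still running, re-running the same install command will connect a fresh instance to the existing data.

helm repo update
helm install -f values.yaml test sextant/sextant-sfs --version 2.0.10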

Persistent Connection

Rather than relying on port forwarding, you can create a persistent connection using a LoadBalancer or other ingress, pointing to port 80 on your Sextant for Sawtooth pod. For example -

kubectl expose pod/test-sextant-sfs --type=LoadBalancer --name=test-sextant-sfs-lb --port=80 --target-port=80

Getting the Sextant for Sawtooth hostname

Assuming that you have opted to use a LoadBalancer to enable you to access your Sextant for Sawtooth instance on a persistent basis as described above, you can use the following command to obtain its hostname.

kubectl get all -o wide | grep LoadBalancer

Alternatively you can run this command.

kubectl get all --output json | awk '/hostname/{print $2}'

In both cases the hostname is the string ending in .elb.amazonaws.com and you can use this to connect to your Sextant for Sawtooth instance using the credentials obtained above.
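
If you created the LoadBalancer with the kubectl expose command above, a more targeted option (assuming the service name test-sextant-sfs-lb) is to query the service status directly using jsonpath.

kubectl get service test-sextant-sfs-lb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'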

Finding the Kubernetes API Address on Amazon EKS

If you are using a Kubernetes management server such as Rancher to access your EKS clusters, it typically provides a proxy to the Kubernetes API server, which will not work with Sextant for Sawtooth. To find the address of an EKS cluster's Kubernetes API server, execute the following command:

aws eks describe-cluster --name <CLUSTER_NAME> --region <REGION_NAME> --output json | jq '.cluster.endpoint'