Exposing Kubernetes Services
An application running on a Kubernetes cluster must be explicitly exposed before it can be accessed from outside the cluster. This is especially true in cloud environments such as AWS.
General guidance on exposing Kubernetes applications, including a step-by-step tutorial, can be found in the Kubernetes documentation.
This document gives more specific guidance on exposing the services of a Sextant-deployed application running on an AWS-hosted Kubernetes cluster, whether EKS-based or not.
Services
NOTE These examples assume that you have deployed a Sawtooth network called test-network in namespace test-namespace.
You can view the services currently defined using this command, substituting test-namespace for your Sawtooth namespace.
kubectl get svc --namespace=test-namespace
Sawtooth REST API
Conveniently, a Sextant-deployed Sawtooth network already contains a basic service for the Sawtooth REST API. Since this API is conventional HTTP, a traditional load balancer will do, so you can use this command.
REMINDER Make sure that you substitute test-network and test-namespace for your Sawtooth network name and namespace respectively.
kubectl expose service test-network-rest-api --name=test-network-rest-api-lb --port=8008 --target-port=8008 --type=LoadBalancer --namespace=test-namespace
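Once AWS has provisioned the load balancer, you can look up its external hostname and confirm that the REST API is reachable, for example by querying its /blocks endpoint. The hostname below is a placeholder for whatever value the service reports:
kubectl get service test-network-rest-api-lb --namespace=test-namespace
curl http://<load-balancer-hostname>:8008/blocks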
Return to Services
Grafana and InfluxDB
As with the REST API, the Grafana and InfluxDB instances deployed alongside Sawtooth each already have a service defined, so you can use these commands.
REMINDER Make sure that you substitute test-network and test-namespace for your Sawtooth network name and namespace respectively.
kubectl expose service grafana --name=test-network-grafana-lb --port=3000 --target-port=3000 --type=LoadBalancer --namespace=test-namespace
kubectl expose service influxdb --name=test-network-influxdb-lb --port=8086 --target-port=8086 --type=LoadBalancer --namespace=test-namespace
PLEASE NOTE The InfluxDB instance currently deployed is not particularly secure, so exposing it to the outside world is discouraged. Any load balancer exposing InfluxDB should use strict firewall (security group) rules to tighten access control. We plan to address this in a future Sextant release, but for now we do not recommend exposing InfluxDB.
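If you do need to expose InfluxDB temporarily, one way to restrict access is the Kubernetes loadBalancerSourceRanges field, which AWS translates into security group rules on the load balancer. A minimal sketch, assuming the test-network-influxdb-lb service created above and a placeholder CIDR of 203.0.113.0/24:
# restrict the load balancer to a single allowed CIDR (placeholder value)
kubectl patch service test-network-influxdb-lb --namespace=test-namespace \
  --patch '{"spec":{"loadBalancerSourceRanges":["203.0.113.0/24"]}}'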
Return to Services
Sawtooth Validator Network
The Sawtooth validator network itself is somewhat different from the other services and protocols. Validators must connect to each other directly and must not be mediated by any load balancing. To support this, a Sextant-deployed Sawtooth network uses a direct hostPort of 8800 on each of the nodes (similar to a NodePort).
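For reference, a hostPort mapping in a container spec looks like the fragment below. Sextant configures this for you, so this is illustrative only:
# container port 8800 is bound directly to port 8800 on the node
ports:
  - containerPort: 8800
    hostPort: 8800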
In addition, due to limitations in Sawtooth 1.1, each validator must address other validators using the same name that the target validator uses to refer to itself (the value of its --endpoint argument). Doing otherwise can create instability in the network. On AWS each validator refers to itself by its internal network name, e.g. ip-192-168-183-187.us-west-2.compute.internal. To use a node as an external seed, this name must resolve via DNS to the IP address actually used to connect to the target validator. However, outside of AWS, or even outside a given VPC, these *.compute.internal hostnames do not normally resolve. Two mechanisms are available to address this.
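Before choosing an option, you can check whether the internal name resolves from the connecting host; for example, using the hostname above:
nslookup ip-192-168-183-187.us-west-2.compute.internal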
Option 1
If one part of the network is outside of AWS, then the network is effectively passing through NAT. The best solution in this case is to sync up the hostnames on the connecting side with how the receiving side sees itself. To do this, /etc/hosts entries (or equivalent) must be created for each of the target hosts on the source network, mapping the target host's name to its public IP address.
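For example, an /etc/hosts entry on a connecting host might look like the following, where the public IP address is a placeholder and the hostname matches the name the target validator uses in its --endpoint argument:
# map the target validator's internal name to its public IP address
198.51.100.23   ip-192-168-183-187.us-west-2.compute.internal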
Option 2
VPC peering. If the two portions of the network are both on AWS and do not have overlapping CIDRs, then you can peer the two VPCs and enable DNS resolution between them. This will allow both VPCs to communicate directly and resolve each other's *.compute.internal hostnames.
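A sketch of the AWS CLI steps involved, using placeholder VPC, peering connection, route table and CIDR values; consult the AWS VPC peering documentation for the authoritative procedure:
# request and accept the peering connection
aws ec2 create-vpc-peering-connection --vpc-id vpc-11111111 --peer-vpc-id vpc-22222222
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-33333333

# allow each side to resolve the other's *.compute.internal names
aws ec2 modify-vpc-peering-connection-options \
  --vpc-peering-connection-id pcx-33333333 \
  --requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
  --accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true

# add a route to the peer CIDR in each VPC's route tables (shown for one side)
aws ec2 create-route --route-table-id rtb-44444444 \
  --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-33333333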
Finally, AWS networks are closed to most traffic from the outside world by default. To connect directly to the validator hosts at all, the relevant security groups for the Kubernetes worker nodes must be opened on port 8800. Peered VPCs still require their own security group configuration.
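For example, to open port 8800 on a worker node security group to a peer network, where the security group ID and CIDR are placeholders for your own values:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8800 --cidr 203.0.113.0/24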
Return to Services
Daml gRPC
The Daml connection is gRPC based, and therefore on AWS it requires an NLB-type load balancer in order to function properly.
To create this, use the following service definition with the two mandatory changes needed to reflect your environment:
- Replace test-namespace with the actual namespace of your deployment.
- Replace test-network in the string test-network-daml-rpc with the actual network name of your deployment.
apiVersion: v1
kind: Service
metadata:
  name: daml-rpc-lb
  namespace: test-namespace # CHANGE to reflect your environment
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: daml-rpc-lb
spec:
  type: LoadBalancer
  ports:
    - name: daml-ledger-api
      port: 39000
      targetPort: 39000
  selector:
    daml: test-network-daml-rpc # CHANGE to reflect your deployment
The recommended approach is to create a YAML file containing a suitably edited version of this configuration and then apply it to your cluster. For example, if you have saved your config to daml-rpc-lb.yaml, use this command to apply it.
kubectl apply -f daml-rpc-lb.yaml
This instructs Kubernetes to create the necessary NLB resource to access the Daml ledger.
You can find the relevant hostname via the kubectl get service command, substituting test-namespace for whatever your namespace is called.
kubectl get service -n test-namespace | grep daml-rpc-lb
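Alternatively, you can extract just the hostname with a jsonpath query:
kubectl get service daml-rpc-lb -n test-namespace -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'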
You can now deploy to the Daml ledger using the hostname you have just obtained, together with port 39000. Instructions on how to do this can be found in the Daml documentation.
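For example, with a recent Daml SDK you should be able to upload your DAR and allocate parties with something along these lines, where the hostname placeholder is the value obtained above:
daml deploy --host <load-balancer-hostname> --port 39000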
Return to Services