Quickstart: Creating a Sandbox
Prerequisites
- Signadot account (No account yet? Sign up here).
- A Kubernetes cluster
- Option 1: Set it up on your cluster: This can be a local Kubernetes cluster spun up using minikube, k3s, etc.
- You will need to first install the Signadot Operator into this cluster.
- After that, you will also need to install the HotROD application used in this quickstart. You can install it as follows:
kubectl create ns hotrod
kubectl -n hotrod apply -k 'https://github.com/signadot/hotrod/k8s/overlays/prod/quickstart'
- Option 2: Use a Playground Cluster: If you don't have a Kubernetes cluster for the above steps, you can provision a Playground Cluster from the Dashboard. It comes with the Signadot Operator and the HotROD application pre-installed in the hotrod namespace.
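Either way, before proceeding you can confirm that the HotROD services are running. A quick check (assuming your kubeconfig points at the cluster) looks like this:
# Verify the HotROD workloads are up in the hotrod namespace
kubectl -n hotrod get pods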
Overview
Imagine that we're working on a microservices-based application. In typical fashion, we've identified a bug in one of the microservices that causes the application to misbehave. In this guide, we'll explore how to test a fix to that microservice in isolation, using Sandboxes to validate the change against the rest of the services for high-fidelity feedback before merging code. Let's dive in!
Demo Setup
We'll be using HotROD as our application. It is a simple ride-sharing application that allows end users to request rides to one of four locations and have a nearby driver assigned along with an ETA.
It consists of four services: frontend, location, driver and route, as well as some stateful components: Kafka, Redis and MySQL. These four microservices running on the remote cluster will serve as our "baseline" - the stable, pre-production version of the application. Typically this is an environment that is updated continuously by a CI/CD process.
Before proceeding, ensure you have a Kubernetes cluster set up with the HotROD application pre-installed, as detailed in the Prerequisites section.
To access the frontend, we will use the Signadot CLI to establish a connection with the cluster, making the application reachable from your local machine. In a real deployment, an endpoint such as one exposed via a Kubernetes Ingress could be used to access the application as well.
Set up access to the HotROD application
Let's define the following values in the Signadot CLI config located at $HOME/.signadot/config.yaml:
org: <your-org-name> # Find it on https://app.signadot.com/settings/global
api_key: <your-api-key> # Create API key from https://app.signadot.com/settings/apikeys
local:
  connections:
    - cluster: <cluster name> # Find it on the clusters page: https://app.signadot.com/settings/clusters
      type: ControlPlaneProxy
A correctly populated configuration file will look something like this:
org: hooli
api_key: TJvQdbEs2dVNotRealKeycVJukaMZQAeIYrOK123
local:
  connections:
    - cluster: hooli-cluster
      type: ControlPlaneProxy
Read more about CLI configuration here.
You are now ready to use the CLI to connect to the Kubernetes cluster, as well as start testing local changes using Sandboxes.
% signadot local connect
signadot local connect needs root privileges for:
- updating /etc/hosts with cluster service names
- configuring networking to direct local traffic to the cluster
Password:
signadot local connect has been started ✓
* runtime config: cluster hooli-cluster, running with root-daemon
✓ Local connection healthy!
* operator version 0.16.0
* control-plane proxy listening at ":46575"
* localnet has been configured
* 41 hosts accessible via /etc/hosts
* sandboxes watcher is running
* Connected Sandboxes:
- No active sandbox
You can also check its status with signadot local status. This establishes a bidirectional connection between your workstation and the cluster, thereby making the cluster services available from your machine.
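As an optional sanity check (not part of the official flow), you can confirm the connection and the injected host entries from another terminal:
# Check the health of the local connection
signadot local status
# The connect step adds cluster service names to /etc/hosts; confirm the HotROD entries are present
grep hotrod /etc/hosts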
Access the frontend
Now that we have the connection established with the remote cluster, let's access the HotROD frontend UI at http://frontend.hotrod.svc:8080 and request a few rides. Clicking on one of the four locations orders a ride to that location and displays updates on the order along with an ETA.
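If you prefer the command line, a plain HTTP request to the same address should also reach the baseline frontend, assuming signadot local connect is still running in another terminal:
# Fetch the HotROD frontend page over the cluster connection
curl -s http://frontend.hotrod.svc:8080 | head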
You'll notice when you request a ride that the ETA of the driver is a negative value! This is clearly a bug. In the subsequent sections, we'll fix this bug and validate it using manual and automated tests in a Sandbox.
Creating a Sandbox
Let's say a developer has identified the bug and has a potential fix to verify. We assume that they (perhaps locally, or in CI) have built a new Docker image containing the fix. We've tagged that fix as signadot/hotrod:quickstart-v3-fix.
Now, let's create a sandbox containing a "fork" of the route service that uses the above image.
name: negative-eta-fix
spec:
  description: Fix negative ETA in Route Service
  cluster: "@{cluster}"
  forks:
    - forkOf:
        kind: Deployment
        namespace: hotrod
        name: route
      customizations:
        images:
          - image: signadot/hotrod:quickstart-v3-fix
            container: hotrod
To create the sandbox, you can either open this spec in the Create Sandbox UI on the Dashboard, or run the command below using the Signadot CLI.
# Save the sandbox spec as `negative-eta-fix.yaml`.
# Note that <cluster> must be replaced with the name of the linked cluster in
# signadot, under https://app.signadot.com/settings/clusters.
% signadot sandbox apply -f ./negative-eta-fix.yaml --set cluster=<cluster>
Created sandbox "negative-eta-fix" (routing key: dxux1yyzbrb0g) in cluster "<cluster name>".
Waiting (up to --wait-timeout=3m0s) for sandbox to be ready...
✓ Sandbox status: Ready: All desired workloads are available.
Dashboard page: https://app.signadot.com/sandbox/name/negative-eta-fix
The sandbox "negative-eta-fix" was applied and is ready.
Previewing the changes from the frontend
Once you have created the sandbox in the previous step, you can retrieve the "routing key" corresponding to that sandbox from either the CLI output or the UI. To test the change within our sandbox, all you need to do is pass the baggage header shown below on requests to the URL frontend.hotrod.svc:8080 that we were using previously.
baggage: sd-routing-key=<routing-key>
If you're on Chrome, you can use the Signadot Chrome Extension (beta) to automatically set the header corresponding to the selected sandbox. If not, you can use any extension that allows setting a header, such as Requestly. Once you have selected the sandbox in the extension, access the frontend URL again (http://frontend.hotrod.svc:8080). This time, you should see a positive value for the ETA.
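If you'd rather test from the command line than via a browser extension, you can pass the header explicitly; the routing key below is a placeholder for the one returned when you created the sandbox:
# Request the frontend with the sandbox routing key so forked services handle the request
curl -s -H "baggage: sd-routing-key=<routing-key>" http://frontend.hotrod.svc:8080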
With the baggage header applied to the call to the frontend, the request took the usual path through all the services - except for the route service. Since we had specified a fork of the route service in the sandbox specification, the request was instead sent to the fork that was provisioned using the image we had supplied - the one containing the fix for the ETA. Hence, even though the request originated at the frontend, it reflected the changes in the route service because of request-level isolation.
The diagram below shows the path taken by requests with and without the baggage header in the context of this guide.
Running a test on the sandbox
Now that we have verified the fix, we may want to write automated tests for it as well. The approach is exactly the same as above: all we need to do is pass the baggage header in the call.
Since the route service is a gRPC application, we'll use grpcurl to access it. The snippets below show two calls - first to the baseline route service, which still returns a negative value, and second to the sandboxed route service (passing the baggage header), which returns a positive value.
docker run --network=host fullstorydev/grpcurl -plaintext -d '{"from": "231,773", "to": "115,277"}' route.hotrod.svc:8083 route.RoutesService/FindRoute
{
"etaSeconds": -2436
}
docker run --network=host fullstorydev/grpcurl -plaintext -d '{"from": "231,773", "to": "115,277"}' -H "Baggage: sd-routing-key=dxux1yyzbrb0g" route.hotrod.svc:8083 route.RoutesService/FindRoute
{
"etaSeconds": 1117
}
In this case, since we are calling route.hotrod.svc:8083 directly with the baggage header, the call was sent straight to the sandboxed route service.
The diagram below shows the path taken by the calls with and without the baggage header.
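As a sketch of how this check could be automated (assuming grpcurl via Docker and jq are available locally, and substituting your sandbox's routing key for the placeholder), a minimal shell test might assert that the sandboxed route service returns a positive ETA:
#!/usr/bin/env bash
# Minimal test sketch: call the sandboxed route service and fail if the ETA is not positive
set -euo pipefail

ROUTING_KEY="<routing-key>"  # replace with your sandbox's routing key

ETA=$(docker run --network=host fullstorydev/grpcurl -plaintext \
  -d '{"from": "231,773", "to": "115,277"}' \
  -H "Baggage: sd-routing-key=${ROUTING_KEY}" \
  route.hotrod.svc:8083 route.RoutesService/FindRoute | jq -r '.etaSeconds')

if [ "${ETA}" -gt 0 ]; then
  echo "PASS: etaSeconds=${ETA}"
else
  echo "FAIL: etaSeconds=${ETA}"
  exit 1
fi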
Finally, once you have tested your changes, you can delete the sandbox either from the UI or the CLI. If using the CLI, the command looks like the following:
signadot sandbox delete negative-eta-fix
Conclusion
Congrats! You have tested a new version of a microservice in Kubernetes using Sandboxes. You can use Sandboxes for Automated Testing as well as Feature Previews of your microservices. To learn more about how sandboxes are isolated from one another, check out header propagation and sandbox resources.