
Running Smart Tests

Prerequisites
  • This feature requires Signadot Operator v0.19+.
  • You must enable Managed Runner Group on your cluster via the Signadot Dashboard (under Settings) and have at least 1 runner pod.

Overview

This document describes how to set up and run Smart Tests. The process has three steps:

  1. Write a test using Starlark
  2. Select a trigger for the test
  3. Create a sandbox that triggers the test

Once the first two steps are complete, any sandbox that matches the trigger will cause the test to run, and you can then examine the Smart Test execution results.

Writing a Smart Test

In the dashboard, select "Smart Tests" on the left and then click on the "Create New" button on the right. This brings up an editor as in the example below.

Smart Test Editor

Smart Tests are well suited for API testing. They run from within a connected Kubernetes cluster and can therefore access services using in-cluster DNS names such as app.namespace.svc:8080. To learn more about the syntax and examples, refer to the Smart Test Reference.
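
For orientation, a minimal test might look roughly like the sketch below. Note that http.get, resp.status_code, and the response attributes are assumptions made for illustration, not the documented API; consult the Smart Test Reference for the actual modules available to Smart Tests.

# Minimal sketch of a Smart Test in Starlark (hypothetical API).
# Calls a service over in-cluster DNS and checks the response.
resp = http.get("http://app.namespace.svc:8080/api/products")

# fail() is a Starlark built-in; it aborts the test with a message.
if resp.status_code != 200:
    fail("unexpected status: %d" % resp.status_code)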

While editing the Smart Test, you can try it out by clicking "Save and Run". This runs the test in a selected cluster, giving you a way to debug the test itself. Runs started this way are not associated with any sandbox.

Selecting a Trigger

Once the test has been saved, a new tab called Triggers appears, where you can define the conditions under which the test will run in association with a sandbox.

Smart Test Save

A trigger defines:

  • The cluster in which the Smart Test will run.
  • A baseline workload in the cluster.

Once this trigger is set up, every sandbox in the specified cluster that forks the specified workload will automatically trigger an execution of the test. Executions are triggered when such a matching sandbox is created or updated.
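
Conceptually, a trigger therefore records just two fields. The snippet below is purely illustrative and not a Signadot file format (triggers are configured in the dashboard):

# Hypothetical illustration of what a trigger captures.
cluster: staging          # the cluster where the Smart Test runs
baselineWorkload:         # a baseline workload in that cluster
  kind: Deployment
  namespace: testing
  name: demo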

If configured, these executions automatically provide a Smart Diff between the requests and responses from the baseline and from the sandbox. To learn more about what a Smart Diff does and the modeling done behind the scenes, refer to the concept documentation.

Create a Sandbox

Once we have a test and a trigger, we need to create a sandbox that triggers the test. For example, suppose our test has a trigger for the Deployment demo in the testing namespace of the cluster staging.

name: my-sandbox
spec:
  cluster: staging
  forks:
    - forkOf:
        kind: Deployment
        namespace: testing
        name: demo
Click the button below to open and run this spec in the Create Sandbox UI.
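
Alternatively, assuming the spec above is saved as my-sandbox.yaml, it can be applied with the Signadot CLI (exact flags may vary by CLI version):

signadot sandbox apply -f my-sandbox.yaml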

View Results

Once the sandbox is created and ready, the test will be executed and you can find the executions by clicking on the test in the dashboard.

Each execution gives categorized diffs of the requests and responses. The diff is between two runs of the test, one run against a reference representing the baseline and one against the sandbox.

The categories for the diffs indicate how relevant the changes are, measured roughly by how likely a similar difference is to occur between runs against the baseline.

The screenshot below shows an example where the sandbox has added a category field and changed the value of the stock units; note that these changes are classified as relevant. The classification of changes is performed by an AI model, and while it may not always be perfect, it is expected to improve over time. You can view changes categorized as high, medium, or low relevance based on the model's current assessment.

Smart Diff Results
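
For illustration, in a case like the one above, the baseline and sandbox responses might differ roughly as follows. These payloads are hypothetical, not actual Smart Diff output:

Baseline response (hypothetical):

{"name": "widget", "stock": {"units": 25}}

Sandbox response (hypothetical):

{"name": "widget", "category": "hardware", "stock": {"units": 12}}

The added category field and the changed stock units would both surface in the diff, with each change classified by relevance.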