Running performance tests at scale doesn’t have to be a headache. With Grafana K6 and Signadot, you can isolate workloads, fine-tune test parameters, and scale tests efficiently. By using Signadot Sandboxes, you can simulate realistic environments without impacting your baseline, while Job Runner Groups help you run tests in parallel across multiple pods. This approach not only saves on infrastructure costs but also lets you spot performance bottlenecks early, optimize microservices, and ensure scalability—all with minimal setup. It’s a smarter way to get fast, reliable feedback on your services before they hit production.
Isolate Workloads, Parameterize Tests, and Optimize Performance at Scale
Performance testing in pull requests is often overlooked due to the complexity of setting up full-scale environments and the challenge of obtaining reliable results in shared clusters. Signadot simplifies this process by providing an efficient and scalable solution for running performance tests without the need for extensive infrastructure provisioning. With Signadot Sandboxes, developers can seamlessly fork only the necessary services, such as HotROD’s location service, into isolated environments, eliminating the overhead of full-scale deployments. Additionally, the Signadot Job Runner Group automates the execution of Grafana K6 test scripts as jobs. This allows teams to conduct repeatable load tests while dynamically adjusting parameters such as URLs, virtual users (VUs), and duration. This streamlined approach ensures that performance evaluations take place in a controlled, production-like environment. As a result, teams receive rapid feedback, reduce infrastructure costs, and gain confidence in microservice scalability—all while maintaining efficient pull request workflows. This guide walks through, step by step, how Signadot implements this concept.
What You’ll Learn
Prerequisites
HotROD is a microservice-based application designed for testing distributed systems. Deploy it to your cluster with the following commands:
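The exact commands were not captured here; a typical deployment, assuming Signadot’s public HotROD repository and its devmesh kustomize overlay (adjust the path to the manifests you actually use), looks like:

```shell
# Create the namespace and deploy HotROD using the devmesh overlay.
# The repository path is an assumption based on Signadot's public HotROD
# fork; substitute the manifests from your own setup if they differ.
kubectl create namespace hotrod
kubectl -n hotrod apply -k 'https://github.com/signadot/hotrod/k8s/overlays/devmesh'

# Verify that all HotROD pods come up healthy.
kubectl -n hotrod get pods
```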
This deploys HotROD with the devmesh overlay in the hotrod namespace.

Next, we’ll introduce a delay in the location service API and rebuild the image to simulate a delayed response.

Run the following commands to build a Docker image of the location service API with the delayed response:
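The build commands themselves are not shown above; a hedged sketch, where the registry name, image tag, and Dockerfile path are all placeholders to replace with your own, would be:

```shell
# Build the location service image containing the artificial delay in its
# API handler, then push it so the cluster (and a sandbox fork) can pull it.
# <your-registry> and the build context are placeholders, not real values.
docker build -t <your-registry>/hotrod-location:delay .
docker push <your-registry>/hotrod-location:delay
```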
Signadot Sandboxes allow you to fork services for isolated testing without impacting the baseline environment.
Create location-sbx.yaml to fork the location-service deployment:
Note: If you haven’t installed the Signadot CLI yet, follow the installation guide: https://www.signadot.com/docs/getting-started/installation/signadot-cli
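A minimal location-sbx.yaml, following the Signadot sandbox spec, might look like the sketch below. The cluster name, deployment name, and image reference are assumptions to adapt to your environment:

```yaml
name: location-sandbox
spec:
  description: Fork of the location service running the delayed-response image
  cluster: my-cluster                # assumption: your registered cluster name
  forks:
    - forkOf:
        kind: Deployment
        namespace: hotrod
        name: location               # assumption: the baseline deployment name
      customizations:
        images:
          - image: <your-registry>/hotrod-location:delay   # image built earlier
```

Apply it with the CLI:

```shell
signadot sandbox apply -f location-sbx.yaml
```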
This creates a sandbox for the location service API.

Optional: Scale replicas for load testing by adding a customPatch to the sandbox YAML. See the documentation on creating custom patches for sandboxes: https://www.signadot.com/docs/reference/sandboxes/spec#patch
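As a sketch, a patch bumping the fork’s replica count could be added under the fork’s customizations; verify the exact field names against the sandbox spec reference linked above:

```yaml
# Fragment to merge into the fork entry in location-sbx.yaml (illustrative).
customizations:
  patch:
    type: strategic
    value: |
      spec:
        replicas: 3    # scale the forked deployment for load testing
```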
Job Runner Groups enable distributed test execution across multiple pods, ideal for large-scale load testing.
Create job-runner.yaml to configure a group named k6-perf-tests:
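The spec itself is not shown above; a hedged sketch follows. The cluster name, namespace, and scaling values are assumptions, and the field names should be checked against the current Job Runner Group reference:

```yaml
name: k6-perf-tests
spec:
  cluster: my-cluster          # assumption: your registered cluster name
  namespace: signadot-jobs     # assumption: namespace hosting the runner pods
  image: grafana/k6:latest     # runner image providing the k6 binary
  jobTimeout: 30m
  scaling:
    manual:
      desiredPods: 3           # run test jobs in parallel across 3 pods
```

Apply it with:

```shell
signadot jobrunnergroup apply -f job-runner.yaml
```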
This creates the Job Runner Group on Signadot; you can view it in the Signadot dashboard.

K6 scripts can leverage environment variables to decouple test logic from configuration, enabling reusable tests across environments.
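The script itself was not captured here; a sketch of such a parameterized K6 script follows. The variable names (TARGET_URL, VUS, DURATION, ROUTING_KEY) and the p95 threshold are illustrative assumptions:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// All tunables come from environment variables, so the same script can be
// pointed at the baseline or a sandbox without edits.
export const options = {
  vus: Number(__ENV.VUS) || 10,          // virtual users
  duration: __ENV.DURATION || '30s',     // test length
  thresholds: {
    http_req_duration: ['p(95)<500'],    // fail the run if p95 exceeds 500 ms
  },
};

export default function () {
  const headers = {};
  // When targeting a sandbox, inject the Signadot routing key so the
  // request is steered to the forked workload instead of the baseline.
  if (__ENV.ROUTING_KEY) {
    headers['baggage'] = `sd-routing-key=${__ENV.ROUTING_KEY}`;
  }
  const res = http.get(__ENV.TARGET_URL, { headers });
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```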
Key Features of the Script
This job template is designed to dynamically route HTTP request traffic to a specified target destination—either baseline or sandbox—for performance benchmarking. The workload, derived from the location-service-api, enables comparative analysis of an upcoming pull request by evaluating key performance metrics.
Execute the test with custom parameters:
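The exact job definition is not shown above; a hedged sketch of a k6-job.yaml using the Signadot job spec and CLI template variables (`@{...}` placeholders resolved by `--set`) might look like this. How the K6 script reaches the runner pod (baked into the image, fetched from git, etc.) is left to your setup:

```yaml
spec:
  namePrefix: k6-perf
  runnerGroup: k6-perf-tests
  script: |
    #!/bin/sh
    # Run the parameterized K6 script; values are supplied at submit time.
    k6 run /tmp/perf-test.js \
      -e TARGET_URL="@{url}" \
      -e VUS="@{vus}" \
      -e DURATION="@{duration}" \
      -e ROUTING_KEY="@{routing_key}"
```

Submitted, for example, as:

```shell
signadot job submit -f k6-job.yaml \
  --set url=http://location.hotrod.svc:8081/route \
  --set vus=50 --set duration=2m \
  --set routing_key=<sandbox-routing-key>
```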
injectRoutingKey:
Note: trafficManager.injectRoutingKey won’t work with HTTPS traffic. If your workload runs over HTTPS, see https://www.signadot.com/docs/tutorials/testing/e2e-with-cypress#injecting-routing-context, which demonstrates how to inject routing keys into your HTTPS requests.
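For HTTPS traffic, the routing context can be attached manually as a request header. A minimal helper, assuming the `baggage: sd-routing-key=<key>` header format described in Signadot’s routing documentation:

```javascript
// Hypothetical helper: builds the headers an HTTP client should send so
// Signadot's routing layer steers the request to the sandboxed workload.
// The header format is an assumption based on Signadot's routing docs.
function routingHeaders(routingKey) {
  return {
    baggage: `sd-routing-key=${routingKey}`,
  };
}

// Example: merge into any HTTPS request's headers.
const headers = routingHeaders('abc123');
```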
After execution, K6 outputs metrics such as http_req_duration (end-to-end request latency), http_req_waiting (time to first byte), iterations, and the pass/fail status of any configured thresholds:
Test result summary of baseline deployment: location-service-api

Test result summary of sandbox created from Signadot: location-service-api

Use these metrics to:
As you can see, the comparison between the baseline and the proposed change (sandbox) reveals significant performance degradation in the upcoming version. Below are the critical observations:
1. Response Time Violations
Baseline (Current Workload):
Upcoming Change:
2. Throughput and Iteration Efficiency
Baseline:
Upcoming Change:
3. Root Cause Indicators
Latency Distribution:
The upcoming change shows uniformly high latency (min: 10.2s, max: 10.39s), suggesting a systemic delay (e.g., introduced artificial latency, inefficient code, or resource contention).
HTTP request waiting time (avg: 10.3s) dominates total duration, indicating backend processing bottlenecks.
Failed Threshold Automation:
Best Practices for Effective Testing
Additional Recommendations:
By integrating K6 with Signadot Sandboxes and Job Runner Groups, engineering teams can: