FAQ
How does Signadot implement data isolation in sandboxes?
One of the most common questions we encounter about our sandbox testing solution concerns data isolation, particularly the risk of data corruption when multiple teams use a shared database. Here’s how we address that with a flexible approach we call "tunable isolation."
Default Mode: Shared Database with Partitioned Data
By default, all sandbox environments connect to the same shared database. This works well for most scenarios, especially if tests are primarily read-only or if they use partitioned identifiers like an org ID or user ID. Isolating data at the domain level lets each test create and delete data within its own partition, reducing cross-test interference. Ideally, tests should also clean up after themselves once completed.
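The partitioning idea above can be sketched in a few lines. This is an illustrative example, not Signadot's implementation: an in-memory SQLite database stands in for the shared database, and each "test" scopes every read, write, and cleanup to its own org ID.

```python
# Illustrative sketch (not Signadot's implementation): multiple tests share
# one database, but every read/write is scoped to the test's own org ID
# partition, and each test deletes its data when it finishes.
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for the shared database
conn.execute("CREATE TABLE orders (org_id TEXT, item TEXT)")

def run_test(org_id: str, items: list[str]) -> int:
    """Create test data inside one partition, assert on it, then clean up."""
    conn.executemany(
        "INSERT INTO orders (org_id, item) VALUES (?, ?)",
        [(org_id, item) for item in items],
    )
    # Reads are scoped to the partition, so other tests' rows are invisible.
    count = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE org_id = ?", (org_id,)
    ).fetchone()[0]
    # Clean up after the test so the shared database stays tidy.
    conn.execute("DELETE FROM orders WHERE org_id = ?", (org_id,))
    return count

print(run_test("org-sandbox-a", ["widget", "gadget"]))  # 2
print(run_test("org-sandbox-b", ["gizmo"]))             # 1
```

Because each test only ever sees rows matching its own org ID, two sandboxes can run against the same database concurrently without observing each other's data.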
Schema Change Scenarios
For scenarios that involve schema changes, we recommend two approaches:
- Local Database Setup: Developers can spin up a temporary database on their local workstations, allowing the sandbox service to connect locally. Signadot supports this model by routing cluster traffic to/from the local workstation, so it behaves like any other sandbox in the cluster.
- PR Workflow Integration: For a standardized setup, platform teams can create resource plugins to spin up a temporary, containerized database within the cluster or in the cloud. These plugins are scripts that run before and after the sandbox, managing temporary databases and ensuring clean resource tear-down after tests.
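The resource plugin pattern in the second approach can be sketched as a pair of lifecycle hooks. This is a simplified illustration, not Signadot's plugin API: a temporary SQLite file stands in for the containerized database a real plugin would provision, and the sandbox name is hypothetical.

```python
# Hypothetical sketch of a resource plugin's lifecycle: a "create" hook runs
# before the sandbox starts and a "delete" hook runs after it is removed.
# A temporary SQLite file stands in for a containerized database.
import os
import sqlite3
import tempfile

def create_hook(sandbox_name: str) -> str:
    """Provision a temporary database and apply the PR's schema changes."""
    path = os.path.join(tempfile.gettempdir(), f"{sandbox_name}.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.close()
    return path  # a real plugin would output a connection string instead

def delete_hook(db_path: str) -> None:
    """Tear down the temporary database once the sandbox is deleted."""
    if os.path.exists(db_path):
        os.remove(db_path)

db = create_hook("sandbox-example")
print(os.path.exists(db))  # True: database exists for the sandbox's lifetime
delete_hook(db)
print(os.path.exists(db))  # False: resources are cleaned up afterwards
```

The key property is that provisioning and tear-down are tied to the sandbox's lifecycle, so temporary databases never outlive the tests that needed them.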
This tunable isolation model allows teams to balance convenience with isolation as needed, with the shared database as the default for ease, and temporary databases for cases requiring schema changes or higher isolation.
How do you handle message queue isolation in sandboxes?
Message queue isolation enables developers to independently test end-to-end application flows that involve shared message queues without impacting each other's tests. Signadot provides two primary approaches for implementing this isolation:
- Message-Level Routing: The most scalable and cost-effective approach is to enable message-level routing by propagating the routing key through message headers. Consumers can then selectively process messages based on these headers.
- Dynamic Queue Creation: Alternatively, you can create dedicated message queue topics or queues on demand using resource plugins. This approach connects sandboxed producers and consumers to newly created topics.
For detailed implementation examples and configuration guides, refer to our message queue isolation guide and tutorial.
Is distributed tracing required to use Signadot?
No. You only need the ability to propagate request headers, typically implemented using OpenTelemetry libraries. This header propagation enables Signadot to route requests to the appropriate sandboxed services.
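Header propagation amounts to copying a small set of context headers from each incoming request onto every outgoing request, which OpenTelemetry libraries automate. A minimal sketch, with the `traceparent`/`baggage` names taken from the W3C propagation standards and the baggage value purely illustrative:

```python
# Minimal sketch of request-header propagation (what OpenTelemetry
# instrumentation automates): copy the context-propagation headers from the
# incoming request onto outgoing requests so the routing key survives each
# hop. The baggage value below is illustrative.

PROPAGATED_HEADERS = {"traceparent", "tracestate", "baggage"}

def outgoing_headers(incoming: dict) -> dict:
    """Forward only the context-propagation headers to downstream calls."""
    return {k: v for k, v in incoming.items() if k.lower() in PROPAGATED_HEADERS}

incoming = {
    "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
    "baggage": "routing-key=sbx-abc",     # illustrative routing key entry
    "content-type": "application/json",   # not a context header; dropped
}
print(outgoing_headers(incoming))
```

As long as every service forwards these headers, the routing key reaches each hop of the request path; no trace collection or tracing backend is required.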
How can I test features that span multiple Pull Requests?
When testing changes across multiple PRs, you can combine routing to multiple sandboxes using RouteGroups. Each RouteGroup has its own routing key, allowing you to test request flows that interact with all sandboxes within the group. This enables testing of complex feature implementations that span multiple services and PRs.
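Conceptually, a RouteGroup maps one routing key to several sandboxes, each forking a different service. The sketch below is illustrative only (the service and sandbox names are hypothetical, and this is not Signadot's routing implementation): a request carrying the group's key resolves to the sandboxed version of any service in the group, and to baseline otherwise.

```python
# Conceptual sketch of RouteGroup resolution (names are hypothetical): one
# routing key fans out to several sandboxed workloads, so a single request
# exercises all PRs in the group together.

route_group = {
    "routing_key": "feature-xyz",
    "routes": {                      # service -> sandboxed workload
        "checkout": "checkout-pr-sandbox",
        "payments": "payments-pr-sandbox",
    },
}

def resolve(service: str, request_key) -> str:
    """Pick the workload that should serve this request."""
    if request_key == route_group["routing_key"]:
        return route_group["routes"].get(service, f"{service}-baseline")
    return f"{service}-baseline"

print(resolve("checkout", "feature-xyz"))  # checkout-pr-sandbox
print(resolve("payments", "feature-xyz"))  # payments-pr-sandbox
print(resolve("checkout", None))           # checkout-baseline
```

Services not in the group fall through to baseline, so a RouteGroup only needs to name the services that actually changed across the PRs under test.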
What are the advantages of running automated tests as Signadot Jobs vs CI tools?
Running automated tests through Signadot Jobs offers several key benefits over traditional CI-based testing:
Security and Access: Jobs run within your Kubernetes cluster, with native access to internal services. This eliminates the need to expose internal endpoints for external testing.
Cost Optimization: Testing within your existing cloud environment can be more cost-effective than running tests in CI vendor environments, especially for high-volume testing.
Centralized Management: Signadot provides a unified platform for managing diverse test types, from API and integration tests to end-to-end, smoke, and performance tests.
Kubernetes-Native Scaling: Leverage Kubernetes' auto-scaling capabilities to run tests in parallel across your engineering organization, improving testing throughput and developer productivity.