This guide shows you how to create a fixture, wire it into the test harness, and use it from a test. You will build an HTTP echo server as a running example and then learn how to share fixtures across suites, handle missing dependencies, manage containers, and add structured options.
## Prerequisites

- Follow the *write tests* guide to scaffold a project and install `tenzir-test`.
- Make sure your project root already contains `fixtures/`, `inputs/`, and `tests/` directories (they can be empty).
## Register the fixture

`tenzir-test` imports `fixtures/__init__.py` automatically. Each module you import there registers its `@fixture()`-decorated functions on startup:
```python
from . import http  # noqa: F401 (side effect: register fixture)
```

## Implement a fixture

A fixture is a generator decorated with `@fixture()`. It sets up a resource, yields a dictionary of environment variables that tests can read, and cleans up in a `finally` block. Here is a minimal HTTP echo server:
```python
from tenzir_test import fixture

@fixture()
def http():
    server = start_echo_server()  # your setup logic
    try:
        yield {"HTTP_FIXTURE_URL": server.url}  # expose to tests
    finally:
        server.shutdown()  # always clean up
```

The harness calls the generator once per fixture activation. Everything before `yield` is setup, the dictionary becomes environment variables, and the `finally` block runs regardless of whether the test passes or fails. Fixtures also receive a per-test scratch directory via `TENZIR_TMP_DIR` for temporary files.
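To see that lifecycle in isolation, here is a self-contained sketch that drives a plain generator the way the harness drives a fixture. The URL, file name, and setup logic are made up for illustration; no real server is started:

```python
import os
import tempfile

def http():
    """Setup before `yield`, env mapping at `yield`, cleanup in `finally`."""
    scratch = os.environ.get("TENZIR_TMP_DIR") or tempfile.mkdtemp()
    log_path = os.path.join(scratch, "echo.log")
    open(log_path, "w").close()  # setup: create a scratch file
    try:
        yield {"HTTP_FIXTURE_URL": "http://127.0.0.1:8080"}  # exported to the test
    finally:
        os.remove(log_path)  # runs whether the test passes or fails

# Roughly what the harness does with the generator:
gen = http()
env = next(gen)   # run setup, capture the env mapping
print(env["HTTP_FIXTURE_URL"])
gen.close()       # raises GeneratorExit at the yield, so `finally` runs
```

Closing the generator is what guarantees cleanup: `GeneratorExit` propagates through the `yield`, and the `finally` block executes even if the test body failed.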
## Use the fixture in a test

Request a fixture by name in the test’s frontmatter. The harness starts it before the test runs and exports its environment variables:
```tql
---
fixtures: [http]
---

from {x: 42, y: "foo"}
http env("HTTP_FIXTURE_URL"), body=this
```

## Capture the baseline

Run the harness in update mode to record the expected output:
```sh
uvx tenzir-test --update
```

This creates `tests/http/echo-read.txt` with the fixture’s response. Subsequent runs compare live output against this baseline. Add `--debug` to see fixture lifecycle logs, or set `TENZIR_TEST_DEBUG=1` in CI.
## Share a fixture across a suite

By default each test gets its own fixture lifecycle. To start a fixture once and share it across multiple tests, declare a suite in a directory-level `test.yaml`:
```yaml
suite: smoke-http
fixtures: [http]
timeout: 45
```

Every test file in that directory joins the suite. The harness starts the fixture once, runs all members in lexicographic order, and tears it down afterwards. Suites pin to a single worker, but different suites still run in parallel when `--jobs` permits.
Tests inside a suite inherit `fixtures`, `timeout`, and `retry` from the suite configuration and cannot override them in frontmatter. Other keys like `inputs:` remain overridable per file.
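For example, a single suite member could still point at its own input directory in its frontmatter; the directory name below is hypothetical:

```tql
---
inputs: inputs/http-large
---
```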
Run just the suite directory to focus on it:

```sh
uvx tenzir-test tests/http
```

Selecting a single file inside a suite fails fast with a descriptive error to keep the lifecycle predictable.
## Handle unavailable fixtures

Fixtures that depend on external tools (a container runtime, a cloud CLI) should raise `FixtureUnavailable` when the dependency is missing:
```python
import shutil

from tenzir_test.fixtures import FixtureUnavailable, fixture

@fixture()
def mysql():
    if not shutil.which("docker"):
        raise FixtureUnavailable("docker not found")
    # ...
```

By default this causes a test failure. To convert it into a skip, add a structured skip entry to the suite’s `test.yaml`:
```yaml
suite: mysql-integration
fixtures: [mysql]
skip:
  on: fixture-unavailable
```

The harness marks every test in the suite as skipped and includes the exception message in the output. This opt-in design keeps suites failing loudly by default — you only suppress the failure for environments where the missing dependency is expected.
## Use container runtime helpers

When a fixture manages a single container directly rather than orchestrating services through Docker Compose, the `container_runtime` module (`tenzir_test.fixtures.container_runtime`) handles the repetitive parts: finding a runtime, running containers, polling for readiness, and tearing down.
A container-backed fixture follows four steps:

1. **Detect the runtime.** `detect_runtime()` probes the system for Podman first, then Docker, and returns a `RuntimeSpec`. When no runtime is found it returns `None` — raise `FixtureUnavailable` so the suite can skip gracefully.
2. **Start the container.** `start_detached(runtime, args)` runs `<runtime> run -d` and returns a `ManagedContainer` handle. Pass the same flags you would use on the command line (port mappings, environment variables, image name).
3. **Wait for readiness.** `wait_until_ready(probe, ...)` calls your probe function repeatedly until it returns `(True, observation)`. On timeout it raises `ContainerReadinessTimeout` with the context string and the last observation, so you can tell why the service did not come up.
4. **Clean up.** Call `container.stop()` in a `finally` block. The `ManagedContainer` handle also exposes `exec()`, `inspect_json()`, `is_running()`, and `copy_from()` for anything you need during the test.
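The readiness-polling step (step 3) can be sketched in plain Python. This is a simplified stand-in, not the library’s implementation — the parameter names and defaults here are assumptions for illustration:

```python
import time

class ContainerReadinessTimeout(RuntimeError):
    """Simplified stand-in for the library's timeout error."""

def wait_until_ready(probe, timeout=5.0, interval=0.05, context="service"):
    """Poll `probe` until it returns (True, observation) or time runs out."""
    deadline = time.monotonic() + timeout
    observation = None
    while time.monotonic() < deadline:
        ready, observation = probe()
        if ready:
            return observation
        time.sleep(interval)
    # Report the last observation so the failure explains itself.
    raise ContainerReadinessTimeout(f"{context}: last observation {observation!r}")

# A probe that only succeeds on its third call, like a slow-starting service.
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 3, f"attempt {calls['n']}"

print(wait_until_ready(probe, timeout=2.0, interval=0.01))
```

Keeping the last observation in the exception is the key design point: on timeout you see *what* the probe last reported, not just that it timed out.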
The `example-project/fixtures/container.py` file in this repository shows the pattern applied end-to-end.
## Add structured options

When a fixture needs runtime configuration — a custom port, a TLS toggle, a database name — declare a frozen dataclass and pass it to `@fixture()`:
```python
from dataclasses import dataclass

from tenzir_test import current_options, fixture

@dataclass(frozen=True)
class HttpOptions:
    port: int = 0

@fixture(options=HttpOptions)
def http():
    opts = current_options("http")
    server = start_echo_server(port=opts.port)
    # ...
```

Every field needs a default so that bare requests (`fixtures: [http]`) keep working. Test authors provide values via a mapping in `test.yaml` or frontmatter:
```yaml
fixtures:
  - http:
      port: 9090
```

The harness constructs the dataclass from the YAML mapping. Nested dataclasses work too — the harness walks the type annotations recursively. See the test framework reference for the full options API.
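The recursive construction can be illustrated with a self-contained sketch. The `build` helper and the `TlsOptions` field are hypothetical — a simplified model of what the harness does, not its actual code:

```python
from dataclasses import dataclass, fields, is_dataclass

@dataclass(frozen=True)
class TlsOptions:
    enabled: bool = False

@dataclass(frozen=True)
class HttpOptions:
    port: int = 0
    tls: TlsOptions = TlsOptions()

def build(cls, mapping):
    """Construct a dataclass from a nested mapping, recursing into
    dataclass-typed fields; absent keys fall back to field defaults."""
    kwargs = {}
    for f in fields(cls):
        if f.name in mapping:
            value = mapping[f.name]
            if is_dataclass(f.type) and isinstance(value, dict):
                value = build(f.type, value)  # nested dataclass
            kwargs[f.name] = value
    return cls(**kwargs)

opts = build(HttpOptions, {"port": 9090, "tls": {"enabled": True}})
print(opts)
```

Because every field has a default, `build(HttpOptions, {})` still succeeds, which is why bare `fixtures: [http]` requests keep working.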
## Control fixtures from Python tests

The declarative workflow (`fixtures: [http]`) covers most cases. When a Python-mode test needs to start, stop, or restart a fixture explicitly — for example to simulate a crash — use `acquire_fixture()`:
```python
# runner: python
with acquire_fixture("http") as http:
    env = http.env
    # exercise the system while the fixture runs
```

```python
# start a fresh instance
http = acquire_fixture("http")
http.start()
http.stop()
```

The controller wraps the registered fixture factory. `start()` enters the generator and stores the environment mapping on `controller.env`; `stop()` triggers the `finally` block. Use the context manager form when a single lifecycle suffices, or call `start()`/`stop()` manually when you need multiple cycles.
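Those start/stop semantics can be mimicked with a tiny controller around a generator. This is a simplified stand-in for illustration, not the harness’s actual controller class:

```python
class FixtureController:
    """Minimal stand-in: start() advances the generator to its yield and
    stores the env mapping; stop() closes it, running the finally block."""

    def __init__(self, factory):
        self._factory = factory
        self._gen = None
        self.env = None

    def start(self):
        self._gen = self._factory()
        self.env = next(self._gen)  # setup runs, env mapping captured

    def stop(self):
        self._gen.close()  # triggers the fixture's `finally` block
        self._gen = None

events = []

def http():
    events.append("setup")
    try:
        yield {"HTTP_FIXTURE_URL": "http://127.0.0.1:8080"}
    finally:
        events.append("teardown")

ctl = FixtureController(http)
ctl.start(); ctl.stop()  # first lifecycle
ctl.start(); ctl.stop()  # a second, fresh instance
print(events)
```

Each `start()` creates a brand-new generator, which is what makes repeated crash/restart cycles possible from a single test.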
## Fixture hooks

Fixtures can advertise extra operations by returning a `FixtureHandle` with named hooks:
```python
@fixture()
def node():
    process = _start_node()
    return FixtureHandle(
        env=_make_env(process),
        teardown=lambda: process.terminate(),
        hooks={"kill": lambda sig=SIGTERM: process.send_signal(sig)},
    )
```

Test authors access hooks as attributes on the controller. Assert their presence so tests fail immediately when the contract changes:

```python
node = acquire_fixture("node")
node.start()
assert hasattr(node, "kill")
node.kill(signal.SIGKILL)
node.stop()
```

## Stabilise flaky scenarios
Section titled “Stabilise flaky scenarios”Fixture-backed tests may occasionally need retries when a service takes longer
to initialise. Add retry to the frontmatter:
---fixtures: [http]retry: 4---The number is the total attempt budget. Treat this as a temporary safety net and investigate persistent flakes — long retry chains mask race conditions.