A Python SDK for defining workflow steps with dependency management.
```bash
uv init my_project
cd my_project
uv add bridge-sdk@git+https://github.com/poolsideai/bridge-sdk.git
```

Create the package directory and files:

```bash
mkdir -p my_project
touch my_project/__init__.py
```

Your project should look like this:
```text
my_project/
├── pyproject.toml
└── my_project/
    ├── __init__.py
    └── steps.py
```
Create `my_project/steps.py`:

```python
from typing import Annotated

from pydantic import BaseModel

from bridge_sdk import Pipeline, step_result

pipeline = Pipeline(
    name="my_pipeline",
    description="Process and transform data",
)


class ProcessInput(BaseModel):
    value: str


class ProcessOutput(BaseModel):
    result: str


@pipeline.step
def process_data(input_data: ProcessInput) -> ProcessOutput:
    return ProcessOutput(result=f"processed: {input_data.value}")


@pipeline.step
def transform_data(
    input_data: ProcessInput,
    previous: Annotated[ProcessOutput, step_result(process_data)],
) -> ProcessOutput:
    return ProcessOutput(result=f"{previous.result} -> {input_data.value}")
```

Steps are grouped under a pipeline using the `@pipeline.step` decorator. Dependencies between steps are declared with `step_result` annotations — the DAG is inferred automatically.
Add the following to your `pyproject.toml`:

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.bridge]
modules = ["my_project.steps"]
```

Note: The `[build-system]` section is required for your modules to be importable. Recent versions of `uv init` may not generate it by default.
Then sync to install your project in development mode:

```bash
uv sync
```

The Bridge orchestrator expects a `main.py` at the root of your project that imports and runs the CLI:

```python
from bridge_sdk.cli import main

if __name__ == "__main__":
    main()
```

To test locally, you can use the `uv run bridge` commands:
```bash
uv sync
uv run bridge check           # Validate setup
uv run bridge config get-dsl  # Get step definitions
uv run bridge run --step process_data --input '{"input_data": {"value": "test"}}' --results '{}'
```

As your project grows, you may want to split steps across multiple files. There are two ways to set this up:
Option A: List each module explicitly

```text
my_project/
├── pyproject.toml
└── my_project/
    └── steps/
        ├── __init__.py
        ├── ingestion.py
        └── transform.py
```

```toml
[tool.bridge]
modules = ["my_project.steps.ingestion", "my_project.steps.transform"]
```

This is the most explicit approach — each module is listed individually. If you forget to add a new module, you'll get a clear error rather than silently missing steps.
Option B: Re-export from `__init__.py`

Import all step modules from your package's `__init__.py` and point `tool.bridge` to the package:

```python
# my_project/steps/__init__.py
from .ingestion import *
from .transform import *
```

```toml
[tool.bridge]
modules = ["my_project.steps"]
```

This keeps your `tool.bridge` config short, but you must remember to update `__init__.py` whenever you add a new step file.
| Command | Description |
|---|---|
| `bridge check` | Validate project setup |
| `bridge config get-dsl` | Export step, pipeline, and eval definitions as JSON |
| `bridge run --step <name> --input <json> --results <json>` | Execute a step |
| `bridge eval run --eval <name> --context <json>` | Execute an eval |
`config get-dsl`:

- `--modules` - Override modules from config
- `--output-file` - Write output to file (default: `/tmp/config_get_dsl/dsl.json`)

`run`:

- `--step` - Step name (required)
- `--input` - Input JSON (required)
- `--results` - Cached results JSON from previous steps
- `--results-file` - Path to results JSON file
- `--output-file` - Write result to file

`eval run`:

- `--eval` - Eval name (required)
- `--context` - Context JSON string, or `@filepath` to read from file (required)
- `--output-file` - Write result to file
- `--modules` - Override modules from config
Pipelines are the recommended way to organize steps. A pipeline groups related steps together, and the dependency graph (DAG) between steps is automatically inferred from `step_result` annotations.
```python
from bridge_sdk import Pipeline

pipeline = Pipeline(
    name="my_pipeline",
    description="My data processing pipeline",
)


@pipeline.step
def my_step(value: str) -> str:
    return f"result: {value}"
```

Use `step_result` to declare that a step depends on the output of another step:
```python
from typing import Annotated

from pydantic import BaseModel

from bridge_sdk import Pipeline, step_result

pipeline = Pipeline(name="data_pipeline")


class RawData(BaseModel):
    content: str


class CleanedData(BaseModel):
    content: str
    word_count: int


class Summary(BaseModel):
    text: str


@pipeline.step
def fetch_data() -> RawData:
    return RawData(content="raw content from source")


@pipeline.step
def clean_data(
    raw: Annotated[RawData, step_result(fetch_data)],
) -> CleanedData:
    cleaned = raw.content.strip().lower()
    return CleanedData(content=cleaned, word_count=len(cleaned.split()))


@pipeline.step
def summarize(
    data: Annotated[CleanedData, step_result(clean_data)],
) -> Summary:
    return Summary(text=f"Processed {data.word_count} words")
```

This defines a three-step DAG: `fetch_data` → `clean_data` → `summarize`. No explicit wiring is needed — the graph is derived from the `step_result` annotations.
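Conceptually, the dependency inference amounts to signature introspection over `Annotated` metadata. Here is a minimal stdlib-only sketch of the idea — an illustration, not the SDK's actual implementation (the `StepDep` marker class is hypothetical):

```python
from typing import Annotated, get_args, get_origin, get_type_hints


class StepDep:
    """Stand-in for the marker object that step_result attaches."""

    def __init__(self, fn):
        self.fn = fn


def step_result(fn):
    return StepDep(fn)


def infer_edges(*steps):
    """Return (dependency, dependent) edges by inspecting Annotated params."""
    edges = []
    for fn in steps:
        hints = get_type_hints(fn, include_extras=True)
        for name, hint in hints.items():
            if name == "return" or get_origin(hint) is not Annotated:
                continue
            # Annotated[T, meta1, meta2, ...] -> metadata lives after the type
            for meta in get_args(hint)[1:]:
                if isinstance(meta, StepDep):
                    edges.append((meta.fn.__name__, fn.__name__))
    return edges


def fetch_data() -> str:
    return "raw"


def clean_data(raw: Annotated[str, step_result(fetch_data)]) -> str:
    return raw.strip()


print(infer_edges(fetch_data, clean_data))  # [('fetch_data', 'clean_data')]
```

The key point is that dependencies ride along in the type annotation itself, so no separate wiring API is needed.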
`@pipeline.step` accepts the same options as the standalone `@step` decorator:
```python
@pipeline.step(
    name="custom_name",  # Override function name
    description="Does something useful",
    setup_script="setup.sh",
    post_execution_script="cleanup.sh",
    metadata={"type": "agent"},
)
def my_step() -> str:
    return "done"
```

Use `credential_bindings` to inject credentials from Bridge into your step's environment. The dictionary key is the credential UUID registered in Bridge, and the value is the environment variable name the credential will be exposed as at runtime.
```python
@pipeline.step(
    credential_bindings={
        "a1b2c3d4-5678-90ab-cdef-1234567890ab": "MY_API_KEY",
        "f0e1d2c3-b4a5-6789-0abc-def123456789": "DB_PASSWORD",
    },
)
def my_step() -> str:
    import os

    api_key = os.environ["MY_API_KEY"]
    return "authenticated"
```

Optionally, use `SandboxDefinition` to customize the compute resources for a step's sandbox environment:
```python
from bridge_sdk import Pipeline, SandboxDefinition

pipeline = Pipeline(name="my_pipeline")


@pipeline.step(
    sandbox_definition=SandboxDefinition(
        image="python:3.11-slim",
        cpu_request="2",
        memory_request="4Gi",
        memory_limit="8Gi",
        storage_request="50Gi",
    ),
)
def my_step() -> str:
    return "done"
```

All fields are optional:
| Field | Description | Example |
|---|---|---|
| `image` | Container image | `"python:3.11-slim"` |
| `cpu_request` | CPU request | `"2"` |
| `memory_request` | Memory request | `"4Gi"` |
| `memory_limit` | Memory limit | `"8Gi"` |
| `storage_request` | Storage request | `"50Gi"` |
| `storage_limit` | Storage limit | `"100Gi"` |
Pipelines can be triggered by external webhook events. Webhook endpoints (signature verification, secrets, idempotency) are configured in Console. The SDK declares actions that reference an endpoint by name and define filtering/transformation logic via CEL.
```python
from bridge_sdk import Pipeline, WebhookPipelineAction

pipeline = Pipeline(
    name="on_issue_create",
    webhooks=[
        WebhookPipelineAction(
            name="linear-issues",
            # branch determines where this webhook is indexed from and which
            # version of the pipeline code runs when it fires. The webhook
            # won't exist until this branch is indexed.
            branch="main",
            on='payload.type == "Issue" && payload.action == "create"',
            transform='{"process_issue": {"issue_id": payload.data.id, "title": payload.data.title}}',
            webhook_endpoint="linear_issues",
        ),
    ],
)
```

Each webhook action uses CEL expressions that receive `payload` (the parsed JSON body) and `headers` (HTTP headers as `map(string, string)`):
- `name` — Unique name for this webhook action within the pipeline + branch.
- `branch` — The git branch this webhook is indexed from and whose pipeline code runs when it fires. The webhook only exists after that branch is indexed, and incoming events execute the pipeline version from that branch. This lets you run different versions of the same pipeline (e.g. `"main"` for development, `"production"` for stable).
- `on` — Returns `bool`. The action fires only when this evaluates to `true`.
- `transform` — Returns `map(string, map(string, dyn))` keyed by step name. Maps webhook payload fields into step inputs.
- `webhook_endpoint` — Name of the webhook endpoint configured in Console (e.g. `"linear_issues"`).
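For intuition, the `on` and `transform` expressions from the Linear example above behave like these Python equivalents (illustrative only — the real expressions are evaluated by a CEL engine, and the payload field values here are hypothetical):

```python
# Sample payload shaped like the Linear example above (hypothetical values)
payload = {
    "type": "Issue",
    "action": "create",
    "data": {"id": "ISS-123", "title": "Fix login bug"},
}


# Equivalent of: payload.type == "Issue" && payload.action == "create"
def on(payload):
    return payload["type"] == "Issue" and payload["action"] == "create"


# Equivalent of the transform CEL map: step name -> step input fields
def transform(payload):
    return {
        "process_issue": {
            "issue_id": payload["data"]["id"],
            "title": payload["data"]["title"],
        }
    }


if on(payload):
    print(transform(payload))
```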
```python
WebhookPipelineAction(
    name="custom-alerts",
    branch="staging",
    on='payload.status == "firing" && payload.severity == "critical"',
    transform='{"handle_alert": {"alert_id": payload.alert_id, "message": payload.message}}',
    webhook_endpoint="monitoring_alerts",
)
```

See `examples/webhook_example.py` and `examples/webhook_generic_example.py` for complete working examples.
Async step functions are also supported:

```python
@pipeline.step
async def async_step(value: str) -> str:
    return f"async result: {value}"
```

Deprecated: Standalone `@step` is deprecated and will be removed in a future release. Use `@pipeline.step` instead.
```python
from bridge_sdk import step


@step
def my_step(value: str) -> str:
    return f"result: {value}"
```

Evals are quality measurement functions that automatically run when steps or pipelines complete. Define an eval with `@bridge_eval`, then bind it on `@step`, `@pipeline.step`, or `Pipeline(...)` using `eval_bindings`.
```python
from typing import TypedDict, Any

from bridge_sdk import bridge_eval, EvalResult, StepEvalContext


class QualityMetrics(TypedDict):
    accuracy: float
    followed_format: bool


@bridge_eval
def quality_check(ctx: StepEvalContext[Any, Any]) -> EvalResult[QualityMetrics]:
    is_correct = ctx.step_output.answer == ctx.step_input.expected
    return EvalResult(
        metrics={"accuracy": 1.0 if is_correct else 0.0, "followed_format": True},
        result="Looks good",  # Optional structured result value
    )
```

The eval function's first parameter type determines what it evaluates:
- `StepEvalContext[I, O]` — evaluates a step (receives step input/output)
- `PipelineEvalContext[I, O]` — evaluates a pipeline (receives all step results)
Use `Any` for generic evals that work with any step, or specific types for type-safe evals.
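To illustrate the type-safety point, here is a stdlib-only sketch using a simplified stand-in for the context class (the real `StepEvalContext` comes from `bridge_sdk`; `TaskInput`/`TaskOutput` are hypothetical models):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

I = TypeVar("I")
O = TypeVar("O")


@dataclass
class StepEvalContext(Generic[I, O]):  # simplified stand-in, not the SDK class
    step_input: I
    step_output: O


@dataclass
class TaskInput:
    expected: str


@dataclass
class TaskOutput:
    answer: str


# Parameterizing the context lets type checkers verify field access:
# ctx.step_output is known to be TaskOutput, so .answer is checked statically.
def exact_match(ctx: StepEvalContext[TaskInput, TaskOutput]) -> float:
    return 1.0 if ctx.step_output.answer == ctx.step_input.expected else 0.0


print(exact_match(StepEvalContext(TaskInput("42"), TaskOutput("42"))))  # 1.0
```

With `StepEvalContext[Any, Any]` the same eval would accept any step, at the cost of static checking on the input/output fields.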
```python
from bridge_sdk import step, on_branch, sample


@step(
    eval_bindings=[
        (quality_check, on_branch("main")),
        (llm_judge, on_branch("main") & sample(0.1)),
    ]
)
def my_step(input: TaskInput) -> TaskOutput:
    ...
```

Pipeline eval bindings are specified in the `Pipeline` constructor:
```python
from bridge_sdk import Pipeline, always

pipeline = Pipeline(
    name="my_pipeline",
    eval_bindings=[(pipeline_quality, always())],
)
```

Conditions control when evals run:
| Condition | Description |
|---|---|
| `always()` | Every execution (default) |
| `never()` | Never run |
| `on_branch("main")` | Only on the specified branch |
| `sample(0.1)` | 10% of executions |
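As a rough mental model, combinable conditions like these could be implemented as predicate objects that overload `&` and `|` — an illustrative sketch only, not the SDK's actual code:

```python
import random


class Condition:
    def __init__(self, predicate):
        self.predicate = predicate

    def evaluate(self, ctx):
        return self.predicate(ctx)

    def __and__(self, other):
        # Both sub-conditions must pass
        return Condition(lambda ctx: self.evaluate(ctx) and other.evaluate(ctx))

    def __or__(self, other):
        # Either sub-condition may pass
        return Condition(lambda ctx: self.evaluate(ctx) or other.evaluate(ctx))


def always():
    return Condition(lambda ctx: True)


def on_branch(name):
    return Condition(lambda ctx: ctx.get("branch") == name)


def sample(rate):
    return Condition(lambda ctx: random.random() < rate)


cond = on_branch("main") | on_branch("staging")
print(cond.evaluate({"branch": "staging"}))  # True
print(cond.evaluate({"branch": "dev"}))      # False
```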
Combine with `&` (and) and `|` (or):

```python
on_branch("main") & sample(0.1)           # Both must pass
on_branch("main") | on_branch("staging")  # Either passes
```

To run an eval locally:

```bash
uv run bridge eval run \
  --eval quality_check \
  --context '{"step_name": "my_step", "step_input": {...}, "step_output": {...}, "metadata": {}}' \
  --output-file /tmp/eval_result.json
```

The `BridgeSidecarClient.start_agent()` method returns a tuple of `(_, session_id, status)`, where `status` is just a success/fail message, not the actual agent output.
To get the agent's response, you must:
- Instruct the agent in the prompt to write its output to a specific file
- Read that file after the agent completes
```python
import json

from bridge_sdk import step
from bridge_sdk.bridge_sidecar_client import BridgeSidecarClient

OUTPUT_FILE = "/tmp/agent_output.json"

PROMPT = """Do some analysis and write your results.

CRITICAL: You MUST write your output to {output_file} using the write tool.

Output JSON structure:
{{
  "result": "your analysis here",
  "details": ["item1", "item2"]
}}
"""


@step
def run_agent() -> dict:
    prompt = PROMPT.format(output_file=OUTPUT_FILE)

    with BridgeSidecarClient() as client:
        _, session_id, _ = client.start_agent(
            prompt=prompt,
            agent_name="my-agent",
        )

    # Read the output from the file the agent wrote
    try:
        with open(OUTPUT_FILE, "r") as f:
            output_data = json.load(f)
        response = json.dumps(output_data, indent=2)
    except (FileNotFoundError, json.JSONDecodeError) as e:
        response = f"Error reading output file: {e}"

    return {"session_id": session_id, "response": response}
```

Use `content_parts` to send text and image inputs alongside the prompt. Example file: `examples/multimodal_agent_example.py`.
```python
from bridge_sdk import Pipeline
from bridge_sdk.bridge_sidecar_client import BridgeSidecarClient

pipeline = Pipeline(name="multimodal_agent_example")


@pipeline.step(metadata={"type": "agent"})
def analyze_image() -> tuple[str, str]:
    content_parts = [
        {"type": "text", "text": "Describe the image and list notable objects."},
        {
            "type": "image_url",
            "image_url": {
                "url": "https://upload.wikimedia.org/wikipedia/commons/7/70/Example.png"
            },
        },
    ]
    with BridgeSidecarClient() as client:
        _, session_id, res = client.start_agent(
            prompt="Analyze the attached image.",
            agent_name="Malibu",
            content_parts=content_parts,
        )
    return session_id, res
```

To continue an agent session (preserving context from a previous step):
```python
from bridge_sdk.bridge_sidecar_client import BridgeSidecarClient
from bridge_sdk.proto.bridge_sidecar_pb2 import ContinueFrom, RunDetail

with BridgeSidecarClient() as client:
    _, session_id, _ = client.start_agent(
        prompt=prompt,
        agent_name="my-agent",
        continue_from=ContinueFrom(
            previous_run_detail=RunDetail(
                agent_name="my-agent",
                session_id=previous_session_id,
            ),
            continuation=ContinueFrom.NoCompactionStrategy(),
        ),
    )
```

```python
from bridge_sdk import (
    # Pipelines & Steps
    Pipeline,               # Pipeline class for grouping steps
    PipelineData,           # Pydantic model for pipeline metadata
    PIPELINE_REGISTRY,      # Global registry of discovered pipelines
    step,                   # Standalone step decorator (deprecated)
    step_result,            # Annotation helper for step dependencies
    StepFunction,           # Type for decorated step functions
    StepData,               # Pydantic model for step metadata
    SandboxDefinition,      # Optional compute resources for a step's sandbox
    WebhookPipelineAction,  # Webhook action definition
    STEP_REGISTRY,          # Global registry of discovered steps
    get_dsl_output,         # Generate DSL from registry
    # Evals
    bridge_eval,            # Decorator for defining eval functions
    EvalFunction,           # Type for decorated eval functions
    EvalData,               # Pydantic model for eval metadata
    EVAL_REGISTRY,          # Global registry of discovered evals
    EvalResult,             # Return type for eval functions
    StepEvalContext,        # Context for step-level evals
    PipelineEvalContext,    # Context for pipeline-level evals
    EvalBindingSpec,        # Type alias for eval binding entries
    EvalBindingData,        # Pydantic model for eval binding metadata
    Condition,              # Base class for eval conditions
    always,                 # Condition: always run
    never,                  # Condition: never run
    on_branch,              # Condition: run on specific branch
    sample,                 # Condition: run on percentage of executions
)

from bridge_sdk.bridge_sidecar_client import BridgeSidecarClient
from bridge_sdk.proto.bridge_sidecar_pb2 import ContinueFrom, RunDetail
```

A GitHub Action is included to automatically index your repository in Bridge whenever code is pushed. It triggers indexing and polls until complete.
Add `.github/workflows/bridge-index.yml` to your repo:

```yaml
name: Bridge Index Branch

on:
  push:

jobs:
  index:
    runs-on: ubuntu-latest
    steps:
      - uses: poolsideai/bridge-sdk/action@main
        with:
          repository_id: ${{ secrets.BRIDGE_REPOSITORY_ID }}
          token: ${{ secrets.BRIDGE_API_TOKEN }}
          api_base_url: ${{ secrets.BRIDGE_API_URL }}
```

Then add these as repository secrets: `BRIDGE_REPOSITORY_ID`, `BRIDGE_API_TOKEN`, and `BRIDGE_API_URL`.
| Input | Required | Default | Description |
|---|---|---|---|
| `repository_id` | yes | — | Bridge repository ID |
| `token` | yes | — | Bearer token for API auth |
| `api_base_url` | yes | — | Base URL for the Poolside API |
| `ref` | no | `github.ref_name` | Branch or tag name to index |
| `commit_sha` | no | `github.sha` | Commit SHA to index |
| `poll_interval` | no | `5` | Seconds between status polls |
| `poll_timeout` | no | `300` | Max seconds to wait for indexing |
| Output | Description |
|---|---|
| `commit_id` | The Bridge commit ID, available for downstream steps |
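For example, a downstream step can consume the output by giving the action step an `id` (a hypothetical workflow fragment; the action inputs and the `commit_id` output name are from the tables above):

```yaml
steps:
  - uses: poolsideai/bridge-sdk/action@main
    id: index
    with:
      repository_id: ${{ secrets.BRIDGE_REPOSITORY_ID }}
      token: ${{ secrets.BRIDGE_API_TOKEN }}
      api_base_url: ${{ secrets.BRIDGE_API_URL }}
  - name: Use the indexed commit
    run: echo "Bridge commit ${{ steps.index.outputs.commit_id }}"
```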
```bash
make venv   # Create virtual environment
make sync   # Install dependencies
make proto  # Generate protocol buffers
make test   # Run tests
```

This project uses pre-commit hooks to automatically update `uv.lock` when `pyproject.toml` changes.
Setup:

```bash
uv sync
uv run pre-commit install
```

What it does:

- Automatically runs `uv lock` when you commit changes to `pyproject.toml`
- Ensures `uv.lock` is always in sync with dependencies
- Adds the updated `uv.lock` to your commit automatically
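Such a hook is typically wired up via astral-sh's `uv-pre-commit` repo; a sketch of what the `.pre-commit-config.yaml` entry could look like (this repo's actual config may differ, and the `rev` pin is hypothetical):

```yaml
repos:
  - repo: https://github.com/astral-sh/uv-pre-commit
    rev: 0.5.0  # hypothetical pin; use the current release
    hooks:
      - id: uv-lock
```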
Manual run:

```bash
uv run pre-commit run --all-files
```

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Copyright 2026 Poolside, Inc.