Public starters

Start from working code.

Four small starters that protect one risky action first, keep the workflow moving, and write real diagnostics you can inspect.

Each starter uses the published SDKs, runs against the live API, and is intentionally small enough to copy into a real builder repo.

Run doctor first

npx @verifiedx-core/sdk doctor
(or, with the CLI installed: verifiedx doctor)

OpenAI Agents

Protect a risky tool call in an OpenAI Agents runner.

Updates an internal workflow, attempts an external email, then lets VerifiedX block the bad send and replan to internal Slack.

  • Uses `@openai/agents` plus the native VerifiedX adapter.
  • Shows a blocked external side effect and safe continuation.
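In outline, the guard boundary in this starter works like the sketch below. Everything here (the `guard` function, the `Decision` shape, the internal-domain policy) is a hypothetical illustration of the pattern, not the published VerifiedX adapter API or the `@openai/agents` surface:

```typescript
// Hypothetical sketch: names like `Decision` and `guard` are
// illustrative, not the published VerifiedX adapter API.
type Decision = { action: "allow" } | { action: "block"; reason: string };

// A stand-in policy: external email is the one risky action we protect.
function guard(tool: string, args: Record<string, string>): Decision {
  if (tool === "send_email" && !args.to.endsWith("@ourcompany.com")) {
    return { action: "block", reason: "external recipient" };
  }
  return { action: "allow" };
}

// The runner consults the guard before each side effect and replans
// to an internal Slack message when the email is blocked.
function runTool(tool: string, args: Record<string, string>): string {
  const decision = guard(tool, args);
  if (decision.action === "block") {
    return runTool("post_slack", { channel: "#support", text: args.body });
  }
  return `${tool} ok`;
}

const outcome = runTool("send_email", { to: "user@example.com", body: "update" });
```

The point of the shape: the workflow never stops on a block; the blocked send is replaced by a safe internal action and the run continues.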

LangGraph

Protect graph state and store writes without a model loop.

Allows a grounded durable memory write, then blocks a graph update that tries to continue on a bad assumption.

  • Model-free on purpose so the LangGraph boundary is obvious.
  • Shows both allow and `replan_required` in one run.
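The allow/`replan_required` split can be sketched as a single decision at the write boundary. The `checkWrite` function and the `grounded` flag below are assumptions for illustration, not the VerifiedX or LangGraph API:

```typescript
// Hypothetical sketch of guarding state/store writes at the graph
// boundary; `checkWrite` and its decision strings are illustrative.
type WriteDecision = "allow" | "replan_required";

interface StoreWrite {
  kind: "memory" | "state";
  grounded: boolean; // does the write trace back to evidence in the run?
}

function checkWrite(w: StoreWrite): WriteDecision {
  // A grounded durable-memory write passes; a state update built on an
  // ungrounded assumption forces the graph to replan instead.
  return w.grounded ? "allow" : "replan_required";
}

const memoryWrite = checkWrite({ kind: "memory", grounded: true });
const badUpdate = checkWrite({ kind: "state", grounded: false });
```

Because the starter is model-free, both outcomes are visible in one deterministic run.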

Vercel AI SDK

Protect `generateText()` without changing the app flow.

Uses the Vercel AI SDK tool loop to update a workflow, attempt an unsafe email, and keep the case moving with a Slack fallback.

  • Uses `generateText()` plus the native VerifiedX adapter.
  • Works as the smallest entry point before adding streaming or UI wiring.
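"Without changing the app flow" means the protection wraps each tool's `execute()` so the `generateText()` call site stays untouched. The `protect` wrapper and the Slack fallback below are a hypothetical sketch of that idea, not the published VerifiedX adapter:

```typescript
// Hypothetical sketch: wrap a tool's execute() so the call site that
// passes tools to generateText() does not change. `protect` and the
// policy are illustrative, not the VerifiedX adapter API.
type Execute = (args: { to?: string; text: string }) => string;

const slackFallback = {
  execute: (args: { text: string }) => `slack: ${args.text}`,
};

function protect(name: string, execute: Execute): Execute {
  return (args) => {
    const unsafe = name === "sendEmail" && !args.to?.endsWith("@ourcompany.com");
    if (unsafe) {
      // Blocked side effect: fall back to Slack so the case keeps moving.
      return slackFallback.execute({ text: args.text });
    }
    return execute(args);
  };
}

// The app still hands the tool to its loop unchanged; only the
// execute function is wrapped.
const sendEmail = protect("sendEmail", (args) => `email to ${args.to}`);
const result = sendEmail({ to: "user@example.com", text: "case update" });
```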

Claude + MCP

Protect Claude tool calls delivered through MCP.

Exposes support tools over an in-process MCP server, then lets VerifiedX block an unsafe external email and replan to internal Slack.

  • Uses Claude Agent SDK, MCP, and the native VerifiedX adapter.
  • Proves the action boundary still works when the tool surface is MCP.
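The reason the boundary survives MCP: the check lives inside the tool handler, so it runs no matter how the call arrives. The dispatch table and handler names below are a hypothetical sketch, not the Claude Agent SDK, MCP, or VerifiedX API:

```typescript
// Hypothetical sketch: the guard sits inside the MCP tool handler, so
// the boundary holds no matter how Claude reaches the tool. All names
// here are illustrative.
type ToolResult = { content: string };

const handlers: Record<string, (args: Record<string, string>) => ToolResult> = {
  send_email: (args) => {
    if (!args.to.endsWith("@ourcompany.com")) {
      // Unsafe external send: replan to internal Slack instead.
      return handlers.post_slack({ channel: "#support", text: args.body });
    }
    return { content: `emailed ${args.to}` };
  },
  post_slack: (args) => ({ content: `slack ${args.channel}: ${args.text}` }),
};

// An MCP tool call is ultimately a named tool plus arguments; the
// in-process server dispatches it through the same guarded handler.
const mcpResult = handlers.send_email({ to: "ext@example.com", body: "update" });
```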

Protect one action first

Use the same runtime you already ship. Start with one risky action, inspect the diagnostics, then widen.
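"One risky action, then inspect the diagnostics" can be reduced to a single wrapper that records a decision for every call. The `protectOne` helper and the diagnostic shape are assumptions for illustration, not the VerifiedX diagnostic format:

```typescript
// Minimal sketch of "protect one action first": wrap a single risky
// function and record a diagnostic per decision. The diagnostic shape
// is an assumption, not the VerifiedX format.
interface Diagnostic { tool: string; decision: "allow" | "block"; reason?: string }

const diagnostics: Diagnostic[] = [];

function protectOne<A extends unknown[], R>(
  tool: string,
  risky: (...args: A) => R,
  isSafe: (...args: A) => boolean,
  fallback: (...args: A) => R,
): (...args: A) => R {
  return (...args) => {
    if (isSafe(...args)) {
      diagnostics.push({ tool, decision: "allow" });
      return risky(...args);
    }
    diagnostics.push({ tool, decision: "block", reason: "policy" });
    return fallback(...args);
  };
}

// Protect exactly one action: external email. Everything else ships as-is.
const send = protectOne(
  "send_email",
  (to: string) => `sent to ${to}`,
  (to: string) => to.endsWith("@ourcompany.com"),
  () => "posted to #support instead",
);

const blocked = send("user@example.com");
const allowed = send("ops@ourcompany.com");
```

Once the diagnostics look right for that one action, widening is just wrapping the next tool the same way.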