# Guard

Benmebrouk edited this page Apr 12, 2026 · 1 revision
Guard is a hallucination firewall: it checks whether LLM output is supported by the source documents you provide and returns a verdict with a confidence score.
```python
result = client.guard(
    text="The contract allows 90-day returns.",
    source_context="Our policy: returns within 14 business days.",
    mode="lexical",
)
```
```python
result.verdict           # "verified" | "rejected" | "weak"
result.action            # "allow" | "block" | "review"
result.confidence        # 0.0 to 1.0
result.is_safe           # True if verdict == "verified"
result.is_blocked        # True if action == "block"
result.claims[0].reason  # "numerical_mismatch" | "partial_match" | None
```

| Verdict | Meaning | Action |
|---|---|---|
| verified | All claims supported by the source | allow |
| rejected | Claims contradict the source | block |
| weak | Partial or insufficient evidence | review |
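The verdict-to-action mapping in the table can be written as a plain lookup. This is an illustrative sketch only, not part of the library; the real client already returns `result.action` for you:

```python
# Illustrative only: the verdict -> action mapping from the table above.
# The actual client computes result.action itself.
VERDICT_TO_ACTION = {
    "verified": "allow",
    "rejected": "block",
    "weak": "review",
}

def action_for(verdict: str) -> str:
    """Look up the action the table prescribes for a given verdict."""
    return VERDICT_TO_ACTION[verdict]
```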
| Mode | Speed | Best For |
|---|---|---|
| lexical | <1ms | Token overlap, fast checks |
| hybrid | ~50ms | Token + semantic embedding |
| semantic | ~500ms | Full embedding similarity |
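As a rough intuition for what the lexical mode measures, here is a minimal token-overlap score. This is a sketch of the general technique, not the library's actual implementation, which may normalize, stem, and weight tokens:

```python
def token_overlap(claim: str, source: str) -> float:
    # Fraction of the claim's tokens that also appear in the source.
    # A crude stand-in for a lexical check (illustrative only).
    claim_tokens = set(claim.lower().split())
    source_tokens = set(source.lower().split())
    if not claim_tokens:
        return 0.0
    return len(claim_tokens & source_tokens) / len(claim_tokens)
```

A claim like "14 days" scores 1.0 against "returns within 14 business days", while text sharing no tokens with the source scores 0.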
A mock client is available for offline tests:

```python
from wauldo import MockHttpClient

mock = MockHttpClient()
r = mock.guard("60 days", "14 days")
# verdict: "rejected" (the mock detects numerical contradictions)
```

A typical integration wraps generation with a guard check and falls back when the answer is blocked:

```python
def safe_answer(question, context):
    answer = your_llm.generate(question, context)
    check = client.guard(text=answer, source_context=context)
    if check.is_blocked:
        return "Could not verify this answer."
    return answer
```