🦀 Wauldo Rust SDK

Verified RAG for Rust — trust score on every answer


Your LLM passes demos. It fails in production.

One import, two lines — plug Wauldo Guard on top of your existing RAG pipeline and get a numeric trust_score + verdict (SAFE / CONFLICT / UNVERIFIED / BLOCK) on every response.



Rust 1.70+ · MIT · async tokio runtime · reproducible bench: wauldo-leaderboard


Quickstart (30 seconds)

[dependencies]
wauldo = "0.7"
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }

Guard — catch hallucinations in 3 lines

use wauldo::{HttpClient, HttpConfig, Result};

#[tokio::main]
async fn main() -> Result<()> {
    let client = HttpClient::new(
        HttpConfig::new("https://api.wauldo.com").with_api_key("YOUR_API_KEY"),
    )?;

    let result = client.guard(
        "Returns are accepted within 60 days.",
        "Our policy allows returns within 14 days.",
        None,
    ).await?;
    println!("Verdict: {}", result.verdict);             // "rejected"
    println!("Reason: {:?}", result.claims[0].reason);   // Some("numerical_mismatch")
    Ok(())
}

Verified RAG — upload, ask, verify

client.rag_upload("Our refund policy allows returns within 60 days...", Some("policy.txt".into())).await?;
let result = client.rag_query("What is the refund policy?", None).await?;
println!("Answer: {}", result.answer);  // Verified answer with sources

Try the demo | Get a free API key


Why Wauldo (and not standard RAG)

Typical RAG pipeline

retrieve → generate → hope it's correct

Wauldo pipeline

retrieve → extract facts → generate → verify → return or refuse

If the answer can't be verified, it returns "insufficient evidence" instead of guessing.

See the difference

Document: "Refunds are processed within 60 days"

Typical RAG:  "Refunds are processed within 30 days"     ← wrong
Wauldo:       "Refunds are processed within 60 days"     ← verified
              or "insufficient evidence" if unclear       ← safe
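
The verify-or-refuse step above can be sketched in plain Rust with no SDK at all. Note that verify_or_refuse, grounded, and trust_score here are illustrative stand-ins for the checks Wauldo performs server-side, not part of the crate's API:

fn verify_or_refuse(answer: &str, grounded: bool, trust_score: f32) -> String {
    // Only hand the answer to the user when verification passed;
    // otherwise refuse rather than guess. The 0.8 threshold is arbitrary.
    if grounded && trust_score >= 0.8 {
        answer.to_string()
    } else {
        "insufficient evidence".to_string()
    }
}

fn main() {
    // Verified claim passes through unchanged.
    println!("{}", verify_or_refuse("Refunds are processed within 60 days", true, 0.93));
    // Ungrounded claim is refused instead of returned.
    println!("{}", verify_or_refuse("Refunds are processed within 30 days", false, 0.41));
}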

Try locally (no server needed)

Explore every feature using MockHttpClient — no API key, no server, no network:

use wauldo::MockHttpClient;

#[tokio::main]
async fn main() {
    let client = MockHttpClient::with_defaults();

    // Upload + query
    let _ = client.rag_upload("Your document text...", None).await.unwrap();
    let result = client.rag_query("What is the refund policy?", None).await.unwrap();
    println!("Answer: {}", result.answer);
    println!("Grounded: {}", result.grounded().unwrap_or(false));

    // Guard — catch hallucinations
    let result = client.guard(
        "Returns within 60 days.",
        "Policy allows returns within 14 days.",
        None,
    ).await.unwrap();
    println!("Verdict: {}", result.verdict);  // "rejected"
}

Run the bundled examples:

cargo run --example quickstart
cargo run --example analytics_demo

Examples

Guard — catch hallucinations

let result = client.guard(
    "Returns are accepted within 60 days of purchase",
    "Our return policy allows returns within 14 days.",
    None, // defaults to "lexical" mode
).await?;

println!("{}", result.verdict);             // "rejected"
println!("{}", result.action);              // "block"
println!("{:?}", result.claims[0].reason);  // Some("numerical_mismatch")
println!("{}", result.is_blocked());        // true

Guard verifies any LLM output against source documents. Wrong answers get blocked before they reach your users. Modes: lexical (<1ms), hybrid (~50ms), semantic (~500ms).
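
Choosing a mode is a latency/recall trade-off. A small helper like this one (illustrative, not part of the SDK) can pick a mode from a per-request latency budget; the mode names match the list above:

fn pick_mode(budget_ms: u64) -> &'static str {
    // Map a latency budget in milliseconds to the cheapest mode that fits.
    match budget_ms {
        0..=9 => "lexical",   // <1ms: string-level checks only
        10..=99 => "hybrid",  // ~50ms: lexical plus lightweight semantics
        _ => "semantic",      // ~500ms: full semantic verification
    }
}

fn main() {
    println!("{}", pick_mode(5));    // lexical
    println!("{}", pick_mode(80));   // hybrid
    println!("{}", pick_mode(1000)); // semantic
}

The returned string could then go in the third argument of client.guard in place of None, assuming that parameter accepts a mode selection as the default comment in the example above suggests.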

Upload a PDF and ask questions

let upload = client.upload_file("contract.pdf", Some("Q3 Contract".into()), None).await?;
println!("Extracted {} chunks", upload.chunks_count);

let result = client.rag_query("What are the payment terms?", None).await?;
println!("Answer: {}", result.answer);
println!("Confidence: {:.0}%", result.confidence() * 100.0);

Chat (OpenAI-compatible)

use wauldo::{ChatRequest, ChatMessage};

let req = ChatRequest::new("auto", vec![ChatMessage::user("Explain ownership in Rust")]);
let resp = client.chat(req).await?;
println!("{}", resp.content());

Streaming

let req = ChatRequest::new("auto", vec![ChatMessage::user("Hello!")]);
let mut rx = client.chat_stream(req).await?;
while let Some(chunk) = rx.recv().await {
    print!("{}", chunk.unwrap_or_default());
}

Conversation

let mut conv = client.conversation()
    .with_system("You are an expert on Rust programming.")
    .with_model("auto");
let reply = conv.say("What is the borrow checker?").await?;
let follow_up = conv.say("Give me an example").await?;

Features

  • Guard API — one-call hallucination firewall, 3 modes (lexical <1ms, hybrid, semantic)
  • Verified RAG — every answer checked against source documents
  • Native PDF/DOCX upload — server-side extraction with quality scoring
  • Analytics & Insights — token savings, cache performance, per-tenant traffic
  • Smart model routing — auto-selects cheapest model that meets quality
  • OpenAI-compatible — swap your base_url, keep your existing code
  • Type-safe — full Rust type system, no unwrap in production

Error Handling

use wauldo::Error;

match client.chat(req).await {
    Ok(resp) => println!("{}", resp.content()),
    Err(Error::Server { code, message, .. }) => eprintln!("Server error [{}]: {}", code, message),
    Err(Error::Connection(msg)) => eprintln!("Connection failed: {}", msg),
    Err(Error::Timeout(msg)) => eprintln!("Timeout: {}", msg),
    Err(e) => eprintln!("Other error: {}", e),
}

RapidAPI

let config = HttpConfig::new("https://api.wauldo.com")
    .with_header("X-RapidAPI-Key", "YOUR_RAPIDAPI_KEY")
    .with_header("X-RapidAPI-Host", "smart-rag-api.p.rapidapi.com");
let client = HttpClient::new(config)?;

Free tier (300 req/month): RapidAPI


Contributing

PRs welcome! See CONTRIBUTING.md for setup instructions and guidelines. Check the good first issues.


📄 License

MIT — see LICENSE.


Built by the Wauldo team. If this changed your mind about your RAG stack, give it a ⭐.
