# Search & Learn

Catch answers that sound right but aren't supported by evidence.
## When to use
- Asking questions about unfamiliar codebases
- Exploring APIs and trying to understand how things work
- Q&A where the answer influences a decision
## The workflow
- Collect evidence. Gather spans (code, docs, logs, API responses) that you trust. If the agent has tools, it should collect these itself. Otherwise, paste them.
- Write with citations. Every factual sentence must end with citations like `[S0]` or `[S1][S2]`. If you can't cite it, label it Unknown or Assumption.
- Run the verifier. Call `detect_hallucination` with `require_citations=true` and `context_mode="cited"`.
- Revise if flagged. Rewrite to remove or downgrade unsupported claims. List what additional evidence would resolve each gap.
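The steps above can be sketched as a loop. This is a sketch under assumptions: the verifier's response schema (a `flagged` list) and the shape of the revise step are hypothetical, not the documented API.

```python
# Sketch of the write -> verify -> revise loop. The "flagged" response
# field and the revise callback are assumptions, not the real API.

def verify_and_revise(answer, spans, detect, revise, max_rounds=3):
    """Run the verifier and revise the answer until nothing is flagged."""
    flagged = []
    for _ in range(max_rounds):
        report = detect(answer=answer, spans=spans,
                        require_citations=True, context_mode="cited")
        flagged = report.get("flagged", [])  # assumed response field
        if not flagged:
            break
        answer = revise(answer, flagged)  # remove or downgrade claims
    return answer, flagged
```

Pass the real `detect_hallucination` tool as `detect`; the loop returns any still-flagged claims so you can list what additional evidence would resolve each gap.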
## Evidence pack
Strawberry does verification, not retrieval. Evidence must be collected by the agent (repo browsing, web search, experiments) or pasted by the user.
Example span types:
- S0: README excerpt describing the API contract
- S1: Code excerpt showing the implementation
- S2: Web excerpt from official docs (if used)
- S3: Experiment output (test run / curl / repro)
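One possible way to represent this pack as the `spans` argument is a list of id/text records. The field names and example texts here are assumptions for illustration; the schema Strawberry actually expects may differ.

```python
# Hypothetical span schema for the evidence pack; field names are an
# assumption, and the "text" values are invented illustrative excerpts.

spans = [
    {"id": "S0", "type": "doc",
     "text": "README: POST /items returns 201 with the created id."},
    {"id": "S1", "type": "code",
     "text": "def create_item(payload): ..."},
    {"id": "S2", "type": "web",
     "text": "Official docs: requests are rate-limited per API key."},
    {"id": "S3", "type": "experiment",
     "text": "$ curl -X POST /items -> HTTP/1.1 201 Created"},
]
```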
## Verifier settings
```
detect_hallucination(
    answer="Your answer with [S#] citations...",
    spans=[...],
    require_citations=true,
    context_mode="cited"
)
```

## Copy/paste prompt
```
Answer using **only** evidence in S0–S2.
Every factual sentence must end with citations like [S0].
If something is unknown, say "unknown from evidence."
Then run detect_hallucination(require_citations=true, context_mode="cited") and revise if flagged.
```

## What good looks like
- Every factual claim has a citation
- Unknown facts are explicitly labeled
- The answer stops at what the evidence supports
- Gaps are identified with suggestions for additional evidence
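A cheap local pre-check can catch most of these before calling the verifier. The `[S#]`-at-sentence-end convention comes from the prompt above; everything else here is a heuristic sketch, not Strawberry's actual check.

```python
# Heuristic pre-check: every sentence should end in one or more [S#]
# citations or be explicitly labeled "unknown from evidence".
import re

CITED = re.compile(r"(\[S\d+\])+\s*$")  # e.g. "[S0]" or "[S1][S2]" at end

def precheck(answer):
    """Return sentences that are neither cited nor labeled unknown."""
    problems = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        s = sentence.strip().rstrip(".!?").strip()
        if not s:
            continue
        if CITED.search(s) or "unknown from evidence" in s.lower():
            continue
        problems.append(s)
    return problems
```

Run it on a draft to get the list of sentences to cite, downgrade, or label before invoking the real verifier.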