# Examples

Before and after: what verification looks like in practice.

These examples show the difference between uncited answers (confident but unverifiable) and verified answers (citations backed by evidence spans).
## Example: Does auth validate JWT `aud` + `iss`?

You're exploring an unfamiliar codebase and asking about authentication. This is a Search & Learn workflow.
### Without Berry

Confident, but uncited. You ask: "Does auth validate JWT `aud` and `iss`? Where is it configured?" A typical answer:

- "Yes — the middleware validates issuer and audience via `jsonwebtoken.verify()` using `JWT_ISSUER` and `JWT_AUDIENCE` env vars."
- "It's configured in `config/auth.ts` and loaded from `.env`."
- "To disable audience validation, set `audience: undefined`."
**What's wrong:**

- File paths and env vars may not exist
- The "disable audience validation" advice may be wrong for this library
- It's unverifiable unless you manually hunt through the repo
### With Berry

Evidence-first, then `detect_hallucination`.

**1. Evidence pack**
```ts
// src/auth/middleware.ts
export async function auth(req, res, next) {
  const token = getBearer(req);
  const { payload } = await jwtVerify(token, getKey(), {
    issuer: process.env.JWT_ISSUER,
  });
  req.user = payload.sub;
  next();
}
```

```
# README.md
JWT_ISSUER is required. No other JWT settings
are currently supported.
```

```
# .env.example
JWT_ISSUER=
```

**2. Cited answer**
- Auth validates issuer via the `issuer:` option passed into `jwtVerify(...)`. [S0]
- The issuer value comes from `process.env.JWT_ISSUER`. [S0]
- I do not see any audience (`aud`) validation in the provided middleware; no `audience:` option is present. [S0]
- The README explicitly says `JWT_ISSUER` is required and that no other JWT settings are supported. [S1]
- `.env.example` only includes `JWT_ISSUER`. [S2]
**3. Verifier call**

```
detect_hallucination(
  answer="Auth validates issuer via...",
  spans=[S0, S1, S2],
  require_citations=true,
  context_mode="cited"
)
```

**4. Final answer**
- Issuer is validated and configured via `JWT_ISSUER`. [S0][S1][S2]
- Audience (`aud`) validation is not evidenced in S0–S2. [S0]
- If audience validation exists elsewhere, it is not shown here; provide the relevant span(s) to verify it.
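The evidence-pack discipline above can be sketched as plain data plus a citation check. This is an illustrative sketch only, not Berry's implementation: the `Span` record and the `uncited_sentences` helper are hypothetical names, and the real tool checks semantic support, not just the presence of a marker.

```python
import re
from dataclasses import dataclass

@dataclass
class Span:
    """Hypothetical evidence-span record; Berry's real schema may differ."""
    id: str      # e.g. "S0"
    source: str  # file the excerpt came from
    text: str    # verbatim excerpt

# The evidence pack from the worked example, as structured data.
spans = [
    Span("S0", "src/auth/middleware.ts",
         "jwtVerify(token, getKey(), { issuer: process.env.JWT_ISSUER })"),
    Span("S1", "README.md",
         "JWT_ISSUER is required. No other JWT settings are currently supported."),
    Span("S2", ".env.example", "JWT_ISSUER="),
]

def uncited_sentences(answer: str) -> list[str]:
    """Return answer lines that carry no [S#] citation marker."""
    lines = [s.strip() for s in answer.split("\n") if s.strip()]
    return [s for s in lines if not re.search(r"\[S\d+\]", s)]

answer = (
    "Auth validates issuer via the issuer: option passed into jwtVerify. [S0]\n"
    "The issuer value comes from process.env.JWT_ISSUER. [S0]\n"
)
assert uncited_sentences(answer) == []  # every claim carries a citation
```

The point of the structure: once every claim must name a span, an unsupported claim is detectable mechanically instead of by trust.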
### The difference

| | Without Berry | With Berry |
|---|---|---|
| Evidence | None required | Spans collected first |
| Citations | None | Every factual claim cited |
| Unknown facts | Invented or assumed | Explicitly labeled "not evidenced" |
| Verification | Trust the model | Tool checks each claim |
| Failure mode | "Yes, definitely" | "I don't know" |
## Workflow playbooks

Berry ships with workflow playbooks for different use cases. Each includes a before/after worked example.

- **Search & Learn** — Q&A, repo exploration. Uses `detect_hallucination`.
- **Generate Boilerplate** — Tests, docs, migrations, configs. Uses `audit_trace_budget`.
- **Inline Completions** — Spot-check tab-complete. Uses `audit_trace_budget`.
- **RCA Fix Agent** — Full debugging loop with verified claims.
- **Greenfield Prototyping** — Facts vs Decisions vs Assumptions.
- **Objective Optimization** — Baseline, hypothesis, experiment, measure, keep/revert.
- **Plan & Execute** — Verified planning + post-approval execution.
## Two verification tools

### `detect_hallucination`

Takes an answer with [S#] citations and checks whether each sentence is supported by the cited evidence. Use for Q&A, documentation, and any task where the output is text with claims.
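Berry's actual checker isn't shown in these docs; as a mental model, a naive version of the per-sentence check could look like the sketch below. The keyword-overlap test is a crude stand-in for real claim-support verification, and `check_sentences` is a hypothetical name.

```python
import re

def check_sentences(answer: str, spans: dict[str, str]) -> list[tuple[str, str]]:
    """Naive stand-in for a per-sentence citation check.

    Flags lines with no [S#] citation, citations to unknown spans, and
    cited spans that share no words with the claim. A real verifier
    judges semantic support, not keyword overlap.
    """
    problems = []
    for sent in filter(None, (s.strip() for s in answer.split("\n"))):
        ids = re.findall(r"\[(S\d+)\]", sent)
        if not ids:
            problems.append((sent, "no citation"))
            continue
        claim_words = set(re.findall(r"\w+", sent.lower()))
        for sid in ids:
            if sid not in spans:
                problems.append((sent, f"unknown span {sid}"))
            elif not claim_words & set(re.findall(r"\w+", spans[sid].lower())):
                problems.append((sent, f"{sid} shares no terms with the claim"))
    return problems

spans = {"S0": "issuer: process.env.JWT_ISSUER"}
assert check_sentences("Auth validates issuer. [S0]", spans) == []
assert check_sentences("Audience is validated too.", spans) == \
    [("Audience is validated too.", "no citation")]
```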
### `audit_trace_budget`

Takes a structured trace of reasoning steps — each step is a claim plus citations — and verifies that each step has sufficient evidence. Use for refactoring, bug fixes, and migrations where you want to catch "almost right" reasoning before it becomes a confident patch.
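The trace-auditing idea can be sketched in a few lines. Again, this is a toy: the `TraceStep` shape and the `audit` function are hypothetical, and Berry's real trace format and budget logic may differ.

```python
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    """Hypothetical shape of one reasoning step; the real trace format may differ."""
    claim: str
    citations: list[str] = field(default_factory=list)

def audit(trace: list[TraceStep], known_spans: set[str]) -> list[str]:
    """Flag steps that carry no evidence or cite spans that were never collected."""
    findings = []
    for i, step in enumerate(trace):
        if not step.citations:
            findings.append(f"step {i}: no citations for {step.claim!r}")
        for sid in step.citations:
            if sid not in known_spans:
                findings.append(f"step {i}: cites unknown span {sid}")
    return findings

trace = [
    TraceStep("middleware validates issuer", ["S0"]),
    TraceStep("audience is also validated"),  # no evidence collected
]
findings = audit(trace, {"S0", "S1", "S2"})
assert len(findings) == 1 and "no citations" in findings[0]
```

This is exactly the "almost right" failure the tool targets: one step in an otherwise-grounded chain has no evidence behind it, and the audit surfaces that step before it becomes a patch.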