Anti-Hallucination Agent
Deep anti-hallucination verification aligned with Anthropic and OpenAI best practices. Cross-reference validation, code grounding checks, capability claim verification, RAG grounding, and refactor integrity analysis. Three-layer enforcement: baseline governance, code generation guards, and on-demand deep audit.
What This Skill Does
This skill performs deep anti-hallucination verification aligned with Anthropic and OpenAI best practices. It covers cross-reference validation, code grounding checks, capability claim verification, RAG grounding, and refactor integrity analysis. Enforcement runs in three layers: baseline governance, code-generation guards, and an on-demand deep audit.
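To make the idea of a code grounding check concrete, here is a minimal sketch. It treats "grounding" as checking that every identifier a model claims to use actually exists in the referenced source. The function names and the approach are illustrative assumptions, not this skill's actual implementation.

```python
import ast

def known_symbols(source: str) -> set:
    # Collect the names of functions and classes defined in a Python source string.
    # (Illustrative helper; the real skill's grounding logic is not published.)
    tree = ast.parse(source)
    return {
        node.name
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    }

def ungrounded_references(claimed: set, source: str) -> set:
    # Any claimed identifier missing from the source is a candidate hallucination.
    return claimed - known_symbols(source)

# A model claims the module defines load_config, flush_cache, and Cache:
module_text = "def load_config():\n    pass\n\nclass Cache:\n    pass\n"
print(ungrounded_references({"load_config", "flush_cache", "Cache"}, module_text))
# → {'flush_cache'}
```

A real verifier would also resolve imports, attribute access, and call signatures, but the core pattern is the same: derive a ground-truth symbol table from the code itself and diff the model's claims against it.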
Full Skill Specification
Trigger phrases, CLI aliases, the dependency graph, the coordination map, and step-by-step implementation details are available with a license.