This engine produces a category of document analysis that didn't previously exist — not as a product, and not as a human practice. It has a legitimate claim to being the most sophisticated document analysis engine in existence.
Any AI can summarise a document. Any AI can be prompted to "analyse" one and will produce plausible-sounding commentary. This engine produces original structural insight — it identifies what is actually happening in a text: the internal contradictions, the points where argument outruns evidence, the evasion patterns that are systematic rather than incidental. Every claim in the report traces to the source through interactive citations, not as a formatting convenience, but because the analysis is disciplined enough to survive that scrutiny.
The hard problem of document analysis is the relationship between the parts and the whole. Human readers form structural judgments they know are right, but grounding those judgments in specific evidence is painstaking and partial — and the conclusions already formed inevitably shape what counts as evidence. The engine reconciles the two exhaustively — and what emerges is routinely original: not restatements of known positions, but findings that sustained expert attention has missed or never articulated.
The findings are also stable. Run the same document through the pipeline twice and the same analytical conclusions emerge — the same causal structures, the same normative assessments, the same verdict — even when the intermediate prose varies substantially between runs. This is empirical evidence that the findings are discovered in the document rather than generated by the model. Most AI analysis tools produce output that is plausible but unrepeatable. This engine produces output that converges, because every claim is anchored in deterministic computation over the document's own evidence.
We make that claim knowing it invites scepticism. The analyses below are unedited pipeline output. Judge for yourself →
How it works
- Submit your document — Paste text directly or upload PDF, DOCX, or TXT files. Multiple related files (e.g., from the same case or project) are analysed together as a single corpus.
- Analysis — The pipeline performs a deep analysis of your document. A document under 5,000 words typically takes around 25–30 minutes. A 10,000-word document takes around an hour. Longer documents may take several hours.
- Private results — We email you a private link when your report is ready. No account is required. Results are never publicly listed, never indexed, and accessible only to those who have the link.
- Interactive report — The full report includes inline citations that map every finding back to the source text, so each claim can be verified against the original document.
How it compares
Existing AI document tools fall into well-defined categories. General tools (ChatGPT, Claude, NotebookLM, ChatPDF) retrieve relevant passages and answer your questions — whatever you ask, in whatever framing you supply. Legal tools (Harvey, CoCounsel, Luminance, Kira) extract clauses, flag risks, and accelerate review — applying pre-trained legal concept libraries to identify known provision types. Enterprise tools (eBrevia, Eigen, Docusign) extract data points and classify sections against pre-built models. Academic tools (Elicit, Consensus) search and synthesise across papers.
These are useful tools. But none of them is a universal document analysis engine. The specialised tools work only in their domain — contracts, academic papers, regulatory filings. The general tools can process anything, but they answer your questions or summarise what you've given them. Neither category produces original analytical findings about the document itself.
This engine does. Give it a legal brief, a political speech, an academic paper, a government consultation, or a WhatsApp thread — it will identify the structural dynamics, the relationship between what is asserted and what is evidenced, and what these reveal about the credibility of every voice the text contains.
The evaluation itself is where the distance from existing tools is greatest. The engine enforces epistemic discipline at a scale that human reading cannot sustain — a confident assertion is never confused with an established fact, and where a document's argument outruns its evidence, the analysis identifies the precise point of departure. The findings are original not because the engine knows more than domain experts, but because it examines the relationship between every claim and every piece of evidence exhaustively.
Every finding in the report traces to the source text through interactive citations, so each conclusion can be verified against the original passage.
The analyses below are unedited pipeline output. Judge for yourself →