How do you use this tool?
- Paste the text you want to analyze into the input field (minimum 150 words for reliable results).
- Click Analyze — the classifier processes the text locally in your browser.
- Read the probability score: higher percentages indicate a stronger AI-authorship signal.
- Check the per-sentence heatmap to see which sentences contributed most to the AI score.
- For borderline results, try re-analyzing with just the most suspicious paragraphs.
What This Tool Does
AI writing assistants have made LLM-generated text ubiquitous in classrooms, newsrooms, and content pipelines. This tool helps you screen text for machine authorship using a browser-based statistical classifier — no account, no server, no data leaving your device.
It is designed for teachers checking student submissions, editors reviewing freelance copy, journalists verifying sources, and anyone who needs a fast first-pass opinion on whether a piece of text was written by a human or an AI model.
How Does It Work?
AI text detectors use several statistical signals that differ between human and LLM writing:
| Signal | What It Measures | Human Pattern | LLM Pattern |
|---|---|---|---|
| Perplexity | How “surprising” the word choices are | High variability | Low — LLMs choose predictable tokens |
| Burstiness | Variation in sentence length and complexity | High bursts | Uniformly smooth |
| Token distribution | Which words and phrases appear, how often | Idiosyncratic | Close to training distribution |
| Entropy | Unpredictability of the token sequence | Higher | Lower over long passages |
The classifier combines these signals using a logistic regression or a lightweight neural model, producing a probability from 0% (confidently human) to 100% (confidently AI).
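As a sketch of how that combination works, the logistic step can be written in a few lines of Python. The feature weights and bias below are illustrative placeholders, not the tool's fitted parameters; a real detector learns them from labeled human and AI corpora:

```python
import math

def ai_probability(perplexity, burstiness, entropy,
                   weights=(-0.08, -0.35, -0.9), bias=9.0):
    """Logistic combination of detector signals: sigmoid(w . x + b).

    All three signals tend to be lower for LLM output, so the
    (placeholder) weights are negative: lower signal values push
    the score toward 1.0, i.e. toward the AI end of the scale.
    """
    z = bias + (weights[0] * perplexity
                + weights[1] * burstiness
                + weights[2] * entropy)
    return 1.0 / (1.0 + math.exp(-z))  # probability in [0, 1]
```

With these placeholder weights, a text that is low on all three signals lands near 1.0 (reported as 100% AI), while a text high on all three lands near 0.0.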
What Are Common Use Cases?
- Education: Teachers and professors screen essay submissions to flag potential AI assistance for further review.
- Publishing: Book editors and magazine fact-checkers verify that freelance submissions are original human writing.
- Journalism: Reporters check quotes and contributed articles before publication.
- Content marketing: Agencies audit content libraries for AI-generated posts that may violate platform policies.
- Legal and compliance: Law firms reviewing AI-use policies screen documents for LLM authorship.
- HR: Recruiters check cover letters and work-sample submissions for AI generation.
Frequently Asked Questions
Is this useful if a student only used AI to “help” with their essay? Mixed human/AI text is the hardest case for any detector. If someone writes a draft and uses AI to improve specific sentences, the output may score anywhere from 20% to 70%. A high score is more meaningful when the text shows other signs: uniform sentence structure, generic arguments, missing personal voice.
What does “burstiness” mean in human writing? Human writers naturally vary their sentence length dramatically — a short punchy sentence followed by a complex multi-clause one. LLMs tend to produce text with much more uniform sentence length and complexity, making the rhythm feel “flat.” This statistical regularity is one of the strongest AI signals.
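One common way to quantify this is the coefficient of variation of sentence length (standard deviation divided by mean). Here is a minimal sketch; the regex-based sentence splitter is a deliberate simplification of what a production detector would use:

```python
import re
import statistics

def burstiness(text):
    """Coefficient of variation of words-per-sentence.

    Higher values reflect the varied rhythm typical of human
    writing; uniformly sized sentences score near zero.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.pstdev(lengths) / statistics.mean(lengths)
```

A paragraph mixing a two-word sentence with a twenty-word one scores an order of magnitude higher than one whose sentences are all four to five words long.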
Can this detect AI writing in languages other than English? The classifier is primarily trained on English-language corpora. Detection accuracy in other languages is substantially lower and should not be relied upon. For non-English text, use a detector specifically trained on that language.
Why do different AI detectors give different scores for the same text? Each tool uses a different underlying model, training data, and decision threshold. Disagreement across tools is common, especially in the 40–75% range. When tools disagree significantly, treat the text as ambiguous rather than AI-authored.
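The threshold difference alone accounts for part of the disagreement: the same underlying score can sit above one tool's cutoff and below another's. A toy illustration (both cutoffs are made up for the example):

```python
def verdict(ai_prob, threshold):
    """Turn a probability into a label; each tool picks its own cutoff."""
    return "AI" if ai_prob >= threshold else "human"

# A borderline 60% score flips depending on the decision threshold.
strict_tool = verdict(0.60, threshold=0.75)   # labels it "human"
lenient_tool = verdict(0.60, threshold=0.50)  # labels it "AI"
```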