How Legal Brands Get Cited by ChatGPT, Claude and Perplexity in 2026

Hugo Debrabandere

Co-founder · Clairon

Apr 29, 2026

In April 2026, the marketing director of an AmLaw 100 firm with 1,200 lawyers and a $300K Q1 thought leadership budget opened Claude on a Tuesday morning. She typed “what are the elements of negligence under New York law?” to test how her firm shows up. Claude returned four sources. Cornell LII was first, with the model code section. Justia was second, citing the leading NY Court of Appeals decisions. Wikipedia was third. A Reddit thread on r/LawSchool was fourth. Her firm, with 38 NY litigation partners, was not in the answer.

That gap is structural, not editorial. 77.67% of YMYL legal queries trigger an AI Overview in 2026, and Google and OpenAI both apply the strictest E-E-A-T scrutiny of any vertical to legal content. The result is a near-monopoly for institutional sources. Cornell LII, Justia and Wikipedia absorb the bulk of citations on common-law and statute queries before your firm ever enters the consideration set. Spending more on generic blog content does not move that math.

What does move it is a different playbook. The slot a firm or legal-tech brand can win is not the first slot, it is the fifth: attorney-bylined analysis layered on top of named primary sources. This article is the playbook: the 7 prompts to baseline your firm or product tonight, the 3 observable platform patterns we see (Justia, Cornell LII, Clio), the 3 mistakes that keep most legal brands invisible, and the 5 wins ranked by leverage to ship this quarter.

The shift in legal discovery, in 6 numbers

Legal buyers, both consumer and corporate, now route an increasing share of their first questions through ChatGPT, Claude, Perplexity and Google AI Overviews. The numbers below explain why firm marketing budgets are being reallocated this quarter.

  • 77.67% of YMYL legal queries trigger a Google AI Overview, the highest rate of any vertical (Harvard JOLT 2026).
  • 30% of ChatGPT responses involving legal citations hallucinate at least one case or holding when the answer is not grounded in an authoritative legal database (MyCase 2025 / Clio 2026).
  • 127 AI-related complaints were filed with the Florida Bar alone in 2024, and California SB 37 now requires office-of-record disclosure on attorney-published AI-assisted content.
  • Empirical testing of GPT-4 on legal queries (Tandfonline 2024) placed Cornell LII among the top three sources cited alongside Wikipedia and government legislative sites on the majority of statute prompts.
  • Pages not updated quarterly are 3× more likely to lose citations, while attorney-bylined pages with named statutes and a 40-word answer block correlate with 2.8× higher citation rates on practice-area queries.
  • AI-referred legal visitors convert at 4.4× the rate of standard organic search, and intake-stage queries (cost of representation, statute of limitations, retainer requirements) clock higher still.

The strategic takeaway is direct. Your firm or legal-tech brand is not competing with other firms for the first slot, it is competing for the fourth and fifth slot in an answer the institutional triad already owns. The leverage is in attorney-bylined analysis, named-statute density and active hallucination monitoring. Pillar context for the GEO discipline itself sits in our complete GEO guide for 2026.

Run these 7 prompts tonight to see your legal invisibility

Open Claude, ChatGPT and Perplexity in three browser tabs. Spend 5 minutes. The prompts below are written for a law firm marketing director or a legal-tech CMO testing both practice-area visibility and brand visibility. Replace the bracketed inputs with your jurisdiction, practice area or product line. If you would rather script the run, a sketch follows the list.

The 7 prompts

  1. what are the elements of [your-practice-area] under [your-jurisdiction] law. Practice-area query. Tests whether your firm is cited as commentary on top of named statutes.
  2. statute of limitations for [a real claim type] in [your-jurisdiction]. Intake-stage query. The phrasing a prospective client actually types into ChatGPT before calling anyone.
  3. [your firm name] vs [closest-named-competitor firm]. Direct comparison query. Most firms have never run this on Claude, the answer often surprises the partners.
  4. best [your-practice-area] law firm in [your-city]. City-level query. Tests Chambers and Vault visibility plus geographic E-E-A-T signals.
  5. [a recent named case in your practice area] explained for a client. Case-explainer query. Tests whether your alerts and analyses surface as citation-grade.
  6. best [legal-tech category] software for [firm size]. Legal-tech vendor query. For Clio, MyCase, LegalZoom, LexisNexis, Westlaw teams.
  7. has [your firm name] been involved in any AI hallucination case. Reputation-protection query. The Mata v. Avianca pattern, run it weekly.
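For teams that prefer the API route, here is a minimal sketch of the same 7-prompt run. It assumes the official openai and anthropic Python SDKs with API keys in the environment, and reaches Perplexity through its OpenAI-compatible endpoint. The model names are illustrative and rotate often; substitute current ones along with the bracketed placeholders.

```python
# Minimal sketch of the 7-prompt baseline run. Assumes the official openai
# and anthropic Python SDKs, with OPENAI_API_KEY, ANTHROPIC_API_KEY and
# PERPLEXITY_API_KEY set. Model names are illustrative and rotate often.
import os
from anthropic import Anthropic
from openai import OpenAI

FIRM, AREA, STATE = "[your firm name]", "[your-practice-area]", "[your-jurisdiction]"

PROMPTS = [
    f"what are the elements of {AREA} under {STATE} law",
    f"statute of limitations for [a real claim type] in {STATE}",
    f"{FIRM} vs [closest-named-competitor firm]",
    f"best {AREA} law firm in [your-city]",
    "[a recent named case in your practice area] explained for a client",
    "best [legal-tech category] software for [firm size]",
    f"has {FIRM} been involved in any AI hallucination case",
]

def ask_chat(client: OpenAI, model: str, prompt: str) -> str:
    r = client.chat.completions.create(model=model,
                                       messages=[{"role": "user", "content": prompt}])
    return r.choices[0].message.content or ""

def ask_claude(client: Anthropic, prompt: str) -> str:
    r = client.messages.create(model="claude-sonnet-4-5",  # illustrative model name
                               max_tokens=1024,
                               messages=[{"role": "user", "content": prompt}])
    return r.content[0].text

if __name__ == "__main__":
    gpt = OpenAI()
    # Perplexity exposes an OpenAI-compatible endpoint, so the same client works.
    pplx = OpenAI(api_key=os.environ["PERPLEXITY_API_KEY"],
                  base_url="https://api.perplexity.ai")
    claude = Anthropic()
    for p in PROMPTS:
        print(f"\n=== {p}")
        print("ChatGPT   :", ask_chat(gpt, "gpt-4o", p)[:300])
        print("Perplexity:", ask_chat(pplx, "sonar", p)[:300])
        print("Claude    :", ask_claude(claude, p)[:300])
```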

The scoring matrix (0 to 30)

| Citation depth ↓ / Engine breadth → | 1-2 engines | 3-4 engines | 5-6 engines |
| --- | --- | --- | --- |
| Mentioned in passing | 1-2 | 3-4 | 5-6 |
| Named in a list | 3-4 | 7-9 | 11-12 |
| Named with description | 5-7 | 11-14 | 16-18 |
| Named as recommendation | 8-10 | 15-19 | 21-24 |
| Primary, attorney byline + jurisdiction + link | 11-12 | 20-23 | 26-30 |

Score each prompt, average across all 7. Below 7 means the institutional triad is fully crowding you out, which is the starting point for most firms. 12 to 18 means you have surfaced in legal-tech queries but not practice-area ones. 20+ means you have earned the fifth slot consistently, which is the realistic ceiling for any firm that is not Justia or Cornell. The weekly measurement workflow lives in our citation share weekly playbook.
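To make the averaging mechanical, here is the matrix transcribed literally into a small scoring helper. The bucket boundaries and cell ranges come straight from the table; the `strong` flag and the sample results are illustrative choices, not part of the methodology.

```python
# The scoring matrix above, transcribed literally. Score each prompt, then
# average across all 7. Within a cell's range, pick the low end for weak
# placements and the high end for strong ones (the `strong` flag below).

# (low, high) per (citation depth, engine-breadth bucket), straight from the table.
MATRIX: dict[tuple[str, str], tuple[int, int]] = {
    ("passing", "1-2"): (1, 2),         ("passing", "3-4"): (3, 4),          ("passing", "5-6"): (5, 6),
    ("list", "1-2"): (3, 4),            ("list", "3-4"): (7, 9),             ("list", "5-6"): (11, 12),
    ("description", "1-2"): (5, 7),     ("description", "3-4"): (11, 14),    ("description", "5-6"): (16, 18),
    ("recommendation", "1-2"): (8, 10), ("recommendation", "3-4"): (15, 19), ("recommendation", "5-6"): (21, 24),
    ("primary", "1-2"): (11, 12),       ("primary", "3-4"): (20, 23),        ("primary", "5-6"): (26, 30),
}

def bucket(engines_citing: int) -> str:
    return "1-2" if engines_citing <= 2 else ("3-4" if engines_citing <= 4 else "5-6")

def score(depth: str, engines_citing: int, strong: bool = False) -> int:
    if engines_citing == 0:
        return 0
    lo, hi = MATRIX[(depth, bucket(engines_citing))]
    return hi if strong else lo

# Illustrative run: one (depth, engines-citing) pair per prompt.
results = [("passing", 2), ("list", 3), ("passing", 0), ("description", 1),
           ("passing", 1), ("recommendation", 4), ("passing", 0)]
print(f"average: {sum(score(d, n) for d, n in results) / len(results):.1f}")
# average: 4.1 -- below 7, so the institutional triad is fully crowding you out
```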

How Justia, Cornell LII and Clio dominate AI answers

Three platform archetypes, three repeatable citation patterns. The first two cannot be out-competed at their own game, the third can be matched by any legal-tech vendor that ships the same content discipline. Run the prompts, the same names come back.

Justia — the primary-source-host pattern

Prompt to test: what are the elements of negligence under New York law.

Justia surfaces because every published opinion is hosted at a stable law.justia.com URL with statute, jurisdiction and party metadata in clean HTML. That makes it a low-friction citation for LLMs that need a verifiable case URL, and it is the platform that hosts the actual Mata v. Avianca ruling cited everywhere AI hallucination is discussed. Justia’s Onward blog (Feb 2026) confirms AI Mode reformulates legal queries and pulls heavily from open-access law portals. The pattern in one sentence: Justia owns the URL of the case law itself. You do not out-Justia Justia. You cite the same case, named, and add attorney analysis on top.

Cornell LII — the .edu-authority pattern

Prompt to test: what does 47 USC 230 say.

Empirical testing of GPT-4 on legal queries (Tandfonline 2024 study) found Cornell LII among the top sources cited alongside Wikipedia and government legislative sites. The LII corpus mirrors the U.S. Code, the CFR and Supreme Court opinions with stable URLs and clean section-level anchors, exactly the structure RAG retrievers prefer. The .edu domain and decades of inbound academic links keep it parked at the top of YMYL trust signals. The pattern in one sentence: Cornell LII owns the .edu institutional anchor on the U.S. Code. Your firm cannot replicate that. What you can do is cite the LII URL directly when discussing the same statute, which compounds your own citability through corroboration.

Clio — the structured-resource-library pattern

Prompt to test: best legal practice management software for small law firms.

The Legal Tech AI Visibility Index 2026 places Clio in the near-universal tier across ChatGPT and Perplexity for practice-management prompts. Clio’s clio.com/resources and clio.com/blog operate as a structured library, with named hubs (Manage AI, Legal AI Ecosystem, ChatGPT Prompts for Lawyers) where each post answers a single question with a clear definition near the top, the exact structure AI Overviews favor. Competitor MyCase appears in answers but trails on share of voice in head-of-funnel comparison prompts. The pattern in one sentence: ship a named resource library where each page answers a single named question in the first 40 words. Any legal-tech vendor or firm can replicate this without YMYL gating, and it is the highest-leverage move for category comparison queries.

The 3 mistakes that keep most legal brands invisible

We have audited around 50 firm and legal-tech sites in the last 12 months. Three editorial mistakes account for roughly 80% of the lost citation share. None of them require new headcount to fix.

Mistake 1. No attorney byline, JD or jurisdiction

The symptom: practice-area pages bylined as “Marketing Team” or with no byline at all. AI engines apply YMYL E-E-A-T checks to legal content, and a missing licensed-attorney byline drops the page below the citation threshold regardless of how thorough the writing is. The fix: every practice-area page gets a named attorney byline with JD year, bar admission and jurisdiction, surfaced in the page header and in Person schema. Add a separate reviewed-by attorney line for AI-assisted drafts. This single change moves citation share inside 8 weeks on most firms we audit.
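For teams wiring up the fix, here is a hypothetical sketch of the byline as schema.org Person markup, rendered into the script tag the page header carries. All values are placeholders, and the property mapping (honorificSuffix for the JD, hasCredential for the bar admission) is one reasonable reading of schema.org, not a prescribed format.

```python
# Hypothetical sketch of the attorney byline as schema.org Person markup,
# rendered into the <script type="application/ld+json"> tag the page header
# carries. All values are placeholders.
import json

author = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "honorificSuffix": "JD",  # pair with the JD year in the visible byline
    "jobTitle": "Partner, Commercial Litigation",
    "worksFor": {"@type": "LegalService", "name": "[your firm name]"},
    "hasCredential": {  # bar admission + jurisdiction
        "@type": "EducationalOccupationalCredential",
        "credentialCategory": "Bar admission",
        "recognizedBy": {"@type": "Organization", "name": "New York State Bar"},
    },
}

print('<script type="application/ld+json">\n'
      + json.dumps(author, indent=2)
      + "\n</script>")
```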

Mistake 2. Statutes and cases referred to generically

The symptom: blog content that says “federal communications law protects platform liability” instead of 47 USC 230, or “the leading case on negligence” instead of Palsgraf v. Long Island Railroad. LLMs anchor YMYL answers to named statutes and cases, and a page that doesn’t name them reads as commentary the model can skip. The fix: name the statute and the leading case in the first 40 words under each H2. Link to the Cornell LII or Justia URL for the citation. Per Aggarwal et al. 2024, statistic-and-quotation density rewrites lift cited passage rate by 22% to 37%, and the same effect shows on legal pages where the “statistic” is the named statute.
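A rough way to audit this at scale: lint each practice-area page for a named statute or case in the first 40 words under every H2, plus a link to a primary-source host. A sketch assuming BeautifulSoup; the citation regexes are deliberately crude and should be tuned per practice area.

```python
# Rough lint for Mistake 2: does the first 40 words under each H2 name a
# statute or case, and does the section link to Cornell LII or Justia?
import re
from bs4 import BeautifulSoup

STATUTE = re.compile(r"\b\d+\s+U\.?S\.?C\.?\s*§?\s*\d+|\bCPLR\s*§?\s*\d+", re.I)
CASE = re.compile(r"\b[A-Z][\w.]+\s+v\.\s+[A-Z][\w.]+")
PRIMARY_HOSTS = ("law.cornell.edu", "law.justia.com")

def audit(html: str) -> None:
    soup = BeautifulSoup(html, "html.parser")
    for h2 in soup.find_all("h2"):
        words, hrefs = [], []
        for sib in h2.find_next_siblings():
            if sib.name == "h2":  # stop at the next section
                break
            words += sib.get_text(" ", strip=True).split()
            hrefs += [a.get("href", "") for a in sib.find_all("a")]
        first40 = " ".join(words[:40])
        named = bool(STATUTE.search(first40) or CASE.search(first40))
        linked = any(h in href for href in hrefs for h in PRIMARY_HOSTS)
        flag = "OK " if named and linked else "FIX"
        print(flag, h2.get_text(strip=True), f"(named={named}, linked={linked})")

with open("practice-area-page.html") as f:  # placeholder path
    audit(f.read())
```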

Mistake 3. No state-bar compliance footer

The symptom: AI-assisted legal content published without ABA Model Rule 7.1 wording, without the California SB 37 office-of-record disclosure, without a clear attorney-advertising notice. The compliance gap is itself a citation gap, because engines down-weight legal pages that lack the visible disclosure patterns trusted competitors carry. The fix: a single firm-standard footer with attorney advertising disclosure, office-of-record (state, address, supervising attorney) and a last-reviewed date that matches the dateModified schema. Removes legal exposure and lifts citability in one move.

The 5-step quick win this quarter

Five moves, ranked by leverage, sequenced for a 90-day window. A law firm marketing director or legal-tech CMO can ship all five inside the resources already on the team, no new agency required.

Add attorney byline + JD + jurisdiction to every practice-area page

Highest leverage. The single biggest YMYL signal AI engines look for on legal content. Named attorney, JD year, bar number, jurisdiction, surfaced in the page header and in Person schema. Add a separate reviewed-by line for AI-assisted drafts. Citation share moves inside 8 weeks on most firms after this single change.

Cite named statutes and named cases verbatim

Rewrite the first 40 words under each H2 to name the statute (47 USC 230, NY CPLR § 214) and the leading case (Palsgraf v. Long Island Railroad). Link directly to Cornell LII or Justia for the citation. Stop saying “federal communications law” or “the leading case.” Models cite specifics, not generalities.

Ship a state-bar compliance footer firm-wide

ABA Model Rule 7.1 attorney-advertising disclosure, California SB 37 office-of-record (state, address, supervising attorney), last-reviewed date matching dateModified schema. One template, deployed across every page. Reduces legal exposure on AI-assisted content and lifts citability through the trust-signal pattern AI engines reward.
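One way to guarantee the visible last-reviewed date and the dateModified schema never drift: render both from the same variable. A sketch with placeholder disclosure wording; the actual Rule 7.1 and SB 37 copy should come from the supervising attorney.

```python
# Sketch of a firm-standard footer where the visible last-reviewed date and
# the dateModified in the page schema are rendered from the same variable.
# Disclosure wording below is placeholder text, not approved copy.
import json
from datetime import date

def compliance_footer(state: str, address: str, supervising: str,
                      reviewed: date) -> str:
    iso = reviewed.isoformat()
    schema = json.dumps({"@context": "https://schema.org",
                         "@type": "Article",
                         "dateModified": iso}, indent=2)
    return f"""<footer class="bar-compliance">
  <p>Attorney Advertising. Prior results do not guarantee a similar outcome.</p>
  <p>Office of record: {address}, {state}. Supervising attorney: {supervising}.</p>
  <p>Last reviewed: {iso}</p>
</footer>
<script type="application/ld+json">
{schema}
</script>"""

print(compliance_footer("CA", "123 Market St, San Francisco",
                        "Jane Doe, JD", date(2026, 4, 1)))
```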

Run a weekly hallucination audit on your firm name and practice areas

Run the 7 prompts above weekly across all 6 engines, plus one prompt template tuned to detect fabricated case law involving the firm name. Document any hallucinated case, the prompt that produced it and any competitor named. File with the state bar when a competitor benefits. Flag the response in your AI visibility dashboard so the next refresh cycle addresses the gap.
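The detection half of that audit can be partially automated. Below is a sketch that extracts case-style citations from an engine’s answer and flags any not on the firm’s verified matter list. The regex is crude (a capitalized word directly before a party name gets swallowed), so treat hits as leads for attorney review, not findings.

```python
# Sketch of the fabricated-case check: pull case-style citations out of an
# engine's answer and flag any not on the firm's verified matter list.
import re

VERIFIED_MATTERS = {"Smith v. Jones", "Acme Corp. v. Widget LLC"}  # placeholder list

CASE = re.compile(r"\b([A-Z][\w'&.]*(?:\s+[A-Z][\w'&.]*)*\s+v\.\s+"
                  r"[A-Z][\w'&.]*(?:\s+[A-Z][\w'&.]*)*)")

def unverified_citations(answer: str) -> list[str]:
    cited = {c.rstrip(".") for c in CASE.findall(answer)}
    return sorted(c for c in cited if c not in VERIFIED_MATTERS)

# Varghese v. China Southern Airlines is one of the six cases ChatGPT
# fabricated in the Mata v. Avianca sanctions matter.
answer = ("The firm prevailed in Smith v. Jones and, per the engine, "
          "was also counsel in Varghese v. China Southern Airlines.")
for case in unverified_citations(answer):
    print("UNVERIFIED CITATION:", case)  # document prompt, engine, date; escalate
```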

Refresh practice-area pages every 90 days with one new case citation

Justia and Cornell update constantly. A practice-area page that hasn’t added a 2026 case citation in 6 months decays in citation share at roughly 4% per month. Set a 90-day refresh cadence on the top 20 commercial pages. One new case citation per refresh, one updated statute reference, one updated last-reviewed date.

What’s next

Three concrete next moves, ordered by what your week looks like before the next partner meeting.

  1. Run a free Legal AI visibility audit. Drop your firm or legal-tech domain, get a baseline citation share score across all 6 engines and a practice-area scorecard in 60 seconds. Audit your firm or product now.
  2. Read the pillar guide. The complete GEO guide for 2026 unpacks the Citation Trinity, the 5-stage AI search pipeline and the engine-by-engine deltas your content team needs before re-templating practice-area pages.
  3. Go deeper on Claude. Claude weights Cornell LII, Justia and Wikipedia as canonical legal authorities more heavily than the other engines. Claude AI citation strategies covers the editorial pattern firms use to earn the fifth slot specifically on Claude.

Two follow-ups are in the production queue for this vertical. The MOFU playbook on rolling out a practice-area GEO program across an AmLaw 100 firm without burning the existing intake funnel, and the BOFU comparison of the AI visibility platforms legal procurement actually approves with state-bar compliance and attorney-bylined workflow. Both ship inside 30 days.

Legal brands in 2026 are not competing with each other for the first slot. They are competing with Cornell, Justia and Wikipedia for the fifth slot, and the fifth slot is decided by attorney byline, named statute, named case and state-bar compliant footer. Ship those four and the next prompt your prospect runs on Claude returns your firm.

Frequently asked questions

Why does ChatGPT keep citing Justia and Cornell LII instead of my firm?
Justia hosts the actual case law your prompt references and Cornell LII mirrors the U.S. Code under .edu, both with stable URLs and clean section anchors. LLMs anchor YMYL legal answers to primary sources first, so a firm-authored explainer only earns a citation when it adds attorney-attributed analysis on top of a clearly named statute or case. Without a JD byline, jurisdiction and a direct answer in the first 40 words, your page reads as commentary the model can skip.
Can law firms safely use AI to draft GEO content?
Yes, but not unsupervised. ABA Model Rule 7.1, Florida Bar enforcement (127 AI-related complaints in 2024) and California SB 37 all require licensed-attorney review before publication, plus office-of-record disclosure on every page. The safe pattern is AI-drafted, attorney-edited, attorney-bylined content that cites statutes and cases by name and avoids invented results or testimonials. The same review loop also raises citation rates because it forces the E-E-A-T signals AI engines reward.
How do I prevent hallucinated case law from being cited about my firm?
Run weekly prompts checking specific case citations across all 6 engines, focused on your practice areas. Flag any answer that fabricates case names, dockets or holdings (the Mata v. Avianca pattern, where ChatGPT invented six fake cases). When you find one, document the exact prompt, response and any competitor named, then file with the relevant state bar if your firm or competitor is affected. Active monitoring is now standard hygiene for AmLaw 100 firms.
How long until a legal brand sees citation share lift?
On practice-area pages with attorney bylines, named statute citations and SB 37 disclosure, citation share moves within 8 to 12 weeks. Generic legal explainers compete head-on with Cornell LII and Justia and rarely move. Track practice-area citation share separately from firm-level share. Gains on legal-tech vendor queries (Clio, MyCase, LegalZoom) come faster, often within 4 to 6 weeks after a structured comparison page ships.
Should BigLaw firms target AI visibility differently than legal tech?
Yes. BigLaw competes on Chambers and Vault placement, named-partner thought leadership and earned analyst citations (LexisNexis, Westlaw, ALM). Legal tech competes on category comparison content (Clio vs MyCase, LegalZoom vs Rocket Lawyer), Capterra and G2 reviews and named-customer case studies. Two different prompt sets, two different content workflows, track them separately.
What is the right baseline prompt set size for a law firm?
100 to 200 prompts at the firm level, weekly cadence, 6 engines. Split 60% practice-area queries (negligence, M&A, employment, IP, securities), 25% jurisdictional queries (NY, CA, TX, federal), 15% reputation queries (firm name, named partners, Chambers ranking). Add 30 prompts per major practice area. Keep one prompt template tuned to detect hallucinated case law involving the firm.
Are AI citations worth more than Google rankings for law firms?
On consultation conversion, yes. AI-driven legal visitors convert at 4.4× the rate of standard organic search across compiled 2025 analyses, with intake-stage queries clocking even higher. On volume, no. Google still drives the majority of legal traffic. The strategic answer is to fund both and accept that the AI visibility line item is now a board-level YMYL question, not a marketing experiment.