Summarize this article with AI
In April 2026, the marketing director of an AmLaw 100 firm with 1,200 lawyers and a $300K Q1 thought leadership budget opened Claude on a Tuesday morning. She typed “what are the elements of negligence under New York law?” to test how her firm shows up. Claude returned four sources. Cornell LII was first, with the model code section. Justia was second, citing the leading NY Court of Appeals decisions. Wikipedia was third. A Reddit thread on r/LawSchool was fourth. Her firm, with 38 NY litigation partners, was not in the answer.
That gap is structural, not editorial. 77.67% of YMYL legal queries trigger an AI Overview in 2026, and Google and OpenAI both apply the strictest E-E-A-T scrutiny of any vertical to legal content. The result is a near-monopoly for institutional sources. Cornell LII, Justia and Wikipedia absorb the bulk of citations on common-law and statute queries before your firm ever enters the consideration set. Spending more on generic blog content does not move that math.
What does move it is a different playbook. The slot a firm or legal-tech brand can win is not the first slot but the fifth: attorney-bylined analysis layered on top of named primary sources. This article is that playbook: the 7 prompts to baseline your firm or product tonight, the 3 observable platform patterns we see (Justia, Cornell LII, Clio), the 3 mistakes that keep most legal brands invisible, and the 5 wins, ranked by leverage, to ship this quarter.
The shift in legal discovery, in 6 numbers
Legal buyers, both consumer and corporate, now route an increasing share of their first questions through ChatGPT, Claude, Perplexity and Google AI Overviews. The numbers below explain why firm marketing budgets are being reallocated this quarter.
- 77.67% of YMYL legal queries trigger a Google AI Overview, the highest rate of any vertical (Harvard JOLT 2026).
- 30% of ChatGPT responses involving legal citations hallucinate at least one case or holding when the answer is not grounded in an authoritative legal database (MyCase 2025 / Clio 2026).
- 127 AI-related complaints were filed with the Florida Bar alone in 2024, and California SB 37 now requires office-of-record disclosure on attorney-published AI-assisted content.
- Empirical testing of GPT-4 on legal queries (Tandfonline 2024) placed Cornell LII among the top three sources cited alongside Wikipedia and government legislative sites on the majority of statute prompts.
- Pages not updated quarterly are 3× more likely to lose citations, while attorney-bylined pages with named statutes and a 40-word answer block correlate with 2.8× higher citation rates on practice-area queries.
- AI-referred legal visitors convert at 4.4× the rate of standard organic search, and intake-stage queries (cost of representation, statute of limitations, retainer requirements) clock higher still.
The strategic takeaway is direct. Your firm or legal-tech brand is not competing with other firms for the first slot; it is competing for the fourth and fifth slots in an answer the institutional triad already owns. The leverage is in attorney-bylined analysis, named-statute density and active hallucination monitoring. Pillar context for the GEO discipline itself sits in our complete GEO guide for 2026.
Run these 7 prompts tonight to see your legal invisibility
Open Claude, ChatGPT and Perplexity in three browser tabs. Spend 5 minutes. The prompts below are written for a law firm marketing director or a legal-tech CMO testing both practice-area visibility and brand visibility. Replace the bracketed inputs with your jurisdiction, practice area or product line.
The 7 prompts
1. what are the elements of [your-practice-area] under [your-jurisdiction] law. Practice-area query. Tests whether your firm is cited as commentary on top of named statutes.
2. statute of limitations for [a real claim type] in [your-jurisdiction]. Intake-stage query. The phrasing a prospective client actually types into ChatGPT before calling anyone.
3. [your firm name] vs [closest-named-competitor firm]. Direct comparison query. Most firms have never run this on Claude; the answer often surprises the partners.
4. best [your-practice-area] law firm in [your-city]. City-level query. Tests Chambers and Vault visibility plus geographic E-E-A-T signals.
5. [a recent named case in your practice area] explained for a client. Case-explainer query. Tests whether your alerts and analyses surface as citation-grade.
6. best [legal-tech category] software for [firm size]. Legal-tech vendor query. For Clio, MyCase, LegalZoom, LexisNexis, Westlaw teams.
7. has [your firm name] been involved in any AI hallucination case. Reputation-protection query. The Mata v. Avianca pattern; run it weekly.
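If you rerun these prompts every week, a few lines of script can fill the bracketed inputs from a firm profile so the wording stays identical across runs. A minimal sketch; the profile values below are hypothetical placeholders, not recommendations:

```python
# Fill the bracketed inputs in each audit prompt from a small profile dict.
PROMPTS = [
    "what are the elements of {practice_area} under {jurisdiction} law",
    "statute of limitations for {claim_type} in {jurisdiction}",
    "{firm} vs {competitor}",
    "best {practice_area} law firm in {city}",
    "{recent_case} explained for a client",
    "best {tech_category} software for {firm_size}",
    "has {firm} been involved in any AI hallucination case",
]

# Hypothetical firm profile -- replace with your own inputs.
profile = {
    "practice_area": "negligence",
    "jurisdiction": "New York",
    "claim_type": "medical malpractice",
    "firm": "Example Firm LLP",
    "competitor": "Competitor LLP",
    "city": "New York City",
    "recent_case": "Mata v. Avianca",
    "tech_category": "practice management",
    "firm_size": "small law firms",
}

audit_prompts = [p.format(**profile) for p in PROMPTS]
for p in audit_prompts:
    print(p)
```

Paste each generated prompt into Claude, ChatGPT and Perplexity as-is; identical wording is what makes week-over-week scores comparable.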
The scoring matrix (0 to 30)
| Citation depth ↓ / Engine breadth → | 1-2 engines | 3-4 engines | 5-6 engines |
|---|---|---|---|
| Mentioned in passing | 1-2 | 3-4 | 5-6 |
| Named in a list | 3-4 | 7-9 | 11-12 |
| Named with description | 5-7 | 11-14 | 16-18 |
| Named as recommendation | 8-10 | 15-19 | 21-24 |
| Primary, attorney byline + jurisdiction + link | 11-12 | 20-23 | 26-30 |
Score each prompt (0 if you are absent from every engine), then average across all 7. Below 7 means the institutional triad is fully crowding you out, which is the starting point for most firms. 12 to 18 means you have surfaced in legal-tech queries but not practice-area ones. 20+ means you have earned the fifth slot consistently, which is the realistic ceiling for any firm that is not Justia or Cornell. The weekly measurement workflow lives in our citation share weekly playbook.
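The averaging and bucketing above can be sketched in a few lines. The thresholds are the ones stated in this section; the section leaves gaps between its named bands (7 to 11 and 19), which the sketch labels explicitly rather than guessing:

```python
# Average the 0-30 per-prompt scores across the 7 prompts and map to the
# buckets described above.
def visibility_bucket(scores):
    assert len(scores) == 7, "score all 7 prompts"
    avg = sum(scores) / len(scores)
    if avg < 7:
        label = "crowded out by the institutional triad"
    elif avg < 12:
        label = "between bands: partial visibility"   # gap the section leaves
    elif avg <= 18:
        label = "visible on legal-tech queries, not practice-area ones"
    elif avg < 20:
        label = "between bands: approaching the fifth slot"
    else:
        label = "earning the fifth slot consistently"
    return avg, label
```

Run it once per engine snapshot and keep the averages in a spreadsheet; the trend across weeks matters more than any single score.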
How Justia, Cornell LII and Clio dominate AI answers
Three platform archetypes, three repeatable citation patterns. The first two cannot be out-competed at their own game; the third can be matched by any legal-tech vendor that ships the same content discipline. Run the prompts and the same names come back.
Justia — the primary-source-host pattern
Prompt to test: what are the elements of negligence under New York law.
Justia surfaces because every published opinion is hosted at a stable law.justia.com URL with statute, jurisdiction and party metadata in clean HTML. That makes it a low-friction citation for LLMs that need a verifiable case URL, and it is the platform that hosts the actual Mata v. Avianca ruling cited everywhere AI hallucination is discussed. Justia’s Onward blog (Feb 2026) confirms AI Mode reformulates legal queries and pulls heavily from open-access law portals. The pattern in one sentence: Justia owns the URL of the case law itself. You do not out-Justia Justia. You cite the same case, named, and add attorney analysis on top.
Cornell LII — the .edu-authority pattern
Prompt to test: what does 47 USC 230 say.
Empirical testing of GPT-4 on legal queries (Tandfonline 2024 study) found Cornell LII among the top sources cited alongside Wikipedia and government legislative sites. The LII corpus mirrors the U.S. Code, the CFR and Supreme Court opinions with stable URLs and clean section-level anchors, exactly the structure RAG retrievers prefer. The .edu domain and decades of inbound academic links keep it parked at the top of YMYL trust signals. The pattern in one sentence: Cornell LII owns the .edu institutional anchor on the U.S. Code. Your firm cannot replicate that. What you can do is cite the LII URL directly when discussing the same statute, which compounds your own citability through corroboration.
Clio — the structured-resource-library pattern
Prompt to test: best legal practice management software for small law firms.
The Legal Tech AI Visibility Index 2026 places Clio in the near-universal tier across ChatGPT and Perplexity for practice-management prompts. Clio’s clio.com/resources and clio.com/blog operate as a structured library, with named hubs (Manage AI, Legal AI Ecosystem, ChatGPT Prompts for Lawyers) where each post answers a single question with a clear definition near the top, the exact structure AI Overviews favor. Competitor MyCase appears in answers but trails on share of voice in head-of-funnel comparison prompts. The pattern in one sentence: ship a named resource library where each page answers a single named question in the first 40 words. Any legal-tech vendor or firm can replicate this without YMYL gating, and it is the highest-leverage move for category comparison queries.
The 3 mistakes that keep most legal brands invisible
We have audited around 50 firm and legal-tech sites in the last 12 months. Three editorial mistakes account for roughly 80% of the lost citation share. None of them require new headcount to fix.
Mistake 1. No attorney byline, JD or jurisdiction
The symptom: practice-area pages bylined as “Marketing Team” or with no byline at all. AI engines apply YMYL E-E-A-T checks to legal content, and a missing licensed-attorney byline drops the page below the citation threshold regardless of how thorough the writing is. The fix: every practice-area page gets a named attorney byline with JD year, bar admission and jurisdiction, surfaced in the page header and in Person schema. Add a separate reviewed-by attorney line for AI-assisted drafts. This single change moves citation share within 8 weeks at most firms we audit.
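The Person schema half of the fix is small. A minimal JSON-LD sketch using standard schema.org Person properties; the attorney, school and bar details are hypothetical examples:

```python
import json

# Hypothetical attorney byline rendered as schema.org Person JSON-LD.
attorney = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "honorificSuffix": "JD",
    "jobTitle": "Partner, Litigation",
    "alumniOf": "Example Law School (JD, 2009)",   # surfaces the JD year
    "memberOf": {
        "@type": "Organization",
        "name": "New York State Bar",              # bar admission / jurisdiction
    },
    "knowsAbout": "New York negligence law",
}
print(json.dumps(attorney, indent=2))
```

Emit this inside a `<script type="application/ld+json">` tag on every practice-area page, and keep the visible header byline worded identically to the `name` field.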
Mistake 2. Statutes and cases referred to generically
The symptom: blog content that says “federal communications law protects platform liability” instead of 47 USC 230, or “the leading case on negligence” instead of Palsgraf v. Long Island Railroad. LLMs anchor YMYL answers to named statutes and cases, and a page that doesn’t name them reads as commentary the model can skip. The fix: name the statute and the leading case in the first 40 words under each H2. Link to the Cornell LII or Justia URL for the citation. Per Aggarwal et al. 2024, statistic-and-quotation density rewrites lift cited passage rate by 22% to 37%, and the same effect shows on legal pages where the “statistic” is the named statute.
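A rough editorial lint for this fix: check that the first 40 words of each H2 section contain at least one named statute or case citation. A sketch with simple regexes; the patterns cover only common U.S. citation shapes ("47 USC 230", "47 U.S.C. § 230", "Foo v. Bar") and will miss plenty of others:

```python
import re

# Very rough citation patterns -- intentionally narrow, for linting only.
STATUTE = re.compile(r"\b\d+\s+U\.?S\.?C\.?(?:\s*§)?\s*\d+\b")
CASE = re.compile(r"\b[A-Z][\w.]*\s+v\.\s+[A-Z]")

def names_authority_early(section_text, window=40):
    """True if a named statute or case appears in the first `window` words."""
    head = " ".join(section_text.split()[:window])
    return bool(STATUTE.search(head) or CASE.search(head))
```

Run it over each section during editorial review; a `False` flags a passage that reads as skippable commentary to a retriever.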
Mistake 3. No state-bar compliance footer
The symptom: AI-assisted legal content published without ABA Model Rule 7.1 wording, without the California SB 37 office-of-record disclosure, without a clear attorney-advertising notice. The compliance gap is itself a citation gap, because engines down-weight legal pages that lack the visible disclosure patterns trusted competitors carry. The fix: a single firm-standard footer with attorney advertising disclosure, office-of-record (state, address, supervising attorney) and a last-reviewed date that matches the dateModified schema. Removes legal exposure and lifts citability in one move.
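The last-reviewed date in the footer and the `dateModified` field in the page schema drift apart easily once two teams edit the same template. A sketch of the consistency check; the "Last reviewed:" wording and ISO date format are assumptions about your footer, not a standard:

```python
import re

def footer_matches_schema(page_html, jsonld):
    """True if the visible last-reviewed date equals schema dateModified."""
    m = re.search(r"Last reviewed:\s*(\d{4}-\d{2}-\d{2})", page_html)
    return bool(m) and m.group(1) == jsonld.get("dateModified")
```

Wire this into the publish step so a mismatched pair blocks the deploy rather than shipping a trust-signal contradiction.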
The 5-step quick win this quarter
Five moves, ranked by leverage, sequenced for a 90-day window. A law firm marketing director or legal-tech CMO can ship all five inside the resources already on the team, no new agency required.
Add attorney byline + JD + jurisdiction to every practice-area page
Cite named statutes and named cases verbatim
Ship a state-bar compliance footer firm-wide
Run a weekly hallucination audit on your firm name and practice areas
Refresh practice-area pages every 90 days with one new case citation
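Step 4, the weekly hallucination audit, is easiest to sustain as a generated checklist: one row per engine per prompt, scored by hand and appended to a running CSV. A minimal sketch; the firm name is a placeholder, and querying the engines themselves stays manual or goes through each vendor's own API:

```python
import csv
import datetime
import io

def weekly_audit_rows(firm, jurisdiction, practice_areas, engines):
    """Yield one dated log row per engine x prompt; score is filled by hand."""
    prompts = [f"has {firm} been involved in any AI hallucination case"]
    prompts += [f"what are the elements of {pa} under {jurisdiction} law"
                for pa in practice_areas]
    today = datetime.date.today().isoformat()
    for engine in engines:
        for prompt in prompts:
            yield [today, engine, prompt, ""]  # empty score column

# Hypothetical example run writing the checklist as CSV.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["date", "engine", "prompt", "score"])
writer.writerows(weekly_audit_rows("Example Firm LLP", "New York",
                                   ["negligence"],
                                   ["claude", "chatgpt", "perplexity"]))
print(buf.getvalue())
```

Appending each week's rows to the same file gives you the trend line a partner meeting actually wants to see.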
What’s next
Three concrete next moves, ordered by what your week looks like before the next partner meeting.
- Run a free Legal AI visibility audit. Drop your firm or legal-tech domain, get a baseline citation share score across all 6 engines and a practice-area scorecard in 60 seconds. Audit your firm or product now.
- Read the pillar guide. The complete GEO guide for 2026 unpacks the Citation Trinity, the 5-stage AI search pipeline and the engine-by-engine deltas your content team needs before re-templating practice-area pages.
- Go deeper on Claude. Claude weights Cornell LII, Justia and Wikipedia as canonical legal authorities more heavily than the other engines. Claude AI citation strategies covers the editorial pattern firms use to earn the fifth slot specifically on Claude.
Two follow-ups are in the production queue for this vertical. The MOFU playbook on rolling out a practice-area GEO program across an AmLaw 100 firm without burning the existing intake funnel, and the BOFU comparison of the AI visibility platforms legal procurement actually approves with state-bar compliance and attorney-bylined workflow. Both ship inside 30 days.
Legal brands in 2026 are not competing with each other for the first slot. They are competing with Cornell, Justia and Wikipedia for the fifth slot, and the fifth slot is decided by attorney byline, named statute, named case and state-bar compliant footer. Ship those four and the next prompt your prospect runs on Claude returns your firm.







