Private Company Research Note

Anthropic & Claude: A financial deep dive

Training cost, infrastructure stack, revenue engine, unit economics, and capital structure
Compiled by: Shreyan Basu Ray
Email: basurayshreyan@gmail.com
As of: May 7, 2026
Latest event: SpaceX Colossus 1 deal (May 6, 2026)

Compiled May 7, 2026 from public sources and curated by Shreyan Basu Ray as a guide to what might be going on inside Anthropic and how the underlying financial, infrastructure, and unit-economic mechanics may operate. This is a directional research note, not a definitive record.

Disclaimer: The figures, interpretations, estimates, and forward-looking statements in this document may be incomplete, outdated, or incorrect. Anthropic is a private company and does not publish audited financial statements. Non-disclosed line items are estimates with stated confidence tiers, and leaked or reported numbers should be treated as uncertain. This document is for informational purposes only and is not investment, legal, accounting, or business advice.

Executive summary

Anthropic spent 2025 crossing the line from “promising frontier lab” into something structurally closer to a hyperscale infrastructure company. Revenue expanded from startup scale to an estimated $9B exit run-rate within a single year, yet nearly every dollar of growth arrived attached to extraordinary compute intensity. By April 2026 the company was reportedly operating near ~$30B run-rate revenue, had raised capital at a $380B valuation, and accumulated more than $330B in long-duration infrastructure and compute commitments spanning AWS, Google Cloud, Microsoft, NVIDIA, Fluidstack, and SpaceX. The central tension in the business is now visible: demand is compounding faster than inference efficiency improvements. Anthropic's models are becoming cheaper to serve on a per-token basis, but customer behavior is scaling even faster as coding agents, enterprise copilots, and autonomous workflows move into production environments. Internal forecasts suggest the company could reach software-like gross margins by 2028, but only after surviving one of the most capital-intensive scaling phases ever seen in commercial software.

  • Apr 2026 ARR: ~$30B (+30× from Jan 2025)
  • 2025 revenue (full-year): $4.5B (~12× YoY)
  • Last valuation: $380B (Series G, Feb 2026)
  • 2025 EBITDA: −$5.2B (inference +23% vs plan)
  • 2025 training spend: $4.1B (aggregate, all models)
  • Compute backlog: $330B+ (Google, AWS, MSFT, etc.)
  • Contracted capacity: 10+ GW (by 2028, multi-partner)
  • Headcount: ~3,000 (2,300 → 3,000 in Q1 2026)
Reader note on the $30B ARR figure. April 2026 ARR jumped from $19B (March) to $30B (April). This is reported by Bloomberg/Yahoo with some sources citing $40B internal. The implied +$11B MRR-equivalent in one month is not a normal SaaS growth curve. Likely drivers: (1) one or more mega-enterprise deals booking, (2) cloud-marketplace gross revenue inclusion, (3) calendar timing of contracted bookings. Treat as directional rather than validated. Anthropic books cloud-reseller revenue (AWS Bedrock, GCP Vertex, Microsoft Foundry) on a gross basis, which inflates top-line vs. net-reporting peers like OpenAI.
SECTION 01

Methodology & data confidence framework

Anthropic operates with the opacity typical of frontier AI companies: audited financial statements are unavailable, infrastructure contracts are fragmented across multiple counterparties, and many economically important figures surface only through leaks, court disclosures, partner announcements, or secondary reporting. As a result, the objective of this report is not to claim precision where precision does not exist. The goal is to build a coherent operating picture from partially observable data. Every material figure therefore carries a confidence tier indicating whether the number is disclosed, leaked, reported, industry-estimated, or analytically derived inside this document.

Tag | Tier | Description | Examples in this report
A | Disclosed | Stated by Anthropic, partner, or in court filing | API pricing, SpaceX/AWS/Google deal terms, Series G size, $1.5B copyright settlement
B | Leaked primary | Internal documents reported by WSJ or The Information | $4.1B 2025 training spend, 40% gross margin, $5.2B 2025 EBITDA loss, 2028 forecast
C | Tier-1 reported | Bloomberg / Reuters / FT / CNBC / TechCrunch with named sources | $30B Apr 2026 ARR, $850–900B pending round, headcount
D | Industry estimate | Sacra, Epoch AI, SemiAnalysis, equity research | FLOP estimates, chip pricing, paying user counts
E | Author estimate | Derived in this report with stated method | Per-model training cost, OpEx breakdown, LTV/CAC ranges
Two accounting subtleties readers should hold onto. (i) ARR ≠ GAAP revenue: ARR is monthly revenue × 12, taken at a point in time. With 10× annual growth, full-year GAAP revenue significantly trails exit ARR (2025: $4.5B revenue vs $9B exit ARR). (ii) Gross vs net: Anthropic reports cloud-marketplace pass-through revenue gross. The Information and OpenAI commentary suggest "real" net revenue may run 25–30% lower than reported headline ARR.
SECTION 02

Revenue engine: ARR trajectory and segment mix

Annualized run-rate revenue, Jan 2024 – Apr 2026
USD billions, monthly snapshots, log-like compression on early months
Jan 2024: $87M → Dec 2024: $1B → Dec 2025: $9B → Feb 2026: $14B → Mar 2026: $19B → Apr 2026: $30B reported (see caveat)
Source: The Information; Bloomberg; Yahoo Finance; Reuters; @mwi.invest aggregation. Apr 2026 figure reported with caveat.

Pricing by tier

Tier | Price | Inclusions | Source
Free | $0 | Limited usage; no training on data by default | A
Pro | $20/mo | 5× free usage, projects, priority access | A
Max 5× | $100/mo | 5× Pro usage (~25× free) | A
Max 20× | $200/mo | 20× Pro usage (~100× free), peak priority | A
Team | $30/seat/mo annual ($35 monthly), 5-seat min | Collaboration, admin, central billing | A
Enterprise | Custom (~$60–100+/seat/mo at 5K+ seats) | SSO, audit logs, expanded context, data residency | A price; D seat math
API (Opus 4.x) | $5 / $25 per 1M input/output tokens | 90% off cached input; 50% off batch | A
API (Sonnet 4.x) | $3 / $15 per 1M input/output tokens | Same caching/batch discounts | A
API (Haiku 4.5) | $1 / $5 per 1M input/output tokens | Lowest tier | A
Claude Code | Bundled in Pro/Max + usage; enterprise ~$150–250/dev/mo | $13/dev/active-day enterprise blended | C

Revenue mix (Apr 2026 ARR-weighted)

Revenue segment mix
~$30B ARR, April 2026 reported basis
  • API / Enterprise: ~80% (~$24B)
  • Claude Code: ~8% (~$2.5B)
  • Consumer Pro/Max: ~7% (~$2.1B)
  • Team / seat-Enterprise: ~5% (~$1.5B)
D Mix derived from Sacra, The Information segment commentary, Anthropic blog posts on Code traction.

Customer footprint

Business customers A
300,000+ (Oct 2025)
Customers > $1M ARR A
1,000+ (Apr 2026), up from ~12 in 2024 — ~80× growth in 18 months
Customers > $100K ARR A
Up ~7× YoY in 2025
Fortune 10 adoption A
8 of 10
Largest single deployment A
Deloitte: 470,000 seats
Consumer free MAU D
~18.9M (web 11M + app 7.4M, Dec 2025)
Consumer paying users E
Estimated 500K–1.5M. Method: assume 3–8% conversion on 18.9M MAU, in line with typical AI chat subscription rates. Earlier published "2–4M" figures are likely overstated.
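The bounding method stated above is simple enough to show directly; the 3–8% conversion rates are this report's assumptions, not disclosed figures:

```python
# Sketch of the paying-user bound: assumed 3-8% free-to-paid conversion
# applied to the ~18.9M MAU estimate (Tier D input, Tier E output).
mau = 18.9e6
conv_low, conv_high = 0.03, 0.08

paying_low = mau * conv_low    # lower bound on paying consumers
paying_high = mau * conv_high  # upper bound on paying consumers
print(f"Estimated paying users: {paying_low/1e6:.2f}M to {paying_high/1e6:.2f}M")
```

This yields roughly 0.57M to 1.51M paying consumers, which is why the earlier published 2–4M figures look high relative to observed MAU.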
Web traffic D
613.7M visits/month claude.ai (#3 in AI, #73 globally — Similarweb, Mar 2026)
SECTION 03

Training economics: cost per model

Anthropic disclosed only two training-cost data points directly: Dario Amodei stated Claude 3.5 Sonnet cost "a few tens of millions" (~$30–40M), and the company confirmed Claude 3.7 Sonnet was a "few tens of millions" (TechCrunch, Feb 2025). Aggregate 2025 model-training spend was reported by The Information at $4.1 billion. Per-model estimates below are derived from disclosed totals, FLOP scaling (Epoch AI methodology), and chip-allocation assumptions.

Estimation methodology

Estimated final training-run cost by model
USD millions, midpoint of range. Bars include disclosed and estimated values.
  • Claude 3 Opus: ~$115M
  • Claude 3.5 Sonnet: ~$35M (disclosed)
  • Claude 3.7 Sonnet: ~$50M (disclosed)
  • Opus 4 / 4.1: ~$300M
  • Sonnet 4 / 4.5: ~$100M
  • Haiku 4.5: ~$30M
  • Opus 4.5: ~$400M
  • Sonnet 4.6: ~$150M
  • Opus 4.6: ~$500M
  • Opus 4.7: ~$650M
Method: A for 3.5/3.7 Sonnet (Anthropic CEO statements). E for all 4.x estimates: scaled from FLOP-share assumptions, calibrated to The Information's $4.1B aggregate 2025 spend. Final-run cost only; total program cost is 3–5× higher.
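The calibration step can be sanity-checked in a few lines. The choice of which estimates count as 2025 spend is an author assumption (this sketch takes the 4.x models trained during 2025 at their midpoints), and the 3–5× program multiplier is the one stated above:

```python
# Rough consistency check (Tier E method): estimated 2025 final-run costs,
# grossed up by the stated 3-5x program multiplier, should bracket
# The Information's $4.1B aggregate. Model selection is an assumption.
final_runs_2025_musd = {
    "Opus 4/4.1": 300, "Sonnet 4/4.5": 100, "Haiku 4.5": 30,
    "Opus 4.5": 400, "Sonnet 4.6": 150,
}
total_final = sum(final_runs_2025_musd.values())          # $M, final runs only
program_low = total_final * 3                              # $M, low multiplier
program_high = total_final * 5                             # $M, high multiplier
print(f"Final runs ${total_final}M; program range "
      f"${program_low/1e3:.1f}B to ${program_high/1e3:.1f}B vs $4.1B leaked")
```

The range comes out at roughly $2.9B to $4.9B, which brackets the leaked $4.1B aggregate, so the per-model midpoints are at least internally consistent.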

Hardware unit economics

Chip | Approx unit cost | Cloud rental | Power/chip | Tier
NVIDIA H100 SXM | $25–30K | $2.00–2.50/hr | ~700W | D
NVIDIA H200 | $30–40K | $3.00–3.50/hr | ~700W | D
NVIDIA B200 (Blackwell) | $35–40K | $4.50–6.00/hr | ~1,000W | D
AWS Trainium2 | $8–12K | ~$1.20–1.60/hr (bundled) | ~500W | D
AWS Trainium3 (re:Invent 2025) | $10–15K | n/a (Anthropic captive) | ~600–700W | A spec, D price
Google TPU v5p | n/a (not sold) | $2.00–4.00/hr (Vertex) | ~500W | D
Google TPU v7 "Ironwood" | n/a | n/a (Anthropic captive) | ~600W | A spec, E draw

Worked example: Claude Opus 4.6 final training run (illustrative)

Configuration: ~250,000 Trainium2 chips + ~50,000 TPU v5p, 90 days.
IT load: 250,000 × 500W (Tr2) + 50,000 × 500W (TPU) = 150 MW.
Facility load at PUE 1.15 (AWS Indiana Rainier, closed-loop liquid + outside air): 172.5 MW.
Energy: 172.5 MW × 2,160 hr = 372.6 GWh.
Energy cost: 372.6 GWh × $0.06/kWh (Indiana industrial) = $22.4M.
Pure cooling component (PUE 1.15 implies ~15% overhead on IT load): ~$3M.
Hardware amortization allocated to this run (~10% of chip CapEx): ~$300–400M.
Total final-run cost (energy + cooling + hardware allocation + networking + staff): ~$400–600M. Matches the leaked aggregate to within model-allocation tolerance.
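The arithmetic above generalizes cleanly; a minimal sketch, using only the illustrative assumptions stated in the worked example:

```python
# The worked-example energy math as a reusable function. All inputs are
# the illustrative assumptions above, not disclosed Anthropic figures.
def training_run_energy(chips_w, days, pue, usd_per_kwh):
    """chips_w: list of (chip_count, watts_per_chip) tuples."""
    it_mw = sum(n * w for n, w in chips_w) / 1e6    # IT load, MW
    facility_mw = it_mw * pue                        # incl. cooling/overhead
    energy_gwh = facility_mw * days * 24 / 1e3       # facility energy, GWh
    cost_musd = energy_gwh * 1e6 * usd_per_kwh / 1e6 # energy cost, $M
    return it_mw, facility_mw, energy_gwh, cost_musd

it, fac, gwh, cost = training_run_energy(
    chips_w=[(250_000, 500), (50_000, 500)],  # Trainium2 + TPU v5p, 500W each
    days=90, pue=1.15, usd_per_kwh=0.06)      # Rainier PUE, Indiana rate
print(f"IT {it:.0f} MW, facility {fac:.1f} MW, "
      f"{gwh:.1f} GWh, ${cost:.1f}M energy")
# -> IT 150 MW, facility 172.5 MW, 372.6 GWh, $22.4M energy
```

Note how small the energy line is relative to the $300–400M hardware allocation: at these assumptions, chip amortization, not electricity, dominates final-run cost.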
SECTION 04

Inference and serving costs

The most important financial story inside Anthropic during 2025 was not training cost. It was inference. Frontier-model economics increasingly resemble cloud infrastructure economics: once a model is trained successfully, the real battle shifts toward serving millions of high-frequency requests at acceptable latency without destroying margin structure. Internal reporting suggests Anthropic underestimated just how aggressively enterprise customers would use long-context reasoning, coding agents, and autonomous workflows once those tools became production reliable. Per-token serving costs improved materially as Trainium2 and TPU deployments expanded, yet aggregate demand expanded even faster. In practical terms, the company became a victim of its own product-market fit. Revenue accelerated sharply, but so did compute burn.

Inference cost economics, 2025

Metric | Value | Tier
Inference COGS as % of paid-customer revenue | ~50–55% | B
Inference cost overrun vs. internal plan | +23% | B
Gross margin (paid only) | 40% | B
Gross margin (incl. free-tier compute) | ~38% | B
Per-token inference cost change (Sonnet 3.5 → Sonnet 4.6, generation-over-generation) | −~90% | D
Total 2025 inference compute spend (estimated) | ~$2.5–3.0B | E
2026E inference compute spend | ~$10–13B | E
Forecast gross margin 2027 (internal) | ~70% | B
Forecast gross margin 2028 (internal) | ~77% | B
Gross margin trajectory: actuals and internal forecast
Blended gross margin including free-tier inference overhead
2024A: −94% · 2025A: 40% · 2026E: ~50% · 2027E: ~70% · 2028E: ~77%
Source: B The Information leaked internal financials (Oct/Nov 2025); WSJ leaked confidential financials. Solid line: actual. Dashed line: internal forecast.
SECTION 05

Operating expenses and headcount

Estimated 2025 operating expense breakdown

The cost structure increasingly resembles a hybrid between a hyperscaler, a semiconductor customer, and an enterprise software company. Traditional SaaS businesses scale primarily through sales efficiency and distribution leverage. Anthropic scales through power delivery, networking throughput, inference orchestration, and access to frontier silicon. That distinction matters because it changes the shape of profitability. In the near term, compute dominates almost every other operating line item combined.

2025 OpEx composition (estimated)
~$10B total OpEx, against $4.5B revenue
  • Compute: 70% (~$7B)
  • R&D personnel: 12% (~$1.2B)
  • Sales & Marketing: 5% (~$500M)
  • G&A: 6% (~$600M)
  • Other / data / tools: 7% (~$700M)
E Compute and training spend: B from leaked totals. R&D / G&A apportioned via headcount × comp. S&M derived from observed spend (Super Bowl, conferences, GTM scaling).

Headcount over time C

Dec 2022 ~190
Dec 2023 ~400
Dec 2024 ~1,100
Dec 2025 ~2,300
Apr 2026 ~3,000
Open roles (Apr 2026) ~450

Compensation

  • Software Engineer total comp: $300K–$490K+ (median ~$336K)
  • Senior/Staff: > $490K, heavy equity weighting
  • Glassdoor approval (Dario Amodei): 93%; 95% recommend
  • Revenue per employee (Apr 26 ARR basis): $30B / 3,000 = ~$10M — historically unprecedented (Stripe peaked at $1.4M)

Departmental mix (estimated)

  • Research / safety: ~35%
  • Engineering / product: ~30%
  • GTM / sales / SE: ~15%
  • Policy / comms / community: ~10%
  • Finance / legal / HR / ops: ~10%
SECTION 06

Infrastructure stack and compute backlog

As of May 7, 2026, Anthropic has stacked over $330B in multi-year compute commitments across six counterparties spanning four chip families (AWS Trainium, Google TPU, NVIDIA GPU, Broadcom-designed custom silicon). The latest addition, announced May 6, 2026, is a deal to take all of SpaceX's Colossus 1 capacity: 300+ MW and 220,000+ NVIDIA GPUs coming online in May 2026, dedicated to inference for Claude Pro and Max subscribers. The same announcement floats an exploratory multi-GW orbital AI compute partnership with SpaceX.

Multi-year compute commitments by counterparty
USD billions, contracted spend (Anthropic to partner)
  • Google Cloud: $200B / 5y
  • AWS: $100B / 10y
  • Fluidstack (US infra): $50B
  • Microsoft Azure: $30B
  • NVIDIA: $10B equity
  • SpaceX (Colossus 1): undisclosed (300 MW, 220K GPUs)
All deal values: A (Anthropic / partner press releases). Aggregate > $330B excluding undisclosed SpaceX/Colossus value and any equity-only NVIDIA component.

Capacity by partner — gigawatt commitments

Partner | Capacity | Term / online | Hardware | Notes
AWS | Up to 5 GW | 10 years; ~1 GW new by end-2026 | Trainium2 → Trainium3 → Trainium4 | Project Rainier, Indiana, $11B AWS site investment, PUE 1.15
Google + Broadcom | 5 GW | From 2027 | TPU v5p, v7 Ironwood, custom Broadcom-designed TPUs | $200B/5y total spend; up to 1M TPU chips
Microsoft Azure + NVIDIA | Multi-GW | Multi-year, Nov 2025 | NVIDIA H100 / H200 / B200 via Azure | $30B Azure compute + $5B MS equity + $10B NVIDIA equity
SpaceX (Colossus 1) | 0.3 GW | Online May 2026 | ~220,000 NVIDIA GPUs | Inference-dedicated for Pro and Max; orbital multi-GW under exploration
Fluidstack | Multi-GW (subset of $50B US infra commit) | Multi-year | Mixed | $50B Anthropic investment in American AI infrastructure (Nov 2025)
Why the multi-silicon strategy matters. Anthropic operates the most diversified frontier-AI compute stack of any lab. AWS Trainium gives a unit-cost edge (Trainium2 estimated 30–40% cheaper per FLOP than H100; Trainium3 claims 5× tokens per MW). Google TPU gives a price/performance edge for prefill-heavy workloads. NVIDIA gives latency-sensitive inference and ecosystem familiarity. SpaceX's Colossus 1 gives capacity on a one-month timeline. Internal forecast: $2.10 of revenue per dollar of compute by 2028, vs ~$1.60 for OpenAI's NVIDIA-heavy stack.

Project Rainier — anchor training site

Location New Carlisle, St. Joseph County, Indiana
AWS investment $11B (largest in Indiana history)
Footprint 1,200 acres; 30 buildings planned (~200K sq ft each); 7 online by mid-2025
Power Up to 2.2 GW (≈1.6M homes-equivalent), via Indiana Michigan Power / AEP
Chip count ~500K Trainium2 (late 2025) → > 1M Trainium2 (April 2026)
Cooling Closed-loop direct-to-chip liquid + outside air; PUE ~1.15; WUE 0.15 L/kWh
Compute uplift > 5× previous Anthropic training compute
Local impact 1,000+ jobs; $7M highway, $114M utility, $100M community fund; $722M projected tax over 35y
SECTION 07

Unit economics by tier

The numbers below are illustrative ranges built on stated assumptions, not point estimates. CAC for a private AI company is not directly observable; it is inferred from S&M as % of revenue, paid acquisition channel mix, and field-sales productivity benchmarks. LTV uses 24–48-month retention assumptions calibrated to enterprise SaaS norms.

Pricing tier | ARPU/yr | Gross margin | Est. CAC | Est. LTV | LTV/CAC | Conf. tier
Pro consumer | $240 | 25–35% | $20–40 | $300–500 | ~10–15× | E
Max 5× consumer | $1,200 | 40–45% | $50–100 | $2,000–3,000 | ~25–30× | E
Max 20× consumer | $2,400 | ~50% | $80–150 | $4,000–6,000 | ~30–40× | E
Team (per seat) | $360–420 | 45–55% | $100–200 | $1,200–2,500 | ~10–15× | E
Enterprise (per seat, blended) | ~$1,000–1,500 | 55–65% | $300–800 | $4,000–8,000 | ~10–15× | E
API per $1M-ARR enterprise | $1M | 40% (2025) → 50% (2026) | $30–80K | $3–5M | ~50–100× | E
Claude Code (avg dev) | $1,800–3,000 | 35–45% | $50–150 | $4,000–8,000 | ~30–50× | E
Caveats on these ranges. Margins on consumer tiers are dragged down meaningfully by free-tier inference (~95% of MAU don't pay; their compute is real cost). Max-tier margins are noticeably higher because heavy users self-select into a price point that better matches their compute draw. Enterprise margins are highest because of prepaid commits and prompt-cache utilization (the 90%-off cache discount mostly accrues to repeat-prompt enterprise patterns). LTV/CAC is healthy across all tiers but the enterprise API tier is the disproportionate value driver — which matches Anthropic's stated focus on B2B revenue.
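The LTV/CAC construction behind these ranges can be sketched in a few lines. The lifetime assumption (here ~5 years for Pro) is this report's input, not an observed retention figure:

```python
# Minimal LTV/CAC sketch: gross-profit LTV over an assumed customer
# lifetime, divided by an assumed CAC. Inputs are Tier E assumptions.
def ltv_cac(arpu_yr, gross_margin, lifetime_yr, cac):
    ltv = arpu_yr * gross_margin * lifetime_yr  # gross profit over lifetime
    return ltv, ltv / cac

# Pro consumer: $240/yr ARPU, ~30% margin, ~5-yr assumed life, ~$30 CAC
ltv, ratio = ltv_cac(arpu_yr=240, gross_margin=0.30, lifetime_yr=5, cac=30)
print(f"Pro consumer LTV ${ltv:.0f}, LTV/CAC {ratio:.0f}x")
# -> lands inside the $300-500 LTV and ~10-15x bands in the table
```

Swapping in the Max or enterprise inputs reproduces the other rows; the ratio is most sensitive to the lifetime assumption, which is the least observable input.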
SECTION 08

Capital stack, valuation, and investors

Valuation step history
Post-money valuation, USD billions, by funding round
  • Series C (May 2023): $4.1B
  • Series D (Dec 2023): $18.4B
  • Series E (Mar 2025): $61.5B
  • Series F (Sep 2025): $183B
  • MS + NVDA strategic (Nov 2025): $350B
  • Series G (Feb 2026): $380B
  • Pending (May 2026, rumored): ~$850B
Sources: Anthropic press releases A, Sacra D, TechCrunch reporting on pending round C. Pending $850B valuation is reported but unconfirmed.

Capital raised by round

Round / event | Date | Raised | Post-money | Lead investors
Seed–Series B | 2021–22 | ~$700M | < $5B | Jaan Tallinn, others
Google strategic | Feb 2023 | $300M | ~$5B | Google
Series C | May 2023 | $450M | $4.1B | Spark Capital, Google
Amazon initial | Sep 2023 | $1.25B | n/a | Amazon
Series D / extensions | Late 2023 | $2.3B+ | $18.4B | Google +$2B; Amazon +$2.75B (Mar 2024)
Amazon top-up | Nov 2024 | $4B | n/a | Amazon (cumulative ~$8B)
Series E | Mar 2025 | $3.5B | $61.5B | Lightspeed, Bessemer
Series F | Sep 2025 | $13B | $183B | ICONIQ, Fidelity, Lightspeed, Coatue, GIC, BlackRock, Blackstone, QIA
MS + NVDA strategic | Nov 2025 | $15B (MS $5B + NVDA $10B) | $350B | Microsoft, NVIDIA + $30B Azure capacity
Series G | Feb 2026 | $30B | $380B | GIC, Coatue (lead); D.E. Shaw, Dragoneer, Founders Fund, ICONIQ, MGX
Amazon Apr 2026 | Apr 20, 2026 | $5B + up to $20B milestones | n/a | Amazon (cumulative up to $33B)
Google Apr 2026 | Apr 25, 2026 | Up to $40B aggregate | n/a | Alphabet
Pending round (rumored) | May 2026 | $40–50B target | $850–900B | TBD; reported by TechCrunch

Major investors and strategic partners

Investor | Approx. cumulative $ | Strategic element
Amazon | $8B (now); up to $33B | $100B / 10y AWS commit; Trainium co-design; Project Rainier exclusivity
Alphabet / Google | ~$3B (now); up to $40B | $200B / 5y Google Cloud commit; TPU access; Vertex AI distribution
Microsoft | $5B (Nov 2025) | $30B Azure compute commit; Microsoft Foundry distribution
NVIDIA | $10B (Nov 2025) | GPU supply preference
GIC, Coatue | Lead Series G | Sovereign + crossover
ICONIQ | Lead Series F | Multi-stage
Lightspeed | Co-lead Series F | n/a
Fidelity, BlackRock, Blackstone, QIA, Sequoia, MGX, Founders Fund, D.E. Shaw, Dragoneer | Various | Diversified institutional / SWF

Cash and runway

Cash post-Series G (estimated) E
~$35–40B. Method: prior cash balance + $30B Series G + $15B Nov 2025 strategic raise − ~$10B 2025 burn.
Annual cash burn 2025 B
~$5–7B
Annual cash burn 2026E C
~$5–10B (per TechCrunch / The Information)
Runway at current burn
5–7 years pre-pending round; 10+ years if pending $40–50B raise closes.
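The runway figure is simple division over the ranges stated above; a quick sketch using those estimated inputs:

```python
# Runway arithmetic behind the pre-pending-round figure, using the
# estimated cash and burn ranges above (Tier E / B inputs).
cash_low, cash_high = 35.0, 40.0   # $B, estimated post-Series G cash
burn_low, burn_high = 5.0, 7.0     # $B/yr, 2025 burn range

runway_min = cash_low / burn_high   # worst case: low cash, high burn
runway_max = cash_high / burn_low   # best case: high cash, low burn
print(f"Runway: {runway_min:.0f} to {runway_max:.0f} years at current burn")
```

The worst-case/best-case spread comes out at roughly 5 to 8 years, bracketing the 5–7 year figure quoted above; the 10+ year case assumes the pending $40–50B raise closes and burn stays flat.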
Outstanding credit facility C
$2.5B
Authors copyright settlement (accrued) A
$1.5B (Bartz et al. v. Anthropic, settled Aug 2025 — largest US copyright settlement on record)
SECTION 09

Forward projections and risk register

Internal projections (leaked to The Information, WSJ, Reuters)

Year | Revenue | Gross margin | FCF / EBITDA | Tier
2024A | ~$700–900M | −94% | ~−$2.7B | B
2025A | $4.5B | 40% | −$5.2B EBITDA | B
2026E | $20–26B | ~50% | ~−$3 to −$5B | B
2027E | ~$40B | ~70% | Approaching breakeven | B
2028E | $55–70B | ~77% | +$17B free cash flow | B

Risk register

1. Inference unit economics under agentic load
Claude Code and Cowork agents drive 5–50× the token consumption of chat. The 2028 ~77% gross margin forecast requires per-token inference cost to fall faster than agentic workloads scale up token consumption. The 23% inference overrun in 2025 is the early signal that this is hard.
2. Compute concentration
$200B Google + $100B AWS + $30B Microsoft means Anthropic is exposed to step changes in cloud pricing or supply disruption. The multi-silicon strategy (Trainium / TPU / NVIDIA / Broadcom) is the explicit hedge.
3. Pentagon supply-chain risk designation (March 2026)
Anthropic told a federal judge that > 100 enterprise customers had raised concerns. Counterfactual revenue impact is not quantified publicly. Material if a fraction of FedRAMP-bound customers shift to OpenAI / Google.
4. Gross-vs-net headline ARR
If Anthropic is forced to restate ARR on a net basis (consistent with OpenAI's reporting), reported numbers could fall 25–30%. This is a presentation issue, not an economic one, but it would compress headline-multiple comparables.
5. Frontier model release cadence vs cost
Claude 4.7 trained on Trainium3 + Ironwood TPU is estimated at $500–800M final-run. If frontier-cost compounds at 50–80% per model generation while revenue compounds at 80–100% per year, the gap closes — but slowly. Failure of a generation to deliver capability gains relative to cost would compress the forecast.
6. Pending $40–50B round at $850–900B
If the round prices below rumored levels (or doesn't close), the 28× FY2026E ARR multiple at $850B comes under scrutiny. The Series G at $380B is 16× FY2026E ARR — historically reasonable for hyper-growth software, but priced for execution.
7. Consumer electricity-price commitment
Anthropic publicly committed to cover any consumer electricity price increases caused by its US data centers, with stated intent to extend internationally. The commitment is an unbounded contingent liability, though it also functions as a reputational hedge against local backlash over data-center power demand. Magnitude is unquantified.
SECTION 10

Summary unit-economics dashboard [FY2026E]

KPI | Value | Method
Blended ARPU per business customer | ~$67–80K | $20–24B enterprise rev ÷ 300K accounts
Blended ARPU per >$1M customer | ~$2–3M | 1,000 accounts ≈ 50–60% of API rev
Blended consumer ARPU | ~$300–400/yr | Pro/Max mix; <10% on Max
API gross margin (2025 → 2026) | 40% → 50% | Leaked internal forecast
Subscription gross margin (Pro/Max) | 30–45% | Heavy free-tier overhead drag
Enterprise gross margin | 55–65% | Prepaid commits, prompt-cache utilization
Compute as % of revenue (2025) | ~155% | $7B compute / $4.5B rev (compute spend exceeds revenue)
Compute as % of revenue (2028E target) | ~48% | Internal: $2.10 rev per $1 compute
Revenue per employee (Apr 26 ARR) | ~$10M | $30B ÷ 3,000; historically unprecedented
Multiple of LTM revenue (Series G) | ~85× | $380B ÷ $4.5B 2025
Multiple of FY2026E ARR (Series G) | ~16× | $380B ÷ $23B mid 2026E
Multiple at pending $850B | ~28× FY2026E ARR; ~12× FY2028E rev | Rumored, not closed
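The valuation multiples in the dashboard can be cross-checked from the reported inputs; a quick sketch, taking the $23B midpoint of the leaked 2026 range:

```python
# Cross-check of the Series G multiples using reported/leaked inputs.
val_series_g = 380.0    # $B post-money, Feb 2026
rev_2025 = 4.5          # $B, 2025 full-year revenue
arr_2026e_mid = 23.0    # $B, midpoint of the leaked $20-26B 2026 range

ltm_multiple = val_series_g / rev_2025        # trailing-revenue multiple
fwd_multiple = val_series_g / arr_2026e_mid   # forward-ARR multiple
print(f"LTM revenue multiple ~{ltm_multiple:.0f}x, "
      f"FY2026E ARR multiple ~{fwd_multiple:.1f}x")
```

Both land where the dashboard says (~84–85× trailing, ~16× forward), which makes explicit how much of the valuation rests on the forward number rather than realized revenue.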
SECTION 11

Sources

Primary documents (Tier A): Anthropic press releases (anthropic.com/news), partner press releases (Amazon, Google, Microsoft, SpaceX, Broadcom), court filings (Bartz et al. v. Anthropic), API pricing (claude.com/pricing).

Primary leaked financial documents (Tier B): WSJ confidential financials (Nov 2025); The Information leaked memo and $30B Series G coverage (Oct 2025); Reuters internal-projection reporting.

Tier-1 reporting (Tier C): Bloomberg; Yahoo Finance; CNBC; TechCrunch; Reuters; Financial Times; Datacenter Dynamics; Data Center Knowledge.

Industry research (Tier D): Sacra; Epoch AI; SemiAnalysis; SaaStr; iTiger; PYMNTS; Stanford AI Index. User-traffic data: Similarweb; Backlinko; AICPB.

APPENDIX

Selected financial and infrastructure terms

The following shorthand terms appear repeatedly throughout this report. Definitions are intentionally concise and written in the same operational framing used by institutional research notes and internal strategy memos.

Term Definition
ARR Annual Recurring Revenue - current monthly recurring revenue annualized as a forward-looking run-rate.
GAAP revenue Revenue recognized under Generally Accepted Accounting Principles over a reporting period.
Gross margin Revenue remaining after direct serving and infrastructure costs, expressed as a percentage of revenue.
EBITDA Earnings before interest, taxes, depreciation, and amortization; a proxy for operating profitability before financing and accounting adjustments.
Inference The live serving phase where trained models generate outputs for users and enterprise workloads.
Training run A large-scale compute cycle used to train or materially update a frontier model checkpoint.
COGS Cost of goods sold; direct operational expense required to deliver model output and infrastructure capacity.
PUE Power Usage Effectiveness; a datacenter efficiency metric comparing total facility power to IT equipment power.
CapEx Capital expenditure allocated toward long-lived infrastructure such as GPUs, networking, power systems, and datacenter buildouts.
Token The atomic text unit processed by language models during training and inference billing.
Context window The maximum amount of text or multimodal information a model can process within a single interaction.
MAU Monthly Active Users; the number of distinct users engaging with a product during a 30-day period.
LTV Lifetime Value; the estimated gross profit generated by a customer over the duration of the relationship.
OpEx Operating Expenses; recurring costs required to run the business excluding capital expenditures.
Run-rate A forward annualized estimate derived from the most recent observed monthly or quarterly operating level.