Investing FAQ

The list below pulls from posts in the Investing category, newest first. Each answer reads as a citable claim and links back to the source post for the numbers, the worked example, or the contradicting view.

The frame is valuation, not prediction: Mauboussin’s intrinsic-value framework, applied where price and assumptions diverge enough to be actionable. That covers capital-allocation case studies (Buffett’s retirement), market structure under passive concentration (the active problem in passive investing), and microstructure questions where information asymmetry shapes returns more than narrative does (Kalshi adverse selection, the dot-com parallels in Burry’s $379 newsletter).

A few questions show up repeatedly: when broad-market exposure quietly becomes a directional AI bet, why software multiples inverted with semiconductors during the 2026 SaaSpocalypse, and what a thesis-driven 2026 allocation actually weights.

Answers skip the language of “high-conviction picks” and the confident direction calls that crowd most investing FAQs. The default answer to “what should I buy” is “what assumptions does the price already make, and at what discount to those assumptions can you buy.” The questions below are named instances of that pattern.

20 most recent of 101 questions from 19 posts

How did the 2026 portfolio perform at midyear?

Through May 8, the portfolio returned +3.7% in CHF on a time-weighted basis, in line with a fairly constructed global 60/40 benchmark (+3.8%) but trailing the S&P 500 total return in CHF (+5.8%). The 2025 USD-weakness tailwind that drove last year's outperformance has nearly evaporated: USDCHF is down only 1.5% YTD versus −11.5% across all of 2025.

From: Midyear Portfolio Review: The Rotation Worked. Europe Didn't.

Which 2026 thesis calls worked and which didn't?

Three of four rotation calls paid off: Emerging Markets +19.0%, US Small Cap +12.5%, Japan +11.7% in CHF terms. The fourth, the overweight in Europe, returned only +3.3%, trailing US large-cap by 3.4 percentage points and trailing every other rotation alternative by 8 to 16 points.

From: Midyear Portfolio Review: The Rotation Worked. Europe Didn't.

Is the S&P 500 overvalued in May 2026?

By the Shiller CAPE measure, yes, and more extremely than at the end of 2025. The CAPE rose from 39.8 in December to 42.0 in May 2026, the highest reading since the December 1999 dot-com peak of 44.2. That puts US large-cap at 2.4× the long-run mean of 17.4, and the European trailing P/E discount versus the US has widened from 22% (forward, December) to 34% (trailing, May).
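As a quick sanity check on the multiple, using only the figures quoted above (plain arithmetic, no data fetching):

```python
# Figures from the May 2026 midyear review.
cape_may_2026 = 42.0   # Shiller CAPE, May 2026
cape_dec_2025 = 39.8   # Shiller CAPE, December 2025
long_run_mean = 17.4   # long-run CAPE mean

# US large-cap versus its own history: about 2.4x the long-run mean.
multiple_of_mean = cape_may_2026 / long_run_mean  # ≈ 2.41
```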

From: Midyear Portfolio Review: The Rotation Worked. Europe Didn't.

Is the dollar still expected to depreciate in 2026?

Most major research houses (Goldman, UBS, J.P. Morgan, Pictet, ABN AMRO, MUFG) kept their dollar-weakness calls but pushed the bulk of the move into H2 2026. So far the dollar is down only 1.5% against CHF, a stall rather than a directional break. The IMF's April Fiscal Monitor flagged compressing US Treasury safety premium, supporting continued weakness, but the market has not acted yet.

From: Midyear Portfolio Review: The Rotation Worked. Europe Didn't.

What are the two Anthropics?

Two Anthropics is shorthand for the structural tension between the company Anthropic was founded to be in 2021 (an AI safety lab competing on safety to pull rivals upward) and the company it became at a $380 billion valuation, with $10 billion in annualized revenue, around 2,500 employees, and roughly $78 billion in compute commitments through 2028. At founding, the safety lab and the frontier lab were two different things; at scale, they are the same organism. The post argues this collapses the original premise: at frontier scale, the race-to-the-top framing stops being a thesis and becomes a marketing claim.

From: Two Anthropics

What does Anthropic mean by race to the top?

Race to the top is the public-facing strategic claim that competing on safety would pull rivals upward. The argument: a lab that genuinely cares about safety has to be commercially competitive at the frontier, otherwise the frontier is set by labs that care less. Being at the frontier lets you publish safety practices, hire the best alignment researchers, and shape policy with credibility. Rivals see your practices working and copy them. The whole industry shifts. The thesis is laid out across Dario Amodei's essays from 2024 onward, anchored most explicitly in The Adolescence of Technology (January 2026, around 22,000 words).

From: Two Anthropics

Why did Dario Amodei turn down the OpenAI CEO offer in 2023?

After Sam Altman's brief firing and reinstatement at OpenAI in November 2023, the OpenAI board approached Amodei with two offers: take the CEO job, or merge Anthropic into OpenAI. He declined both. Walking away from the CEO chair at the most valuable AI company in the world, less than three years after leaving it, was the most expensive credibility signal he could send that the safety thesis was the actual thesis and not a brand exercise. Roughly fourteen senior OpenAI researchers had followed him out two years earlier; the November 2023 refusal told them they had not made a mistake.

From: Two Anthropics

What was the March 2026 DoD ruling about?

On March 26, 2026, a federal judge issued a temporary injunction against the Department of Defense in a dispute that started when Pete Hegseth's department asked Anthropic to drop the contractual ban on Claude being used for mass domestic surveillance or fully autonomous weapons in democratic countries. Anthropic refused. The DoD then labeled the company a supply-chain risk. The judge's written opinion described the DoD's actions as classic First Amendment retaliation, language that belongs to the court rather than to Anthropic. The ruling shows that at frontier scale, a safety constraint becomes a federal court fight, not a research-policy choice.

From: Two Anthropics

What are the three scenarios for how the paradox resolves?

Scenario A: the thesis holds, frontier labs converge on Anthropic-style safety practices, and the company earns a durable safety-narrative premium, conditional on the EU AI Act being enforced with teeth and on a US transparency framework. Scenario B (most likely): the thesis becomes a constraint, not a moat, as Anthropic loses ground on raw frontier capability to less-constrained competitors like xAI, a more permissive next-generation OpenAI, or leading Chinese labs. Scenario C: the paradox dissolves because the scale itself ends, AI capex hits a Jevons-paradox-for-labor wall, and Anthropic returns to looking like a research lab because every lab does.

From: Two Anthropics

What does this mean for how investors should price AI labs?

Three observations. First, at frontier scale safety narrative is not a moat, it is a constraint, and the safety premium investors paid in 2021-2023 should compress because the counterfactual that justified it (no safety-aligned frontier lab) no longer exists. Second, the signal to watch is whether the rate of frontier-capability spread is faster than the rate of safety-practice diffusion, the ratio that decides whether race-to-the-top is happening at all. Third, Anthropic-the-company and Anthropic-the-thesis are now two different things; an investor can be long the company and short the thesis.

From: Two Anthropics

Is Anthropic's safety-first approach a moat or a constraint?

At founding (2021-2023), Anthropic's safety-first approach worked as a moat: it was the only safety-aligned frontier lab, which justified a premium relative to the counterfactual where no such lab existed. At frontier scale in 2026, the dynamic inverts. Safety becomes a self-imposed handicap relative to less-constrained competitors like xAI, a more permissive next-generation OpenAI, or leading Chinese labs. The DoD March 2026 ruling, the Pottinger chip-controls op-ed, and the August 2025 Nvidia feud are early evidence. The post argues the safety stance is now a constraint, not a moat, and the 2021-2023 safety premium should compress.

From: Two Anthropics

How fast are AI inference costs declining?

At a median rate of 50x per year for equivalent performance, according to Epoch AI. GPT-4-level performance on PhD-level science questions cost $30 per million input tokens in early 2023 and under $0.10 through open-source alternatives today, roughly a 300-fold reduction in three years.
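In round numbers, the worked example compounds as follows (a sketch; the 50x-per-year figure is Epoch's median across tasks, while the single GPT-4-level example implies a slower per-year rate over its three-year window):

```python
start_cost = 30.00  # $/M input tokens, GPT-4-level, early 2023
end_cost = 0.10     # $/M input tokens, open-source alternatives today
years = 3

cumulative = start_cost / end_cost       # ≈ 300-fold reduction
annualized = cumulative ** (1 / years)   # ≈ 6.7x per year for this one task
```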

From: AI Models Are the New Rebar

Are open-source AI models as good as proprietary ones?

Nearly. The Stanford HAI 2025 AI Index found the gap shrank from 8 percent to 1.7 percent in a single year. Qwen 3.5-35B matches Claude Sonnet 4.5 on select benchmarks at roughly 3 percent of the cost, and GLM-5 achieves the highest Chatbot Arena Elo of any open-source model.

From: AI Models Are the New Rebar

Is OpenAI's $840 billion valuation justified?

At 42x trailing revenue, the valuation requires revenue growth to $200-280 billion by 2030 while expanding margins. But adjusted gross margins fell from 40 to 33 percent in 2025 as inference costs quadrupled, and the company lost $13.5 billion in the first half of 2025 alone.
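The implied figures can be made explicit. A back-of-envelope sketch using only the numbers above; the five-year compounding window to 2030 is an assumption for illustration, not a figure from the post:

```python
valuation = 840e9        # $840B
trailing_multiple = 42   # 42x trailing revenue
trailing_revenue = valuation / trailing_multiple  # ≈ $20B

# Growth required to reach the $200-280B 2030 range from that base,
# assuming five years of compounding (assumed window, see above):
years = 5
cagr_low = (200e9 / trailing_revenue) ** (1 / years) - 1   # ≈ 58% per year
cagr_high = (280e9 / trailing_revenue) ** (1 / years) - 1  # ≈ 70% per year
```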

From: AI Models Are the New Rebar

Who benefits from AI model commoditization?

Infrastructure providers like Nvidia and cloud platforms collect rent regardless of which model runs. Application-layer companies embedding AI into domain-specific workflows with proprietary data also benefit. Platforms with massive distribution like Meta and Google deliberately accelerate commoditization to prevent anyone from owning the model layer.

From: AI Models Are the New Rebar

What are the switching costs for AI models?

Near zero. The OpenAI API format is the de facto standard supported by virtually every provider. LiteLLM, an open-source gateway with 37,000 GitHub stars, provides a unified interface to over 100 providers through a single configuration change. OpenRouter offers managed access to more than 400 models. The only meaningful lock-in is custom fine-tuned models, which affect a small fraction of deployments.
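The near-zero switching cost follows from the shared request format. A schematic sketch (the helper and model names below are illustrative, not LiteLLM's or any provider's actual API): swapping models is a one-string change because the OpenAI-style payload stays identical.

```python
def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload.

    Only the `model` string varies across providers; gateways like
    LiteLLM and OpenRouter route on that field alone.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Switching providers is a configuration change, not a code change:
req_a = build_request("provider-a/model-x", "Summarize this filing.")
req_b = build_request("provider-b/model-y", "Summarize this filing.")
assert req_a["messages"] == req_b["messages"]  # payload otherwise identical
```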

From: AI Models Are the New Rebar

Can OpenAI and Anthropic become profitable?

Both face significant challenges. OpenAI lost $13.5 billion in the first half of 2025, with compute and talent consuming 75 percent of revenue and Microsoft taking another 20 percent through 2032. Anthropic at a $380 billion valuation on $14 billion in run-rate revenue projects positive cash flow around 2027-2028. Both are betting they can simultaneously grow revenue and expand margins in a market where open-source alternatives offer comparable performance at 3-15 percent of the cost.

From: AI Models Are the New Rebar

How much is Big Tech spending on AI in 2026?

The Big 4 (Amazon, Alphabet, Meta, and Microsoft) are collectively guiding to $610–665 billion in 2026 capital expenditure, up from approximately $384 billion in 2025. Including Oracle, the figure reaches $660–690 billion. Goldman Sachs projects cumulative 2025–2027 spending at $1.15 trillion, more than double the $477 billion spent over the prior three years combined.

From: AI Capex Arms Race: Who Blinks First?

What is happening to Big Tech free cash flow?

It is compressing sharply. Alphabet's free cash flow held at $73 billion in 2025 despite capex nearly doubling, because operating cash flow grew 31.5%. But with 2026 capex guided at $175–185 billion, Pivotal Research projects FCF falling approximately 90% to $8.2 billion. Amazon's FCF is already at $11.2 billion TTM. BofA credit strategists found that AI capex will consume 94% of operating cash flow net of dividends and share repurchases for the Big 4 in 2025–2026.

From: AI Capex Arms Race: Who Blinks First?

What is the AI capex to revenue ratio?

Rough estimates place direct AI revenue at $40–60 billion in 2025 against AI-specific capex of $290–330 billion (roughly 75% of total capex per CreditSights), yielding a coverage ratio of approximately 0.12–0.20x. Sequoia's David Cahn calculated that the AI ecosystem needs to generate $600 billion in annual revenue to justify current infrastructure spending. By 2026, with perhaps $80–120 billion in AI revenue against $450 billion in AI capex, the ratio may reach 0.18–0.27x, still far below 1x.
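The ranges above reduce to simple division. A sketch reproducing them, pairing low revenue with high capex for the worst case and the reverse for the best case:

```python
def coverage(rev_low, rev_high, capex_low, capex_high):
    """Coverage-ratio bounds in $B: (worst case, best case)."""
    return rev_low / capex_high, rev_high / capex_low

# 2025: $40-60B AI revenue vs $290-330B AI-specific capex.
lo_2025, hi_2025 = coverage(40, 60, 290, 330)   # ≈ 0.12x to 0.21x
# 2026 scenario: $80-120B revenue vs ~$450B AI capex.
lo_2026, hi_2026 = coverage(80, 120, 450, 450)  # ≈ 0.18x to 0.27x
```

Both ranges stay far below the 1x breakeven the post uses as its reference point.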

From: AI Capex Arms Race: Who Blinks First?

Browse all Investing articles →