A Variance Decomposition for Stock Returns, Explained
A variance decomposition for stock returns
What you will learn: this article explains what a variance decomposition for stock returns is, why it matters for asset pricing and risk attribution, how John Campbell’s VAR-based method works, typical empirical findings, practical steps to implement it, common extensions and limitations, and brief guidance on applicability to cryptocurrencies. As of 2026-01-17, according to NBER and Economic Journal bibliographic listings, Campbell’s paper (1990/1991) remains the foundational reference for this methodology.
Introduction
In plain terms, a variance decomposition for stock returns is a model-based procedure that splits the unexpected (innovation) component of stock returns into pieces driven by (i) news about expected future cash flows (dividends or earnings), (ii) news about expected future returns (discount rates), and (iii) the covariance between these two news types. The approach is most widely used in empirical asset-pricing research on U.S. equities, and it provides a transparent way to ask: what moves prices?
Historical background and motivation
The question “what moves stock prices?” motivated a long literature in finance. The present-value identity shows stock prices equal the discounted expected stream of future payoffs (most simply, dividends). Therefore, any unanticipated change in price must reflect a revision either to expected future cash flows or to the discount rates applied to those cash flows. Campbell’s 1990/1991 VAR decomposition formalized this intuition into an empirical method that operationalizes the mapping from observed return surprises to forecast revisions.
Theoretical foundation
At the heart of a variance decomposition for stock returns is the present-value model. For a simple stock that pays dividends, the price at time t, P_t, satisfies:
P_t = E_t[ sum_{j=1}^infty beta^j D_{t+j} ]
where E_t denotes the expectation conditional on information at time t, beta is the one-period discount factor, and D_{t+j} are future dividends. Log-linearizing this identity lets researchers express the unexpected return as the difference between two forecast-revision terms:
- Cash-flow news — revisions to expected future dividends (or other fundamentals).
- Discount-rate (expected-return) news — revisions to expected future returns (the discount rates applied to cash flows), which enter with a negative sign because higher expected future returns lower today's price.
Intuitively, if dividends are expected to be higher, prices rise today because of improved cash-flow expectations. If expected returns fall (meaning investors demand a lower premium), prices also rise because future payoffs are discounted less harshly. The variance decomposition quantifies how much of realized return volatility arises from each type of news and their interaction.
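For reference, this log-linear identity can be stated compactly in Campbell's notation, with lowercase r and Δd denoting log returns and log dividend growth, and ρ a log-linearization constant slightly below one (the counterpart of the discount factor beta above):

```latex
r_{t+1} - \mathrm{E}_t r_{t+1}
  \;=\; (\mathrm{E}_{t+1} - \mathrm{E}_t) \sum_{j=0}^{\infty} \rho^{j}\, \Delta d_{t+1+j}
  \;-\; (\mathrm{E}_{t+1} - \mathrm{E}_t) \sum_{j=1}^{\infty} \rho^{j}\, r_{t+1+j}
```

The first term is cash-flow news and the second is expected-return (discount-rate) news; taking the variance of both sides yields the decomposition into two variance terms and a covariance term.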
Campbell’s VAR decomposition methodology
John Y. Campbell (NBER Working Paper 1990; Economic Journal 1991) provided the canonical empirical strategy used in a variance decomposition for stock returns. The method combines a vector autoregression (VAR) for a set of forecasting variables with impulse-response algebra to compute how a one-period innovation to returns decomposes into cash-flow and expected-return news.
Key conceptual steps
- Choose a set of persistent predictors and the return series (for example: the log dividend-price ratio, the log earnings-price ratio, and long–short portfolio returns).
- Estimate a VAR for the predictors and returns to capture joint dynamics.
- Compute the forecast revisions implied by a one-period return innovation using VAR impulse responses; the infinite present-value sums have closed-form expressions in terms of the VAR coefficients.
- Map those forecast revisions into two components — cash-flow news and expected-return news — and track their covariance contribution to return variance.
Because the decomposition uses forecast revisions, it relies on the VAR to summarize the conditional dynamics. The infinite-horizon nature of present-value sums is handled by summing response functions up to a horizon (or by using closed-form VAR expressions when stationarity assumptions hold).
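A minimal sketch of that algebra, assuming a VAR(1) (or a stacked companion form) for a demeaned state vector with the return ordered first; the function name and the default value of rho are illustrative choices, not part of any package:

```python
import numpy as np

def campbell_news(A, residuals, rho=0.9957):
    """Map VAR innovations into cash-flow and discount-rate news.

    A         : (k, k) VAR(1) coefficient (or companion) matrix, return first.
    residuals : (T, k) one-step-ahead VAR innovations.
    rho       : log-linearization discount coefficient; 0.9957 is roughly 0.95
                per year at a monthly frequency (an assumption, adjust to taste).
    """
    k = A.shape[0]
    e1 = np.zeros(k)
    e1[0] = 1.0
    # lambda' = e1' rho A (I - rho A)^{-1} maps an innovation into the revision
    # of the discounted sum of future expected returns. The inverse exists
    # only if the VAR is stationary.
    lam = e1 @ (rho * A) @ np.linalg.inv(np.eye(k) - rho * A)
    n_dr = residuals @ lam            # discount-rate (expected-return) news
    n_cf = residuals @ (e1 + lam)     # cash-flow news = unexpected return + DR news
    return n_cf, n_dr
```

Under this sign convention the unexpected return equals n_cf - n_dr; for a VAR(p), stack the lags into companion form first and pad the residual vectors with zeros for the lagged states.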
Key estimation steps (practical)
Data inputs
Typical inputs for a variance decomposition for stock returns include the following (a short construction sketch follows the list):
- Real or nominal stock returns (monthly or quarterly), often excess returns over a short-term safe rate.
- Dividend-price ratio (log or demeaned), or alternative cash-flow proxies such as earnings-price ratios or clean-surplus earnings.
- Additional predictors if desired: interest rates, term spread, book-to-market, inflation, or macro variables used in forecasting returns.
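For concreteness, a construction sketch in pandas; the file name and column names (a total-return index, a price level, trailing 12-month dividends, and a monthly risk-free rate) are hypothetical placeholders rather than any particular vendor's format:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly inputs; all names below are illustrative.
raw = pd.read_csv("market_monthly.csv", parse_dates=["date"], index_col="date")

excess_ret = np.log(raw["tr_index"]).diff() - np.log1p(raw["rf"])  # log excess return
dp = np.log(raw["div_12m"] / raw["price"])                          # log dividend-price ratio

state = pd.DataFrame({"ret": excess_ret, "dp": dp}).dropna()
state = state - state.mean()   # demean so the VAR intercepts are not doing the work
```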
Model specification
Popular choices: a small VAR (2–6 variables), with lag order selected by information criteria or set to capture persistence (often 1–4 lags for monthly data). Stationarity is important: persistent predictors may be detrended or expressed as deviations from long-run means to avoid spurious results. Log-linearization and demeaning often help numerical stability.
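Continuing the sketch with statsmodels (the VAR(1) choice below is only to keep the subsequent news algebra simple; `state` is the demeaned DataFrame from the previous snippet):

```python
from statsmodels.tsa.api import VAR

model = VAR(state)                       # columns ordered with the return first
order = model.select_order(maxlags=6)
print("lags preferred by AIC:", order.aic, "| by BIC:", order.bic)

results = model.fit(1)                   # VAR(1); for p > 1, build a companion matrix
A = results.coefs[0]                     # (k, k) lag-1 coefficient matrix
residuals = results.resid.to_numpy()     # one-step-ahead innovations
```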
Shock identification and computation
Identification typically treats the realized unexpected return innovation in period t as the shock. Using VAR impulse responses, you compute how that shock revises forecasts of future dividends and future returns. The cash-flow news equals the present-value of forecast revisions to dividends, and the expected-return news equals the negative present-value of forecast revisions to expected returns (since higher expected returns reduce present values). The variance contribution is computed across estimated shocks and decomposed into variance due to cash-flow news, variance due to expected-return news, and twice their covariance term (which may be negative).
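A minimal sketch of that computation, assuming `n_cf` and `n_dr` are the news series from the earlier function, so the unexpected return is `n_cf - n_dr` (with this convention the covariance enters with a minus sign; under the sign convention above for expected-return news it enters with a plus, and the resulting shares are identical):

```python
import numpy as np

def variance_shares(n_cf, n_dr):
    """Shares of var(unexpected return) = var(CF) + var(DR) - 2*cov(CF, DR)."""
    var_cf = np.var(n_cf, ddof=1)
    var_dr = np.var(n_dr, ddof=1)
    cov_term = -2.0 * np.cov(n_cf, n_dr, ddof=1)[0, 1]
    total = var_cf + var_dr + cov_term
    return {"cash_flow": var_cf / total,
            "expected_return": var_dr / total,
            "covariance": cov_term / total}
```

The three shares sum to one by construction.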
Practical notes
- Frequency and horizon: many studies use monthly data and compute decompositions at monthly frequency, summing impulse responses out to a sufficiently long horizon (e.g., 240 months) or until responses die out.
- Stationarity: deterministic trends or unit roots in predictors require care; use detrending or cointegration frameworks when appropriate (a quick check is sketched just after this list).
- Sample selection: results can be sample-period dependent, so robust checks across subperiods are standard.
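As a quick check on the stationarity point above, a unit-root test on the dividend-price ratio is straightforward (a rough diagnostic, not a substitute for the cointegration treatments mentioned earlier):

```python
from statsmodels.tsa.stattools import adfuller

# Augmented Dickey-Fuller test on the persistent predictor; failing to reject
# a unit root argues for demeaning, detrending, or a cointegration framework.
stat, pvalue, *_ = adfuller(state["dp"], autolag="AIC")
print(f"ADF statistic {stat:.2f}, p-value {pvalue:.3f}")
```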
Empirical findings: classic results
Campbell’s original implementation for U.S. monthly data (1927–1988) reported that unexpected-return variance decomposes into three sizable parts: the variance of cash-flow news, the variance of expected-return news, and a covariance term. A compact summary often cited from that era is that dividend news, expected-return news, and the covariance between them each account for a substantial share of the total — on the order of one-third each — though the exact shares depend on variable choices, sample, and model.
Key qualitative findings from the classic literature include:
- Expected-return news is persistent and accounts for much of low-frequency return variation.
- Dividend news and expected-return news tend to be negatively correlated, which can amplify price swings.
- Subsample differences exist: some postwar periods show a larger role for expected-return news relative to prewar periods.
These empirical insights reshaped how researchers interpret stock return volatility: large price swings need not imply volatile fundamentals if discount-rate news varies over time.
Extensions and related literature
Researchers have extended Campbell’s framework in multiple directions. Important threads include:
Bayesian approaches
Bayesian estimation of a variance decomposition for stock returns places priors on VAR parameters to quantify posterior uncertainty. Bayesian analyses find that, with diffuse priors and short samples, posterior uncertainty on shares of variance can be large; carefully chosen informative priors or hierarchical modeling can reduce estimation noise. A Bayesian perspective is useful when small-sample uncertainty or model instability is a concern.
Time-varying volatility
Scruggs & Nardari and similar work allow the VAR residual covariance matrix to evolve over time via multivariate stochastic volatility. That yields a time-varying variance decomposition: the contributions of cash-flow news and expected-return news change with market volatility regimes. This approach helps explain why market volatility changes over time by decomposing its drivers.
Firm-level and accounting-based decompositions
At the firm level, dividends are often an imperfect proxy for cash flows. Vuolteenaho (1999) and others replace dividends with earnings or clean-surplus accounting flows to decompose firm abnormal returns into news about cash flows vs discount rates, helping to diagnose corporate-specific drivers of return variability. Such firm-level decompositions often find a larger role for cash-flow news in cross-sectional return differences.
Financial‑ratio and factor applications
Some studies use variance decomposition to link return variance to news about financial ratios (e.g., AQR’s work). By incorporating predictors like profit margins, leverage, or turnover into the VAR, researchers can attribute realized return variance to news about accounting or macro ratios, offering a bridge between asset-pricing theory and practical risk attribution.
Applications
A variance decomposition for stock returns has several practical and research uses:
- Asset-pricing research: test theories about time-varying risk premia and the sources of return predictability.
- Portfolio construction & risk attribution: decompose realized portfolio volatility to decide if price swings are due to fundamentals or changing discount-rate expectations, informing hedging choices.
- Firm-level diagnostics: determine whether an equity price change reflects better-than-expected cash flows (good news) or lower required returns (re-rating).
- Risk-management: complement factor-based VaR and stress tests by showing whether scenarios should target cash-flow shocks or discount-rate shocks.
- Crypto adaptations: although most tokens lack dividends, the decomposition idea can be adapted if you define token-equivalent cash flows (protocol fees, staking rewards, buyback mechanisms, or on-chain revenue). Use care: mapping fundamentals is subjective and often more model-dependent for crypto than for equities.
Implementation & practical considerations
Data choices
Two recurring decisions shape results:
- Use of dividends vs earnings: for aggregate markets, dividends are the canonical cash-flow measure, but buybacks and payout policy changes over the last decades argue for broader payout or earnings measures in modern samples.
- Frequency: monthly data is standard; quarterly or annual horizons are possible but change the interpretation of persistence and impulse-response horizons.
Model specification and identification
Small VARs are easier to interpret; adding many predictors increases parameter uncertainty. Identification of shocks can be done with Cholesky decompositions or structural restrictions, but the canonical decomposition emphasizes the realized unexpected return innovation as the starting shock. Robustness checks across lag orders, predictor sets, and horizons are essential.
Common pitfalls
- Small-sample bias: long-run summation and persistent predictors create imprecise estimates in finite samples.
- Model dependence: results change with variable choice, detrending method, and sample period.
- Measurement error: if dividends poorly capture firms' economic cash flows, the decomposition may misattribute news.
- Changing corporate payout patterns: the rise of buybacks and retained earnings affects the mapping from accounting flows to investor-perceived cash flows.
Tools and reproducibility
Popular toolchains include econometrics packages that support VARs and impulse-response computation (R’s vars and MCMC packages, Python’s statsmodels, or specialized Bayesian toolboxes). For time-varying volatility and Bayesian estimation, use packages supporting multivariate stochastic volatility or MCMC sampling. Practitioner guides describe numerical details for stable long-horizon sums and orthogonalization.
Criticisms and limitations
While a variance decomposition for stock returns is powerful conceptually, users should be aware of limitations:
- Dependence on model specification and chosen predictors; different setups yield different attributions.
- Identification ambiguity: the classification of news types is model-dependent and may not correspond to unique structural shocks without stronger assumptions.
- Large sampling uncertainty, especially for long-horizon decompositions and short samples.
- Changing payout practices and accounting treatments over time complicate interpretation of dividend-based decompositions.
Adapting the approach for cryptocurrencies
Applying a variance decomposition for stock returns directly to cryptocurrencies requires care. Most tokens do not have dividend-like cash flows. To adapt the framework, researchers and practitioners must:
- Define token fundamentals: protocol fees, staking yields, validator rewards, or explicit revenue share mechanisms can stand in for cash flows.
- Construct appropriate predictors: on-chain indicators (transaction count, active addresses, protocol revenue) may replace or complement accounting ratios; see the sketch after this list.
- Be transparent about assumptions: the mapping from on-chain metrics to discounted cash flows is model-dependent and often contested.
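One hedged illustration of such a mapping: the sketch below builds a token analogue of the log dividend-price ratio from hypothetical on-chain data. The file name, column names, and the assumption that fees and staking rewards actually accrue to token holders are all placeholders to be replaced with a defensible, protocol-specific specification:

```python
import numpy as np
import pandas as pd

# Hypothetical daily on-chain series; all column names are illustrative.
chain = pd.read_csv("token_onchain.csv", parse_dates=["date"], index_col="date")

monthly = chain.resample("ME").agg({"fees_usd": "sum",
                                    "staking_rewards_usd": "sum",
                                    "market_cap_usd": "last"})

# Trailing 12-month holder cash flows over market cap, as a dividend-yield analogue.
payout = (monthly["fees_usd"] + monthly["staking_rewards_usd"]).rolling(12).sum()
fee_yield = np.log(payout / monthly["market_cap_usd"])
```

This series can then enter the VAR alongside the token's excess return, with the same caveats about persistence and sample length discussed above.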
For organizations using crypto in investment or risk workflows, consider using the decomposition as an interpretive tool rather than a precise estimator, and combine it with direct on-chain analytics and scenario analysis. If you use wallets or custody tools, Bitget Wallet and Bitget’s institutional services offer integrated tools for on-chain data access and portfolio monitoring.
Worked example outline (implementation sketch)
Below is a concise implementation roadmap you can follow or hand to a data team. This is an outline, not executable code.
- Data: monthly excess returns for an equity index, log dividend-price ratio (dp), and optionally one or two macro predictors across your sample (e.g., 1927–2020).
- Preprocessing: demean dp, check stationarity, remove deterministic trends if necessary.
- VAR: estimate VAR(p) for [return, dp, predictors] with p chosen by AIC/BIC or fixed to capture persistence.
- Shock: extract the one-period unexpected return innovation (residual) as the shock of interest.
- Impulse responses: compute IRFs for dp and other predictors to that shock out to a long horizon H.
- Forecast revisions: convert IRFs into present-valued forecast revisions to dividends and expected returns (sum IRFs appropriately using discount factor beta).
- Decomposition: compute variance contributions across sample shocks to get shares due to cash-flow news, expected-return news, and covariance.
- Robustness: repeat with alternative predictors, lag orders, and subperiods; consider Bayesian estimation to quantify uncertainty.
Further reading and key references
Select sources to deepen your understanding of a variance decomposition for stock returns:
- Campbell, J. Y., "A Variance Decomposition for Stock Returns" (NBER Working Paper 1990; Economic Journal 1991) — foundational paper outlining the VAR decomposition.
- Bayesian analysis of Campbell’s decomposition — papers applying Bayesian VAR methods to quantify posterior uncertainty in decomposition shares.
- Scruggs, J. T., & Nardari, F., "Why Does Stock Market Volatility Change Over Time? A Time‑Varying Variance Decomposition for Stock Returns" (2005) — VAR with multivariate stochastic volatility.
- AQR, "The Dynamic Relation Between Stock Returns and Key Financial Ratios: A Variance Decomposition Approach" — linking financial ratios to return variance.
- Vuolteenaho, T. (1999) and firm-level decomposition literature — replacing dividends with earnings for micro-level analysis.
- Practitioner guides — variance-decomposition in risk modeling and implementation notes.
Reporting note and recent context
As of 2026-01-17, according to NBER and Economic Journal bibliographic pages, Campbell’s 1990/1991 work continues to be the primary reference for variance-decomposition approaches in asset pricing. Recent extensions emphasizing Bayesian inference and time-varying volatility address key estimation uncertainties and help interpret changing market volatility patterns without invoking unique structural shocks.
Practical takeaways
- A variance decomposition for stock returns separates realized return volatility into cash-flow news and expected-return news plus a covariance term; this helps interpret price moves.
- Use small, well-specified VARs and report robustness across predictors and samples; consider Bayesian or time-varying volatility extensions when sample uncertainty or regime shifts are material.
- When applying the concept to crypto, define defensible token cash flows (staking yields, fees) and be explicit about assumptions.
To explore practical tools for portfolio monitoring and on-chain analytics that can support decomposition-style analysis for traditional and crypto assets, consider Bitget’s portfolio tools and Bitget Wallet for secure custody and integrated data feeds. These tools can help you gather the predictors and return series required to implement a variance decomposition and track the drivers of realized volatility.
More resources
Natural follow-ups to this article include:
- A non-technical summary of the method suitable for investment committees.
- A worked example, with code in Python or R, that implements Campbell’s variance decomposition on a public equity index series.
- An adaptation of the decomposition to a simple crypto token, using staking rewards as cash flows and on-chain transaction counts as predictors.
Further exploration can help you turn the conceptual decomposition into a reproducible tool for research or risk reporting.
This article is informational and educational. It does not constitute investment advice. For secure custody and portfolio tools that can support research and risk workflows, consider Bitget Wallet and Bitget’s institutional services.