Note on Russell-based Methodologies

The literature on Russell-index rebalancing has grown complicated, and there are many conflicting viewpoints; several of the relevant papers are discussed below.


Please note that I strongly believe academic debate is healthy and important, and that reasonable people can disagree about methodologies and results. None of my statements should be considered an attack on any person. I respect and admire many of the people working in this literature, even those with whom I disagree, and I hope this page contributes to the lively debate.


Wei and Young (2021a), Wei and Young (2021b), and Glossner (2021) all argue that the methodology in Appel, Gormley, and Keim (and several other papers) can lead to biased results. Of course, it is not possible to know whether a real-world result is actually biased, since we do not know the true data-generating process. However, my paper "Do Index Funds Monitor?" also argues that this methodology can produce biased results, and we show that at least one of the main results in Appel, Gormley, and Keim (2016) is highly unlikely. Below, I discuss this finding (for more on the methodological issues, see Wei and Young's excellent discussion: Regression Discontinuity Versus Instrumental Variables: Response to Appel, Gormley, and Keim (2020)).

Appel et al. (2016) claim that an increase in passive ownership leads to a significant drop in dual-class shares within 12 months of index assignment. Their point estimates imply that approximately 135 firms in their sample remove their dual-class share structure as a result of passive ownership. This is highly unlikely, for at least three reasons.

1) Passive investors have neither the incentives nor the resources to make such changes in a large number of their portfolio firms. As we state in our paper, "Bebchuk and Hirst (2019) point out that the top three index fund families have on average only 21 investment stewardship personnel to cover 17,849 firms in their portfolio." That works out to roughly 850 portfolio firms per stewardship employee.

2) Removing dual-class shares typically requires a shareholder vote and a substantial amount of legal work. Russell index rebalancing occurs at the end of June, while most corporate shareholder votes occur in the spring, typically between April and June. This means that following a change in passive ownership at the end of June, the earliest possible corporate vote would be nearly 12 months away. Even after a vote, implementing a major policy change (like the removal of dual-class shares) would likely take still more time. Yet Appel et al. claim a significant reduction in dual-class shares in the first twelve months after index rebalancing. Even if passive funds did try to remove dual-class shares from their portfolio firms, it is highly unlikely that they could accomplish it so quickly.

3) Imbens and Lemieux (2008) argue that one way to test identification assumptions is to examine dependent variables "known not to be affected by the treatment." In a sense, this provides a falsification test: the setup should not find a significant result for outcomes that are known not to be affected. We believe dual-class shares should be considered "known not to be affected." This is simple to see using summary statistics (a minimal sketch of such a check appears below). Of the 1,702 firm-years in our replication of Appel et al. (2016) for which we observe dual-class share status, only six firms change their share structure! Yet the methodology in Appel et al. implies that approximately 135 firms removed their dual-class shares. This simply cannot be true. We do not need complicated analyses or strong assumptions. Just from the summary statistics, it is clear that Appel et al. is unlikely to be correct on this point.
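To make the summary-statistics check concrete, here is a minimal, hypothetical sketch of the kind of tabulation involved. It is not the paper's actual code; the file name and column names (firm_id, year, dual_class) are placeholders.

```python
import pandas as pd

# Hypothetical firm-year panel with a dual-class indicator.
# File name and column names are placeholders, not from the paper.
panel = pd.read_csv("firm_years.csv")  # columns: firm_id, year, dual_class (0/1)

# A firm "changes" its share structure if its dual-class indicator
# takes more than one value over the sample window.
status_values = (
    panel.sort_values(["firm_id", "year"])
         .groupby("firm_id")["dual_class"]
         .nunique()
)
n_changers = int((status_values > 1).sum())

print(f"Firm-years observed: {len(panel)}")
print(f"Firms that change dual-class status: {n_changers}")
```

If a count like this is tiny relative to the number of removals implied by the point estimates, the falsification test fails before any regression is run.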


I also note that the critique of my own work in "Identification using Russell 1000/2000 index assignments: A discussion of methodologies" is incorrect. In it, Appel, Gormley, and Keim write:

"For example, in an attempt to study the effects of passive ownership on corporate governance, Heath, Macciocchi, Michaely, and Ringengberg (2020) uses a difference-indifferences type estimation that compares the post-switch change in outcomes of switchers vs. nonswitchers. However, by failing to control for the changes in end-of-May CRSP market caps that led one stock to switch but not the other, the estimation ignores that switchers and non-switchers are inherently different at the time of the switch. In essence, their difference-in-differences estimation likely suffers from an omitted variable bias because it fails to control for the critical determinant of index switches. A similar concern applies to Coles, Heath, and Ringgenberg (2020)."


It is easy to see that this statement in Appel, Gormley, and Keim is factually incorrect. In our specification, we include a fixed effect that exactly controls for the forcing variable, market cap. We make this point very clear in our paper: the fixed effect appears in Equation 1, and the text discusses it in detail. We write:

"Because each firm had a single ranking within a given cohort, the fixed effects absorb any correlation of the outcome variable with both the true ranking and the error in the proxy ranking. Thus, the specification estimates the effects of switching indexes, as would a perfectly measured RDD, but in a way that is not sensitive to the measurement error in the forcing variable because the fixed effect eliminates the need for a control function. Indeed, any control function would be subsumed by the firm-by-cohort fixed effects. "


We also provide the code for our paper here. In the code, it is easy to see that the fixed effect controls for market cap; a simplified illustration of the idea follows.
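For intuition only, here is a minimal, hypothetical sketch of a difference-in-differences specification with firm-by-cohort fixed effects. It is not the paper's actual code, and the file name and variable names (outcome, switcher_post, firm_cohort, year) are placeholders. Because the end-of-May market-cap ranking is constant within a firm-cohort, the firm-by-cohort fixed effect absorbs it, so no separate control function for market cap is needed.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-cohort-year panel around Russell reconstitutions.
# Column names are placeholders, not from the paper:
#   outcome       - the governance outcome of interest
#   switcher_post - 1 for index switchers in the post-reconstitution period
#   firm_cohort   - identifies a firm within a given reconstitution cohort
#   year          - calendar time
df = pd.read_csv("panel.csv")

# The C(firm_cohort) fixed effect absorbs anything constant within a
# firm-cohort, including the end-of-May market-cap ranking (the forcing
# variable), so no market-cap control function enters the regression.
model = smf.ols(
    "outcome ~ switcher_post + C(firm_cohort) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_cohort"]})

print(model.params["switcher_post"])
```

The design choice the sketch illustrates is the one described in the quoted passage above: any function of a variable that is fixed within a firm-cohort is subsumed by the firm-by-cohort fixed effect.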