Counterfactual Auditing Methods

Causal inference frameworks for auditing algorithmic systems

This project develops methodological frameworks for causally estimating the effects of algorithmic systems using counterfactual experimental designs. We pioneered the use of “counterfactual bots”: automated agents that replicate user behaviors under different algorithmic exposure conditions.

Our counterfactual bot framework enables causal inference in platform studies: digital twins of real users receive different algorithmic treatments while their behavior is held fixed, so differences in content exposure can be attributed to the algorithm rather than to user choice.

Key methodological contributions include:

  • Counterfactual bot design for platform auditing and causal inference
  • Digital twin methodologies that separate user agency from algorithmic influence
  • Cross-platform experimental frameworks for comparative algorithmic studies
  • Causal inference tools adapted for sociotechnical systems research
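The digital-twin idea above can be illustrated with a toy simulation. Everything below is a hypothetical sketch, not the published pipeline: the function names, the one-step recommender, and the content categories are invented for illustration. One bot replays a user's observed trace; its counterfactual twin shares a burn-in period and then follows the recommender instead, so the gap in exposure isolates the algorithm's contribution.

```python
# Hypothetical sketch of the counterfactual-bot design (illustrative only):
# a "user twin" replays an observed trace, while a counterfactual bot shares
# a burn-in period and then follows the recommender instead.

def run_user_bot(trace):
    """The user twin simply replays the observed viewing trace."""
    return list(trace)

def run_counterfactual_bot(trace, switch_at, recommend, steps):
    """Replay the trace up to `switch_at`, then follow recommendations."""
    history = list(trace[:switch_at])
    while len(history) < steps:
        history.append(recommend(history[-1]))
    return history

def exposure(history, category):
    """Fraction of items in `history` belonging to `category`."""
    return history.count(category) / len(history)

# Toy setup: the recommender just suggests more of the current category,
# while the (simulated) user actively seeks out "fringe" content.
same_category = lambda item: item
trace = ["news", "news", "fringe", "fringe", "fringe", "news"]

user = run_user_bot(trace)
counterfactual = run_counterfactual_bot(
    trace, switch_at=2, recommend=same_category, steps=len(trace)
)

print(exposure(user, "fringe"))            # 0.5
print(exposure(counterfactual, "fringe"))  # 0.0
```

Because behavior is identical up to the switch point, the exposure gap after it reflects the recommender alone; in this toy example the recommender-driven bot sees no fringe content at all, mirroring the finding that user intent, not the algorithm, drives exposure.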

This work, published in PNAS, established new standards for rigorous causal evaluation of algorithmic systems and demonstrated that user intent often outweighs algorithmic bias in content exposure patterns. The methodology is now being applied across domains including youth mental health, misinformation, and content recommendation systems.

References

2024

  1. Causally estimating the effect of YouTube’s recommender system using counterfactual bots
    Homa Hosseinmardi, Amir Ghasemian, Miguel Rivera-Lanas, Manoel Horta Ribeiro, Robert West, and Duncan J Watts
    Proceedings of the National Academy of Sciences, 2024

2021

  1. Examining the consumption of radical content on YouTube
    Homa Hosseinmardi, Amir Ghasemian, Aaron Clauset, Markus Mobius, David M Rothschild, and Duncan J Watts
    Proceedings of the National Academy of Sciences, 2021