Counterfactual Auditing Methods
Causal inference frameworks for auditing algorithmic systems
This project develops novel methodological frameworks for estimating the causal effects of algorithmic systems using counterfactual experimental designs. We pioneered the use of “counterfactual bots”: automated agents that replicate user behaviors under different algorithmic exposure conditions.
Our counterfactual bot framework enables causal inference in platform studies by creating digital twins that experience different algorithmic treatments while controlling for user behavior.
Key methodological contributions include:
- Counterfactual bot design for platform auditing and causal inference
- Digital twin methodologies that separate user agency from algorithmic influence
- Cross-platform experimental frameworks for comparative algorithmic studies
- Causal inference tools adapted for sociotechnical systems research
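The paired-bot design above can be sketched in miniature. In this hypothetical simulation (the recommender stub, labels, and functions below are illustrative assumptions, not the published implementation), an “organic” bot replays a recorded user trace while a counterfactual bot starts from the same seed item but follows only the recommender, so the difference in their exposure distributions isolates the algorithm's contribution:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical content labels; a real audit would query a live platform.
LABELS = ["news", "music", "politics", "sports"]

def recommend(history):
    """Stub recommender (assumption): biases toward the most recent label."""
    weights = [3 if label == history[-1] else 1 for label in LABELS]
    return random.choices(LABELS, weights=weights)[0]

def user_bot(trace):
    """Organic bot: replays a recorded user trace exactly (user agency)."""
    return Counter(trace)

def counterfactual_bot(seed_item, steps):
    """Counterfactual bot: same seed, but always follows the recommender,
    isolating the algorithm's influence on content exposure."""
    history = [seed_item]
    for _ in range(steps):
        history.append(recommend(history))
    return Counter(history)

# Compare exposure distributions between the paired bots.
trace = ["news", "news", "politics", "news", "music"]
organic = user_bot(trace)
algorithmic = counterfactual_bot(trace[0], steps=len(trace) - 1)
effect = {label: algorithmic[label] - organic[label] for label in LABELS}
print(effect)
```

Because both bots consume the same number of items from the same starting point, per-label differences in `effect` sum to zero and attribute any exposure shift to the recommender rather than to user behavior.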
This work, published in PNAS, established new standards for rigorous causal evaluation of algorithmic systems and demonstrated that user intent often outweighs algorithmic bias in content exposure patterns. The methodology is now being applied across domains including youth mental health, misinformation, and content recommendation systems.