Agentic AI Persuasions

Understanding AI agents' persuasive capabilities in realistic social settings

This project investigates how AI agents influence human behavior through persuasive communication in realistic social and digital environments, studying the mechanisms, effectiveness, and ethical implications of AI-driven persuasion at scale.

We examine agentic AI systems' ability to shape opinions, decisions, and behaviors through strategic communication, using controlled experiments embedded in naturalistic settings.

Research focus areas include:

  • Persuasion mechanisms employed by conversational AI agents
  • Effectiveness measurement across different demographics and contexts
  • Detection methods for identifying AI-generated persuasive content
  • Vulnerability assessment of human susceptibility to AI persuasion
  • Ethical frameworks for responsible development of persuasive AI

Persuasion in the Wild

Our work addresses critical questions about AI agency in social influence, providing empirical evidence to inform policy and design decisions for AI systems that interact with humans in persuasive contexts.

Led by: Amir Ghasemian
Team: Homa Hosseinmardi, Pooriya Jamie, Sikata Sengupta, Rezvaneh (Shadi) Rezapour, Aria Pessianzadeh

The Dynamics of Bias in Multi-Agent Decision Systems

Led by: Amir Ghasemian
Team: Homa Hosseinmardi, Hamed Loghmani

Using Therapeutic Interactions to Improve General-Purpose LLM Safety

Led by: Homa Hosseinmardi
Team: Amir Ghasemian, Joan Asarnow, Anita Taha