

  • Social Informatics, 
  • Media System Design, 
  • Social Media

Xiaoting Yang is a BISA PhD student currently studying Informatics and System Science.

In an era dominated by technological advancement, the integration of artificial intelligence (AI) into many aspects of our lives is undeniable. One area where AI has made substantial inroads is persuasive communication. AI systems are increasingly employed to influence human decisions and behaviours, whether in marketing, politics, healthcare, or personal interactions. This shift in the landscape of persuasion prompts a fundamental question: how does AI persuasion differ from human persuasion, and how can we enhance the persuasive efficacy of AI systems?

The concept of persuasion has deep-rooted historical significance, dating back to ancient rhetorical traditions and psychological theories of human cognition and communication. Traditional persuasion, reliant upon human-to-human interaction, involves a complex interplay of verbal and non-verbal cues, empathy, and nuanced understanding of individual differences. In contrast, AI persuasion leverages algorithms, data analytics, and machine learning techniques to tailor messages and strategies to target audiences. This divergence in approaches introduces a unique set of challenges and opportunities that necessitate thorough investigation.

The objective of this research is to provide a comprehensive exploration of AI persuasion, contrasting it with human persuasion, and developing strategies to enhance the effectiveness of AI-driven persuasive communication. In doing so, this study will address several key dimensions:

  • Understanding the Foundations of Persuasion: We will delve into the foundational theories of persuasion, drawing from psychology, communication studies, and rhetoric, to establish a framework for evaluating and comparing AI and human persuasion.
  • Analysing AI Persuasion Techniques: A critical examination of existing AI persuasion techniques, such as sentiment analysis, recommendation systems, and chatbots, will be conducted to elucidate their strengths, limitations, and ethical implications.
  • Comparative Analysis: Empirical research will be conducted to compare the outcomes of AI persuasion interventions to those of human persuasion in various contexts. This comparative analysis will uncover the relative strengths and weaknesses of each approach.
  • Human-AI Collaboration: Exploring ways in which AI can enhance human persuasion, and vice versa, will be a key focus. This includes investigating how AI can provide insights to human persuaders and augment their capabilities.
  • Ethical Considerations: The ethical dimensions of AI persuasion, including issues related to privacy, manipulation, and consent, will be rigorously examined, with a view toward developing ethical guidelines and best practices.
  • Enhancing AI Persuasion Effectiveness: Building upon the findings from the previous dimensions, this research aims to propose novel strategies and interventions to improve the persuasive effectiveness of AI systems while maintaining ethical standards.
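As a concrete illustration of the kind of technique examined under the second dimension, the sketch below shows how a sentiment-analysis step might feed into message tailoring. It is a deliberately minimal toy: the word lexicon, the framing templates, and the function names are invented for illustration, not drawn from any system studied in this research.

```python
# Toy lexicon-based sentiment analysis driving message tailoring.
# Lexicon entries and message framings are hypothetical examples.

POSITIVE = {"love", "great", "excited", "happy", "good"}
NEGATIVE = {"worried", "bad", "angry", "frustrated", "hate"}

def sentiment_score(text: str) -> int:
    """Count positive minus negative words (a crude lexicon approach)."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def tailor_message(audience_text: str) -> str:
    """Choose a persuasive framing based on the audience's expressed mood."""
    score = sentiment_score(audience_text)
    if score > 0:
        return "Build on that enthusiasm: here is how to get even more value."
    if score < 0:
        return "We hear your concerns: here is how this addresses them."
    return "Here are the key facts so you can decide for yourself."

print(tailor_message("I am worried and frustrated about this change"))
```

Even this simple example surfaces the questions the research dimensions above raise: the tailoring is opaque to the audience (an ethics concern), and its effectiveness relative to a human persuader reading the same message is exactly what the comparative analysis would test.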

The research presented in this dissertation will contribute to a deeper understanding of the evolving landscape of persuasion in an AI-driven world. It seeks to bridge the gap between AI and human persuasion, leveraging the strengths of both to optimize persuasive outcomes. Ultimately, this study aspires to inform the responsible and ethical deployment of AI in persuasive contexts, fostering a more nuanced and informed approach to influence in our increasingly digital and AI-enhanced society.