Han Yu, Zhiqi Shen, Chunyan Miao, Cyril Leung, Victor R. Lesser, Qiang Yang
Abstract
We propose a taxonomy which divides the field into four areas: 1) exploring ethical dilemmas; 2) individual ethical decision frameworks; 3) collective ethical decision frameworks; and 4) ethics in human-AI interactions.
Ethics in Human-AI Interactions

The Belmont Report, which has been suggested as a guide for governing ethics in human-AI interactions [Luckin, 2017; Yu et al., 2017b], outlines three ethical principles: 1) people's personal autonomy should not be violated (they should be able to maintain their free will when interacting with the technology); 2) the benefits brought about by the technology should outweigh its risks; and 3) the benefits and risks should be distributed fairly among the users (people should not be discriminated against based on personal backgrounds such as race, gender and religion).
In the context of persuasion agents [Kang et al., 2015; Rosenfeld and Kraus, 2016], a large-scale study was conducted to investigate human perceptions of the ethics of persuasion by an AI agent [Stock et al., 2016]. The authors tested three persuasive strategies: 1) appealing to the participants emotionally; 2) presenting the participants with utilitarian arguments; and 3) lying. The results show that participants hold a strong preconceived negative attitude towards the persuasion agent, and that argumentation-based and lying-based persuasion strategies work better than emotional ones. These findings did not vary significantly across genders or cultures. Nevertheless, the adoption of persuasion strategies should take into account differences in individual personality, ethical attitude and expertise in the given domain.