TY - JOUR
T1 - Bad News? Send an AI. Good News? Send a Human
AU - Garvey, Aaron M.
AU - Kim, Tae Woo
AU - Duhachek, Adam
N1 - Publisher Copyright:
© American Marketing Association 2022.
PY - 2023/1
Y1 - 2023/1
AB - The present research demonstrates how consumer responses to negative and positive offers are influenced by whether the administering marketing agent is an artificial intelligence (AI) or a human. In the case of a product or service offer that is worse than expected, consumers respond more favorably when dealing with an AI agent, in the form of increased purchase likelihood and satisfaction. In contrast, for an offer that is better than expected, consumers respond more positively to a human agent. The authors demonstrate that AI agents, compared with human agents, are perceived to have weaker intentions when administering offers, which accounts for this effect. That is, consumers infer that AI agents lack selfish intentions in the case of an offer that favors the agent and lack benevolent intentions in the case of an offer that favors the customer, thereby dampening the extremity of consumer responses. Moreover, the authors demonstrate a moderating effect, such that marketers may anthropomorphize AI agents to strengthen perceived intentions, providing an avenue to receive due credit from consumers when the agent provides a better offer and to mitigate blame when it provides a worse offer. Potential ethical concerns with the use of AI to bypass consumer resistance to negative offers are discussed.
KW - algorithm
KW - anthropomorphism
KW - artificial intelligence
KW - expectations
KW - intentions
KW - pricing
KW - robots
KW - satisfaction
KW - technology
UR - http://www.scopus.com/inward/record.url?scp=85124844076&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124844076&partnerID=8YFLogxK
U2 - 10.1177/00222429211066972
DO - 10.1177/00222429211066972
M3 - Article
AN - SCOPUS:85124844076
SN - 0022-2429
VL - 87
SP - 10
EP - 25
JO - Journal of Marketing
JF - Journal of Marketing
IS - 1
ER -