Value alignment is a property of an intelligent agent indicating that it pursues only goals and activities that are beneficial to humans. Traditional approaches to value alignment use imitation learning or preference learning to infer human values by observing human behavior. We introduce a complementary technique in which a value-aligned prior is learned from naturally occurring stories that encode societal norms. Training data is sourced from the children's educational comic strip Goofus & Gallant. In this work, we train multiple machine learning models to classify natural language descriptions of situations found in the comic strip as normative or non-normative by identifying whether they align with the main characters' behavior. We also report the models' performance when transferred to two unrelated tasks with little or no additional training on the new task.
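The normative/non-normative classification task described in the abstract can be illustrated with a minimal text-classification sketch. This is not the authors' implementation (the paper trains multiple models on the Goofus & Gallant corpus); it is a hedged example using TF-IDF features and logistic regression, with invented sentences standing in for the comic-strip descriptions, where Gallant models normative behavior (label 1) and Goofus non-normative behavior (label 0).

```python
# Illustrative sketch only, not the paper's method or data.
# A bag-of-words binary classifier over short situation descriptions:
# label 1 = normative (Gallant-like), label 0 = non-normative (Goofus-like).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-in examples; the real corpus comes from Goofus & Gallant.
train_texts = [
    "He thanks his friend for the gift.",        # normative
    "She waits her turn in line.",               # normative
    "He returns the lost wallet to its owner.",  # normative
    "He grabs the toy without asking.",          # non-normative
    "She interrupts everyone at dinner.",        # non-normative
    "He leaves the mess for someone else.",      # non-normative
]
train_labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features feeding a logistic-regression classifier: a simple
# stand-in for the "value-aligned prior" over situation descriptions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Each prediction is a normative (1) / non-normative (0) judgment.
preds = model.predict(["She thanks the teacher for her help."])
```

A model like this could then be evaluated zero-shot or few-shot on unrelated tasks, as the abstract describes for the transfer experiments.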
Title of host publication: AIES 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
Number of pages: 7
State: Published - Feb 7 2020
Event: 3rd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2020, co-located with AAAI 2020 - New York, United States
Duration: Feb 7 2020 → Feb 8 2020
Bibliographical note (Funding Information): This material is based upon work supported by the National Science Foundation under Grant No. 1849231.
© 2020 Copyright held by the owner/author(s).
Keywords
- Learning from Stories
- Natural Language Processing
- Value Alignment
ASJC Scopus subject areas
- Artificial Intelligence