Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback

dc.contributor.author: Conitzer, Vincent
dc.contributor.author: Freedman, Rachel
dc.contributor.author: Heitzig, Jobst
dc.contributor.author: Holliday, Wesley H.
dc.contributor.author: Jacobs, Bob M.
dc.contributor.author: Lambert, Nathan
dc.contributor.author: Zwicker, William S.
dc.date.accessioned: 2026-04-04T18:48:35Z
dc.date.available: 2026-04-04T18:48:35Z
dc.date.issued: 2024
dc.description: 41st International Conference on Machine Learning (ICML 2024), 21 July 2024 through 27 July 2024, Vienna, Austria (201670)
dc.description.abstract: Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about “collective” preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023. Copyright 2024 by the author(s).
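
The abstract's central question, how to aggregate diverging human preferences into a collective choice, is the classical setting of voting rules. As a minimal sketch (not taken from the paper; the annotator rankings and output names below are hypothetical), the Borda count is one simple social choice rule that turns individual rankings of candidate model outputs into a single collective ranking:

    from collections import defaultdict

    def borda_scores(rankings):
        # Borda count: in a ranking of m outputs, the output in (0-indexed)
        # position i earns m - 1 - i points; points sum across annotators.
        scores = defaultdict(int)
        for ranking in rankings:
            m = len(ranking)
            for position, output in enumerate(ranking):
                scores[output] += m - 1 - position
        return dict(scores)

    # Three hypothetical annotators rank the same three candidate outputs, best first.
    annotator_rankings = [
        ["output_A", "output_B", "output_C"],
        ["output_B", "output_A", "output_C"],
        ["output_C", "output_A", "output_B"],
    ]

    scores = borda_scores(annotator_rankings)
    collective_ranking = sorted(scores, key=scores.get, reverse=True)
    print(scores)              # {'output_A': 4, 'output_B': 3, 'output_C': 2}
    print(collective_ranking)  # ['output_A', 'output_B', 'output_C']

Borda is only one of many rules studied in the social choice literature; which rule, if any, is appropriate for RLHF-style feedback is exactly the kind of question the paper argues the field should take up.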
dc.identifier.endpage: 9360
dc.identifier.issn: 2640-3498
dc.identifier.scopus: 2-s2.0-85203826255
dc.identifier.scopusquality: Q1
dc.identifier.startpage: 9346
dc.identifier.uri: https://hdl.handle.net/11411/10241
dc.identifier.volume: 235
dc.indekslendigikaynak: Scopus
dc.language.iso: en
dc.publisher: ML Research Press
dc.relation.ispartof: Proceedings of Machine Learning Research
dc.relation.publicationcategory: Conference Item - International - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/closedAccess
dc.snmz: KA_Scopus_20260402
dc.subject: Adversarial Machine Learning
dc.subject: Contrastive Learning
dc.subject: Social Psychology
dc.subject: Collective Preference
dc.subject: Fine Tuning
dc.subject: Foundation Models
dc.subject: Modeling Behaviour
dc.subject: Multiple Outputs
dc.subject: Social Choice
dc.subject: Reinforcement Learning
dc.title: Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback
dc.type: Conference Paper

Files