Source type: Discussion paper
Item type: Paper
Source ID: DP17298
DP17298 Aligned with Whom? Direct and social goals for AI systems
Anton Korinek; Avital Balwit
Date published: 2022-05-11
Publication year: 2022
Language: English
Abstract: As artificial intelligence (AI) becomes more powerful and widespread, the AI alignment problem - how to ensure that AI systems pursue the goals that we want them to pursue - has garnered growing attention. This article distinguishes two types of alignment problems depending on whose goals we consider, and analyzes the different solutions necessitated by each. The direct alignment problem considers whether an AI system accomplishes the goals of the entity operating it. In contrast, the social alignment problem considers the effects of an AI system on larger groups or on society more broadly. In particular, it also considers whether the system imposes externalities on others. Whereas solutions to the direct alignment problem center around more robust implementation, social alignment problems typically arise because of conflicts between individual and group-level goals, elevating the importance of AI governance to mediate such conflicts. Addressing the social alignment problem requires both enforcing existing norms on AI developers and operators and designing new norms that apply directly to AI systems.
Topics: Industrial Organization; Macroeconomics and Growth; Public Economics
Keywords: Agency theory; Delegation; Direct alignment; Social alignment; AI governance
URL: https://cepr.org/publications/dp17298
Source think tank: Centre for Economic Policy Research (United Kingdom)
Resource type: Think-tank publication
Item identifier: http://119.78.100.153/handle/2XGU8XDN/546323
Recommended citation (GB/T 7714):
Anton Korinek, Avital Balwit. DP17298 Aligned with Whom? Direct and social goals for AI systems. 2022.
Files in this item: No files are associated with this item.
