Source Type | Research Reports
Document Type | Report
DOI | https://doi.org/10.7249/RR1744
ISBN | 9780833097637
Source ID | RR-1744-RC
Title | An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence
Authors | Osonde A. Osoba; William Welser IV
Publication Date | 2017
Publication Year | 2017
Pages | 44
Language | English
Conclusions |
Algorithms and Artificial Intelligence Agents Influence Many Areas of Life Today
- In particular, these artificial agents influence the news articles people read and the advertising paired with them, access to credit and capital investment, risk assessments for convicts, and more.
This Reliance on Artificial Agents Carries Risks that Have Caused Concern
- The potential for bias is one concern. Algorithms give the illusion of being unbiased, but they are written by people and trained on socially generated data, so they can encode and amplify human biases. The use of artificial agents in sentencing and other legal contexts has caused particular concern about bias.
- Another concern is that increasing reliance on artificial agents is fueling the rapid automation of jobs, even jobs that would seem to rely heavily on human intelligence, such as journalism and radiology.
- Other risks include the possibility of hacked reward functions (an issue in machine learning) and the inability to account for cultural differences.
Remedies Will Most Likely Require a Combination of Technical and Nontechnical Approaches
- Relying on algorithms for autonomous decisionmaking requires equipping them with means of auditing the causal factors behind their decisions.
- Algorithms can lead to inequitable outcomes. Instilling a healthy dose of informed skepticism in the public would help reduce the effects of automation bias.
- Training and diversity in the ranks of algorithm developers could help improve sensitivity to potential disparate impact problems.
Abstract |
- Identify critical services and subsystems that require "human-in-the-loop" decisionmaking. Selection criteria may include high-risk systems or systems that require special accountability. Limit the role of artificial agents in these systems to a strictly advisory capacity. Emphasize the need for the ability to audit the results of these advisory artificial agents.
- Establish best practices for auditing algorithmic decisionmaking aids designed for use in government services and policy domains (e.g., the criminal justice system and social services administration). This should include specific guidance discouraging the use of unaccredited third-party black-box algorithmic solutions. Audit procedures should also address questions of disparate impact.
- Adopt standardized disclosure practices to inform stakeholders when decisions affecting them are algorithmically generated. Institute standard procedures for appealing or reviewing such decisions.
- Invest research funds in the study of algorithmic disparate impact. Engage with the commercial artificial intelligence community to share best practices.
- Address diversity issues in the science, technology, engineering, and math educational pipeline. Update accreditation guidelines for engineering schools to include more training on the effects of technology on society and on sociotechnical systems more generally.
Topics | Artificial Intelligence; Big Data; Criminal Justice; Cyber and Data Sciences; Databases and Data Collection, Analysis, and Processing; Racial Discrimination; Robust Decision Making
URL | https://www.rand.org/pubs/research_reports/RR1744.html
Source Think Tank | RAND Corporation (United States)
Citation Statistics |
Resource Type | Think Tank Publication
Item Identifier | http://119.78.100.153/handle/2XGU8XDN/108646
Recommended Citation (GB/T 7714) | Osonde A. Osoba, William Welser IV. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. 2017.
Filename: x1535045537267.jpg
Format: JPEG
Filename: RAND_RR1744.pdf
Format: Adobe PDF
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.