Gateway to Think Tanks
Source Type | Report |
Document Type | Report |
DOI | https://doi.org/10.7249/RRA1773-1 |
Source ID | RR-A1773-1 |
Title | Labelling initiatives, codes of conduct and other self-regulatory mechanisms for artificial intelligence applications: From principles to practice and considerations for the future |
Authors | Camilla d'Angelo; Isabel Flanagan; Immaculate Dadiso Motsi-Omoijiade; Mann Virdee; Salil Gunashekar |
Publication Date | 2022-04-25 |
Publication Year | 2022 |
Pages | 136 |
Language | English |
Conclusions | We identified and analysed a range of self-regulatory mechanisms — such as labelling initiatives, certification schemes, seals, trust/quality marks and codes of conduct — across diverse geographical contexts, sectors and AI applications. The initiatives span different stages of development, from early stage (and still conceptual) proposed mechanisms to operational examples, but many have yet to gain widespread acceptance and use. Many of the initiatives assess AI applications against ethical and legal criteria that emphasise safety, human rights and societal values, and are often based on principles that are informed by existing high-level ethical frameworks. We found a series of opportunities and challenges associated with the design, development and implementation of these voluntary, self-regulatory tools for AI applications. We outlined a set of key considerations that stakeholders can take forward to understand the potential implications for future action when designing, implementing and incentivising the take-up of voluntary, self-regulatory mechanisms, and to help contribute to the creation of a flexible and agile regulatory environment. |
Abstract | Artificial intelligence (AI) is recognised as a strategically important technology that can contribute to a wide array of societal and economic benefits. However, it is also a technology that may present serious challenges and have unintended consequences. Within this context, trust in AI is recognised as a key prerequisite for the broader uptake of this technology in society. It is therefore vital that AI products, services and systems are developed and implemented responsibly, safely and ethically. Through a literature review, a crowdsourcing exercise and interviews with experts, we aimed to examine evidence on the use of labelling initiatives and schemes, codes of conduct and other voluntary, self-regulatory mechanisms for the ethical and safe development of AI applications. We draw out a set of common themes, highlight notable divergences between these mechanisms, and outline anticipated opportunities and challenges associated with developing and implementing them. We also offer a series of topics for further consideration to best balance these opportunities and challenges. These topics present a set of key learnings that stakeholders can take forward to understand the potential implications for future action when designing and implementing voluntary, self-regulatory mechanisms. The analysis is intended to stimulate further discussion and debate across stakeholders as applications of AI continue to multiply across the globe, particularly considering the European Commission's recently published draft proposal for AI regulation. |
Subjects | Artificial Intelligence; Emerging Technologies; Science and Technology Legislation; Science, Technology, and Innovation Policy |
URL | https://www.rand.org/pubs/research_reports/RRA1773-1.html |
Source Think Tank | RAND Corporation (United States) |
Resource Type | Think Tank Publication |
Item Identifier | http://119.78.100.153/handle/2XGU8XDN/524773 |
Recommended Citation (GB/T 7714) | Camilla d'Angelo, Isabel Flanagan, Immaculate Dadiso Motsi-Omoijiade, et al. Labelling initiatives, codes of conduct and other self-regulatory mechanisms for artificial intelligence applications: From principles to practice and considerations for the future. 2022. |
Files in This Item |
File Name (Size) | Resource Type | Version Type | Access Type | License |
RAND_RRA1773-1.pdf (9833KB) | Think Tank Publication | | Restricted Access | CC BY-NC-SA |
x1650848643120.jpg.p (4KB) | Think Tank Publication | | Restricted Access | CC BY-NC-SA |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.