15/03/2022
The Centre for Data Ethics and Innovation has set out the steps needed to develop a world-leading AI assurance ecosystem in the UK.
The Centre for Data Ethics and Innovation (CDEI), the government's expert body for enabling trustworthy innovation in data and Artificial Intelligence (AI), has now published a roadmap for building a world-leading AI assurance ecosystem in the UK.
Building an AI assurance ecosystem
The roadmap, outlined as a commitment in the UK's National AI Strategy, was developed after requests from public bodies for an ecosystem of tools and services capable of detecting and mitigating the range of risks presented by AI, and of driving trustworthy adoption. It responds to one of the most pressing issues in AI governance identified by international organisations, including the Global Partnership on AI, the OECD and the World Economic Forum.
Assurance services, such as audit, certification and impact assessments, are common in other sectors, such as financial services and cybersecurity. These tools ensure that complex products are trustworthy and compliant with regulation, giving organisations the confidence to invest and delivering better outcomes for consumers. Assurance services for AI, however, remain relatively undeveloped.
Creating a roadmap
The roadmap, the first of its kind, brings coherence to a fragmented and nascent ecosystem. It sets out the roles and responsibilities of different stakeholders, and identifies six priority areas for action:
Generate demand for reliable and effective assurance across the AI supply chain, improving understanding of AI risks and of accountability for mitigating them
Build a dynamic, competitive AI assurance market that provides a range of effective services and tools
Develop standards that provide a common language for AI assurance
Build an accountable AI assurance profession to ensure that AI assurance services are also trustworthy and high quality
Support organisations to meet regulatory obligations by setting requirements that can be assured against
Improve links between industry and independent researchers so that researchers can help develop assurance techniques and identify AI risks