Reflecting evidence from 280 witnesses drawn from government, academia and industry, and nine months of investigation, the UK House of Lords Select Committee on Artificial Intelligence published its report “AI in the UK: ready, willing and able?” on April 16, 2018 (the Report). The Report considers the future of AI in the UK, from perceived opportunities to risks and challenges. In addition to scoping the legal and regulatory landscape, the Report considers the role of AI in a social and economic context, and proposes a set of ethical guidelines. This blog post sets out those ethical guidelines and summarises some of the key features of the Report.
Ethical AI
The ethical use of AI is central to the Report, which proposes five key principles for a cross-sector ethical AI code:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
The Report suggests that, in the short term, this AI code is more appropriate than legislation, although the Committee does not rule out regulation in the future.
Access to Data
One of the key concerns set out in the Report is ensuring that all companies have “fair and reasonable access to data”. The Committee notes the possibility that the ‘Big Tech’ companies may use network effects to build up large proprietary datasets which are difficult to match. In response, the Committee looks to ethical, data protection and competition frameworks, including a review of the use of data by the Competition and Markets Authority.
Healthcare and AI
The Report notes that the current fragmented cooperation between the National Health Service (NHS) and AI developers risks undervaluing the wealth of information held by the NHS. To avoid this, the Report calls for the creation of a framework for sharing appropriately anonymised NHS data. Patients should be made aware of the potential use of their data, and given the opportunity to opt out. The Report identifies several elements as pivotal to the use of AI in healthcare: public acceptance of AI being used in treatment and of the use of patient data, an NHS equipped to deploy new technology, and AI-trained staff.
Criminal misuse of AI
A further concern set out in the Report is that AI could be used to cause harm, for example by facilitating cyber-attacks or data sabotage. The Report invites the UK Cabinet Office to address this risk in its Cyber Security Science & Technology Strategy, and urges further research on necessary protection mechanisms.
Other key take-aways
In addition to the ethical questions and competition concerns identified, the Report also addresses further issues that may shape the future of AI in the UK.
First, the Report recognises that legal liability and accountability require clarification. In particular, referring to the potential for AI to cause harm in the event of a malfunction, underperformance or an erroneous decision, the Committee urges the UK Law Commission to ensure that these risks are adequately addressed.
Second, the Committee calls on industry to develop voluntary mechanisms to inform consumers when AI is used to make significant or sensitive decisions that affect them.
Third, the Report warns that the current focus on deep learning technologies may lead to under-exploitation of other AI opportunities, and calls on the Government and universities to support diverse AI research.
Finally, the Report discusses increasing the number of visas available for people with valuable skills in AI-related areas, inclusion of the ethical design and use of AI technology in school curriculums, and retraining initiatives in the job market.
What to expect next
There is a lot of political and regulatory focus on AI, not only in the UK but throughout Europe and the world. The Report is just one piece in a larger puzzle. The European Commission’s Communication on “Artificial Intelligence for Europe” will be addressed elsewhere on this blog, and French President Macron recently announced investments of nearly EUR 1.5 billion in AI technology by the end of 2022.
The UK Government is expected to respond to the Report with a range of policy initiatives and, potentially, new legislation.