The 19th of February saw the release of the European Commission’s white paper on AI, which remains open to public consultation until May. While extolling the virtues of AI, such as its much-anticipated roles in fine-tuning medical diagnostics and mitigating climate breakdown, the white paper ranks intrusion and privacy risks among the four main issues facing policy-making around AI. The other three risks are opaque decision-making, discriminatory decision-making and criminal application.
AI uptake is expected to transform governance, creating a conspicuous contrast with governance systems that lack cutting-edge AI capacity. This leads the Commission to go so far as to note that a common European framework for AI policy is necessary to avoid “the fragmentation of the single market.”
The paper outlines a largely theoretical “European approach to excellence and trust,” emphasising the need for global competitiveness in AI innovation. It states, however, that “trustworthiness is a prerequisite for [AI] uptake.” For instance, safeguards on law enforcement’s expanded capacities due to AI technology are recommended, though these are not yet detailed. Much of this trust is purportedly to be garnered by taking a “human-centric approach” to AI application. This approach was explicated in “Communication on Building Trust in Human-Centric Artificial Intelligence,” a paper released by the Commission last year, in which privacy and data governance were among seven “key requirements that AI applications should respect.”
Concrete, technical policies for regulation are somewhat more elusive. Both papers reiterate the accuracy requirement for any datasets that AI may use as fuel for thought, that is, the necessity of data integrity. But the requirement for stored data to be accurate is already enforced by the General Data Protection Regulation (GDPR), a framework that will remain in force in the UK after Brexit thanks to the Data Protection Act 2018, and one that is being emulated across the world. Quite how the Commission’s value system of human-centric ethics will manifest in AI development remains unclear.
Where the white paper is most outspoken is on the perceived limitations of current EU legislation to regulate, or even conceptualise, AI. Changes to the legal concept of ‘safety’ prompted by AI risk and predictive analysis are anticipated; ambiguity over responsibility between economic agents in the supply chain may pose judicial quandaries; and there is even a chapter dedicated to the problem of AI indecipherability: if human officials cannot ascertain how an AI programme reached a decision, how can they know whether that decision was skewed by bias in a dataset? Human oversight of AI development is therefore recommended at each stage of the industrial chain.
Harry Smithson, 21st February 2020