Why Europe (and the rest of the world) needs a code of conduct for AI developers plus a Central Registry for algorithms
The rapid developments in artificial intelligence (AI) necessitate an ethical framework. The time has come, says Toon Borré, to set up a Central Registry for algorithms: a “legal deposit” where ordinary citizens can check whether an algorithm contains biases and preferences. An independent supervisory body will have to score the algorithms on their ethics and social impact, while developers will have to follow a code of conduct and guidelines, formalized by the taking of an oath comparable to the Hippocratic Oath for physicians.
Article by Toon Borré, Expert practice leader Data to Insight
This op-ed piece has also been published in Dutch on the websites of Datanews and Knack.be.
There is an emerging consensus on the need for an ethical framework for algorithms. Earlier this year, Mustafa Suleyman formulated what could be the moral imperative for AI developers: ‘It is the responsibility of those developing new technologies,’ he said, ‘to help address the effects of inequality, injustice and bias.’
Suleyman is co-founder and current Head of Applied AI of the Google-owned AI company DeepMind Technologies, which specialises in machine learning. He predicts that research into the ethics, safety and social impact of AI will be one of the most pressing areas of inquiry this year.
The European Union is now also coming up to speed. On June 14, the European Commission published the list of 52 members of the High-Level Expert Group which is to support the EU's AI strategy and policy. This group is to formulate policy recommendations on the ethical, legal and societal impact of the development of AI, and to set out ethical guidelines on fairness, safety, transparency, democracy and fundamental rights.
These guidelines are absolutely necessary, if only because technology is not neutral. Most algorithms contain a bias: the data used in machine learning may reflect the biases of the people who collect or process them. Algorithmic bias creates an information bubble and causes us, for instance, to pay too much for flight tickets or insurance; it also introduces exclusion mechanisms and confirms ethnic, social and gender prejudice.
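How bias in training data propagates into an algorithm's decisions can be illustrated with a minimal sketch. The hiring records and the "model" below are entirely hypothetical and deliberately simplistic (the model just learns the majority outcome per group), but far more complex machine-learning systems pick up the same correlations from their data:

```python
# Hypothetical sketch: a trivial "model" trained on biased historical
# hiring decisions faithfully reproduces that bias in its predictions.
from collections import defaultdict

# Hypothetical historical records reflecting biased human decisions.
training_data = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": True},
]

def fit(records):
    """Learn the majority hiring decision for each gender in the data."""
    counts = defaultdict(lambda: [0, 0])  # gender -> [hired, rejected]
    for r in records:
        counts[r["gender"]][0 if r["hired"] else 1] += 1
    return {g: hired >= rejected for g, (hired, rejected) in counts.items()}

model = fit(training_data)
# The model reproduces the historical prejudice it was trained on:
print(model["male"])    # True  -- men were mostly hired
print(model["female"])  # False -- women were mostly rejected
```

Nothing in the code mentions prejudice explicitly; the bias enters purely through the data, which is exactly why it is so easy to overlook.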
The sense of moral panic surrounding AI stems not only from the rapid developments in the field of robotics but also from rampant dystopian fantasies about robot wars and the domestication of man by frenzied malicious machines. Poor knowledge of AI and the associated fear of loss of control complete the circle.
Reality is not only a lot more prosaic, it is also more complex. Algorithms are omnipresent today, and many decisions are already being made by computers. Granted, some kind of human intervention remains necessary in most cases, but the ultimate goal of AI is to dispense with this human factor entirely.
Today, we use algorithms, for example, to set priorities for work on the power grid or to deploy staff optimally in organisations; financial institutions use algorithms to grant loans, and railway operators use them to determine which types of trains should run on which routes at what times.
Weapons of Math Destruction
However, algorithms can have undesirable effects, too. For example, you may be charged too much for an Uber taxi when your mobile battery is low. A Stanford University algorithm recognizes facial features relating to sexual orientation. People in the United States have been refused a job because of a bad credit score, which, once they were unemployed, worsened their credit score further and made it even harder for them to get a job.
Developments like these make it imperative to urgently discuss the ethical framework within which the development of artificial intelligence must take place.
Towards a Central Registry of Algorithms
The time is ripe for measures to be taken, preferably at an international or European level. The GDPR has demonstrated that this is feasible, and the launch of the High-Level AI expert group seems a first step in this direction.
To ensure the ethical character of algorithms, we need four things: 1) a Central Registry, 2) a deontological guideline, 3) an oath for AI developers, and 4) a supervisory body.
By analogy with, for example, the European Patent Office or the Belgian Legal Deposit for publications, data scientists and organisations which develop algorithms should be obliged to deposit all algorithms with a Central Registry. Algorithms must be screened, monitored and scored on their privacy impact by an audit body, with priority evaluation given to high-impact algorithms.
All algorithms that have an impact on individual citizens must be supplemented with information, in layman’s terms, on what exactly the algorithm does. Indeed, ordinary citizens must be able to verify which variables influence an algorithm, in the same way that they can, for example, check terms and conditions of sale.
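What such a registry entry might contain can be sketched as a simple record. Every field name below is illustrative only; no such standard exists yet, and the scoring and audit fields merely assume the supervisory body described in this piece:

```python
# Hypothetical sketch of one entry in a Central Registry for algorithms.
# All field names and values are illustrative assumptions.
registry_entry = {
    "algorithm_id": "BE-2018-000123",           # assigned by the registry
    "owner": "Example Insurance NV",            # depositing organisation
    "purpose_plain_language": (
        "Estimates the yearly premium for car insurance based on "
        "driving history, vehicle type and region."
    ),
    "input_variables": ["driving_history", "vehicle_type", "region"],
    "excluded_sensitive_variables": ["gender", "age", "ethnicity"],
    "ethics_score": None,        # to be assigned by the supervisory body
    "audit_status": "pending",
}

def citizen_summary(entry):
    """Return the plain-language information a citizen could consult."""
    return (f"{entry['purpose_plain_language']} "
            f"Variables used: {', '.join(entry['input_variables'])}.")

print(citizen_summary(registry_entry))
```

The point of the `citizen_summary` helper is that the layman-facing description and the list of influencing variables are first-class parts of the deposit, just as terms and conditions of sale are today.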
After all, algorithms are black boxes. Whether an AI has any ethically undesirable effects will often only become apparent over time. However, anyone writing an algorithm must be able to demonstrate that, already in the course of writing, they considered the possible sources and consequences of any bias potentially present in the statistical model.
We can safely say that, in order to guarantee an ethically desirable output, variables such as gender, age and ethnic characteristics should not be included in algorithms. Registering an algorithm offers the additional advantage that businesses wanting to use certain algorithms as input can immediately tell which preferences the system has, so that they can keep their own algorithms ethically sound.
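Keeping such variables out of an algorithm can be as simple as scrubbing them from the input records before they reach a model. The sketch below is a minimal illustration with made-up field names; note that scrubbing explicit variables does not remove proxies (a postcode, for instance, can still correlate with ethnicity), which is precisely why screening by an audit body remains necessary:

```python
# Minimal sketch: strip ethically sensitive variables from input
# records before they are fed to a model. Field names are examples.
SENSITIVE = {"gender", "age", "ethnicity"}

def scrub(record):
    """Return a copy of the record without sensitive variables."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}

applicant = {"income": 42000, "gender": "female",
             "age": 34, "postcode": "1000"}
print(scrub(applicant))  # {'income': 42000, 'postcode': '1000'}
```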
An ethical directive should clarify this issue. It can best be compared to ISO certification. Businesses, the government, data developers and, preferably, also a supervisory body can develop such a certification on a voluntary basis.
This debate fundamentally involves three parties: the data scientists who develop the algorithms, the organizations which commission them to do so, and the public authorities, which have to create the legislative framework within which algorithm development takes place in society. Each of these parties must face up to its responsibilities. The appointment of the EU's High-Level Expert group is already a first big step towards an international approach.
Analogous to the Hippocratic Oath for physicians, an oath for data developers and data scientists could be the keystone of it all. We could call it Asimov’s Oath, after the American science-fiction author who devised the Three Laws of Robotics, the first of which reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”