Ethical and Governance Challenges of AI

August 21, 2019

Dr Jennifer Cobbe, Coordinator of the Trust & Technology Initiative at the University of Cambridge, joined the Digital Leadership Forum in July at our first AI for Good conference, “Leading your organisation to responsible AI”. Cobbe delivered a thought-provoking presentation, encouraging us to question how we perceive AI technology and its regulation. Here’s what we learnt:

1. It’s AI, Not Magic

While there is a tendency to make exaggerated claims about what artificial intelligence can actually do, we’re not quite at Skynet capabilities yet. Most current AI uses Machine Learning: essentially, statistical models that are trained to spot patterns and correlations in datasets and then make predictions based on them. A Machine Learning model is only ever trained to operate within what its trainers consider an acceptable margin of error. “It’s only ever going to be an approximation of the best result,” Cobbe said, arguing that AI is best suited to prediction and classification tasks; anything more complex may be too much for it at the moment.
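
To make that concrete, here is a minimal sketch, not from Cobbe’s talk, of what “trained to an acceptable margin of error” looks like in practice. It uses Python and scikit-learn on an invented dataset, and the 15% error threshold is our illustrative assumption: the model spots statistical patterns, and its trainers, not the model, decide whether its error rate is acceptable.

```python
# Illustrative sketch only: a statistical model judged against a
# margin of error chosen by its trainers, not by the model itself.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the pattern-spotter and measure how often its predictions are right.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

ACCEPTABLE_ERROR = 0.15  # a human judgement call, assumed for illustration
print(f"Test accuracy: {accuracy:.2%}")
print("Within margin" if (1 - accuracy) <= ACCEPTABLE_ERROR else "Outside margin")
```

The model only ever approximates the best result; the judgement about what counts as good enough stays with the people who built it.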

2. New Technology Is Not the Wild West

We often think of technology as a largely unregulated new frontier, with the law lagging far behind its bold strides, but this assumption is incorrect. Cobbe explained that existing laws already apply straightforwardly to AI, including data protection law, non-discrimination law, employment law, and a range of sector-specific regulations.

3. Our AI Is Only As Ethical As We Are

“Technology isn’t neutral,” Cobbe reminded us. “If your business model isn’t ethical, if your practices aren’t ethical, if what you’re doing with AI isn’t ethical, then your AI cannot be ethical.” Fortunately, the process of introducing AI to your organisation gives you an opportunity to actively confront and address any existing issues.

4. Regulation Can Make Us More Creative

“We should also acknowledge that advances in the law lead to advances in technology,” Cobbe said, highlighting the example of the GDPR, which encouraged the development of new Privacy Enhancing Technologies. We should welcome new regulations because the need to work within them inspires creative solutions. “The need for AI systems to be legally compliant means that designers and engineers are often tasked with finding novel ways to do what the law needs,” Cobbe said.
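
As a flavour of what a Privacy Enhancing Technology can look like, here is a minimal sketch, our illustration rather than anything Cobbe presented, of differentially private counting: an aggregate question is answered with a little calibrated noise so that no individual’s record can be inferred from the result. The epsilon value and the opt-in data are assumptions for the example.

```python
# Illustrative sketch of one privacy-enhancing technique:
# answering a counting query with epsilon-differential privacy.
import numpy as np

rng = np.random.default_rng(seed=42)

def private_count(records: list, epsilon: float = 1.0) -> float:
    """Return a noisy count of truthy records.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for the answer.
    """
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return sum(records) + noise

# e.g. "how many users opted in?" answered without exposing any individual.
opted_in = [True, False, True, True, False, True]
print(f"Noisy count: {private_count(opted_in):.1f}")
```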

5. Beware of Bias

Bias manifests in many forms in artificial intelligence. Sometimes designers encode their own biases and assumptions simply by choosing which data to include (and which to exclude). Machine Learning also depends on historical datasets, which reflect society’s existing biases and discriminatory practices. “By using historical data we do run the risk of essentially encoding the past into the future,” Cobbe said, encouraging organisations to actively guard against this.

In particular, when AI is used for classification there is a risk that it will discriminate against protected groups, as in the example of Amazon’s AI recruiting tool, which learned from historical hiring data to penalise CVs that mentioned the word “women’s”. As we’ve already seen, non-discrimination laws apply straightforwardly to AI, and so companies face serious legal consequences for any discriminatory decisions their AI makes.
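
One basic way to start guarding against this, a sketch of our own rather than a method Cobbe prescribed, is to audit selection decisions for disparate impact across groups. The example below applies the “four-fifths rule” heuristic from US employment practice to invented hiring data; the group labels and the 0.8 threshold are illustrative assumptions.

```python
# Illustrative bias check: compare selection rates across groups and
# flag a disparate-impact ratio below the four-fifths (0.8) heuristic.
from collections import defaultdict

def selection_rates(decisions):
    """Map each group to the fraction of its members who were selected."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {group: selected[group] / totals[group] for group in totals}

def impact_ratio(rates):
    """Lowest selection rate divided by the highest; below 0.8 is a red flag."""
    return min(rates.values()) / max(rates.values())

# Invented decisions: (group, was_selected)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
ratio = impact_ratio(rates)
print(rates)
print(f"Impact ratio: {ratio:.2f}" + ("  <- potential disparate impact" if ratio < 0.8 else ""))
```

A check like this is only a first pass: passing the ratio does not make a system fair, but failing it is a clear signal to investigate before anyone is harmed.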

6. Humans Might Actually Be Better

AI might not always be the most appropriate solution for your organisation. “If you’re using AI within your organisation then you really should be asking yourself whether you’re comfortable relying on a system which probably can’t tell you why it made a decision or why it reached a particular outcome,” Cobbe said. Technical solutions are often framed as the answer to socioeconomic and other non-technical problems, but this isn’t always the case. If a task involves qualitative data, a human will probably be a more efficient and ethical evaluator.

“While the real world is a messy, complicated thing, AI will inevitably flatten nuances and gloss over complexities,” Cobbe warned, explaining: “It relies on data that attempts to quantify a world that is often qualitative in nature and provides outputs that are overly simplistic sometimes, or even just downright misleading.”

7. Hire More Social Scientists

We tend to assume that only people from STEM backgrounds need to be involved in artificial intelligence development, but Cobbe warns that this is a mistake. “We really need social scientists,” she said, as they are far more attuned to the existing power dynamics and biases in society and can help organisations to address them.

8. Good Regulation Should Stifle Bad Ideas

Not all new ideas are good ideas, Cobbe argued, and we should welcome the regulation of AI: relying on ethics and self-regulation alone has proven insufficient. We now need regulation as a baseline to protect society and to prevent unethical projects from prospering at the expense of ethical businesses. “Without legal intervention there’s a real danger that irresponsible AI becomes the defining feature of AI in the public’s imagination,” she warned.

9. The Buck Stops At You

Ultimately, it is your obligation as an organisation to ensure that you are using AI responsibly, both legally and morally. Organisations should also stay informed of emerging ethical issues. Cobbe highlighted the research work being done by Doteveryone, a London-based think tank, as a useful resource for organisations.

So what if your technology falls short of the legal and ethical requirements? Well, Dr Cobbe has an easy solution: “If a technology can’t do what the law requires, perhaps even if a technology can’t do what ethics requires, then the answer is simple: don’t use that technology.”

You can watch Dr Jennifer Cobbe’s full presentation below:

AI for Good

AI for Good – in partnership with Dell Technologies – is a programme of dedicated learning and development events designed to enable members of the Digital Leadership Forum to innovate responsibly with new AI technologies.