Organisations must be accountable for algorithms they use

20 May 19

Key decisions are increasingly determined by computers but there is a growing sense of unease about the transparency of the algorithms that power them, says John Thornton.  


The US Department of Housing and Urban Development recently charged Facebook with violating the US Fair Housing Act.

It claimed that Facebook let landlords and home sellers discriminate through advertising that excluded people based on race, national origin, religion, gender or disability. According to the lawsuit, Facebook allowed advertisers to exclude, for example, people born outside the US, non-Christians, people interested in Latino culture, or people living in certain neighbourhoods.

As we move into a world where key decisions about access to services, availability of credit and even visibility of opportunities are being increasingly determined by computers and artificial intelligence, there is a developing sense of unease about the transparency of the algorithms that make these decisions.

For AI to be accepted, it is essential that citizens have trust in the systems used and feel that they can challenge and question the underlying algorithms. This is, of course, a particularly pertinent issue for public sector organisations around the world as they invest in AI to improve policymaking and service delivery.

‘Surely there is human oversight to sanity-check the decisions,’ I hear you wonder. But these are usually complex systems that build on previous iterations and draw on data that may have deficiencies.

In March 2018, in Tempe, Arizona, Elaine Herzberg became the first recorded pedestrian killed by a self-driving car. On a dark night, an Uber test vehicle had been travelling autonomously for 19 minutes when it struggled to make sense of a woman pushing a bike with shopping bags hanging from the handlebars.

Just 1.3 seconds before impact, the computer handed back control to an allegedly distracted safety driver, who failed to stop in time. It is easy to overestimate the ability of oversight functions to understand what is going on and take swift corrective action.


US lawmakers have now drafted a bill that would require tech firms to audit algorithms before deployment. The Algorithmic Accountability Act would create guidelines for assessing automated systems.

Companies would then be required to evaluate whether the algorithms powering their systems are discriminatory or biased, or pose a security or privacy risk to consumers. The act would apply only to companies with a turnover of more than $50m, to those holding data on more than one million people or devices, and to brokers that buy and sell consumer data.
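What might such an evaluation look like in practice? A common starting point is a disparate-impact check: compare the rate of favourable outcomes across groups and flag large gaps for review. The Python sketch below is a minimal illustration under assumed inputs; the decision log, the column names and the 80% threshold (an echo of the ‘four-fifths rule’ used in US employment law) are examples for this article, not requirements of the act.

```python
# Minimal sketch of a disparate-impact check on an algorithm's decision
# log. The data layout is assumed: one row per applicant, recording the
# group they belong to and the outcome (1 = approved, 0 = declined).
from collections import defaultdict

decisions = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for row in decisions:
    totals[row["group"]] += 1
    approvals[row["group"]] += row["approved"]

# Approval rate per group, then the ratio of the worst-off group's
# rate to the best-off group's rate.
rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

# The 0.8 cut-off mirrors the 'four-fifths rule' from US employment
# law; it is an illustrative convention, not part of the draft act.
print("approval rates:", rates)
print(f"disparate-impact ratio: {ratio:.2f}",
      "- flag for review" if ratio < 0.8 else "- within threshold")
```

A real assessment under the act would cover far more, including training data, privacy and security, but the core question is the same: do outcomes differ systematically across the groups the law protects?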

In the UK, the Science and Technology Committee published a report last year on the use of algorithms in decision-making, in which it urged the government “to play its part in the algorithms revolution”. It recommended:

  • Introducing a legally enforceable ‘right to explanation’, allowing citizens to challenge algorithm decisions that affect them (a simple illustration follows this list).
  • Requiring central government to publish details of where it is using algorithms with significant impact.
  • Making public sector datasets available for algorithm and ‘big data’ developers, through new data trusts.
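To make the first recommendation concrete: for a simple linear scoring model, a ‘right to explanation’ can be honoured by itemising each factor's contribution to the decision. The sketch below is hypothetical; the features, weights and threshold are invented for illustration and do not describe any real system.

```python
# Hypothetical 'right to explanation' for a linear scoring model:
# because the score is a weighted sum, each factor's contribution
# (weight * value) can be itemised for the person affected.
weights = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -1.5}
threshold = 1.0  # invented cut-off: scores at or above it are approved

applicant = {"income": 2.1, "years_at_address": 3.0, "missed_payments": 1.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print("decision:", "approved" if score >= threshold else "declined",
      f"(score {score:.2f}, threshold {threshold})")
# List factors from most harmful to most helpful to the applicant.
for factor, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {factor}: {value:+.2f}")
```

Few deployed models are this transparent, which is the point of the recommendation: for more complex systems, the same itemised view has to be produced by explanation tooling rather than read straight off the weights.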

AI offers huge opportunities – we now need to focus on getting it right and ensuring that our citizens have confidence in what it delivers for them.
