Automated police tech contributes to UK structural racism problem

The use of artificial intelligence (AI) and facial-recognition technologies in policing is contributing to a “worrying rowback” in the civil and political rights of people of colour in the UK, according to the Runnymede Trust and Amnesty International.

In their joint submission to the United Nations’ (UN) Committee on the Elimination of Racial Discrimination – which independently monitors states’ efforts to eradicate racism and promote human rights – the civil society groups outlined how a combination of legislation, institutional practices and society’s customs continue to harm people of colour in the UK.

“The submission shows the ways in which disparities facing people of colour across the criminal justice system, health, education, employment and immigration have [been] sustained since the previous reporting period four years ago,” they said in a press release.

“Failure to improve outcomes for people of colour, whilst attacking the ways in which these communities can dissent, leads to an impossible situation for them. Targeted by Prevent [counter-terrorism programme], restricted avenues to protest, over-policed and under-protected, and subject to higher rates of poverty – communities of colour are having to pave the cracks blighted on them by the state.”

Endorsed by more than 40 other civil society organisations – including Liberty, Black Equity Organisation, Friends, Families and Travellers, Migrants Organise, and Inquest – the 50-page report contains a section on the impacts of AI and automation in policing on people of colour in the UK.

It noted, for example, that despite the propensity of live facial-recognition (LFR) technology to misidentify people of colour, the Home Office has previously affirmed the right of police forces to use it under existing legal frameworks, and that use of the technology is generally ramping up.

“According to logs published by the Metropolitan Police, LFR was deployed on nine occasions between 2020 and 2022, resulting in nine arrests or disposals. This increased markedly to 96 occasions between 2023 and May 2024, resulting in 243 arrests,” it said.

“Liberty Investigates revealed that the Home Office had secretly conducted hundreds of facial-recognition searches using its passport photo database and the immigration database, raising further questions about lack of transparency and scope of data usage for facial recognition.”

It noted that the use of automated systems such as predictive policing and automatic number plate recognition (ANPR) by police can result in human rights violations and fatalities, highlighting the fact that the car being driven by 23-year-old Chris Kaba – who was fatally shot in the head by an armed police officer in September 2022 – was flagged by its registration plate through ANPR before being intercepted.

Although police said at the time that Kaba’s vehicle was “linked to a firearms offence in the previous days”, the car was not registered to him and no firearms were ultimately found inside. The armed officer involved has since been charged with murder and is set to face trial in October 2024.

The report further highlighted the discriminatory outcomes of the Met’s Gangs Matrix database, which resulted in people of colour (predominantly young Black boys and men, in this case) being racially profiled for the music they listen to, their behaviour on social media, or who their friends are.

It added that while the database has been scrapped after being condemned as racist, concerns remain about what will replace it.

In their recommendations on police AI, the civil society groups said the UK government should prohibit all forms of predictive and profiling systems in law enforcement and criminal justice (including systems that focus on and target individuals, groups, and locations or areas); provide public transparency and oversight when police or migration and national security agencies use “high-risk” AI; and impose legal limits to prohibit uses of AI that present an unacceptable risk to human rights.

They added that the UK government should commence an inquiry into all police gang databases, with a view to examining the need for more extensive reform at national level. This should consider whether the databases being used by forces across the country are an effective policing tool in dealing with serious violent crime; whether they operate in full compliance with human rights and data protection legislation; and whether they are influenced by racial bias or lead to discriminatory outcomes.

Computer Weekly contacted the Home Office about the report’s recommendations, and about whether, under the new administration, it still holds the view that facial-recognition technology is adequately covered by existing legislation.

“In the past week, our towns and cities have witnessed appalling violence, with individuals targeted for their skin colour and places of worship attacked. This racism and hatred have caused widespread distress,” said a spokesperson. “We are determined that neither street thugs nor online instigators will define our nation. Our strength lies in unity across all backgrounds, faiths and cultures.”

At the end of July 2024, a coalition of 17 human rights-focused civil society groups similarly called on the new Labour government to place an outright ban on AI-powered predictive policing and biometric surveillance systems, on the basis that they are disproportionately used to target racialised, working-class and migrant communities.

“AI and automated systems have been proven to magnify discrimination and inequality in policing,” said Sara Chitseko, pre-crime programme manager for Open Rights Group at the time. “Without strong regulation, police will continue to use AI systems which infringe our rights and exacerbate structural power imbalances, while big tech companies profit.”

The Home Office is considering the findings of both reports.

Ongoing police tech concerns

In November 2023, the outgoing biometrics and surveillance camera commissioner for England and Wales, Fraser Sampson, questioned the crime prevention capabilities of facial recognition, arguing that the authorities were largely relying on its chilling effect, rather than its actual effectiveness in identifying wanted individuals.

He also warned of generally poor oversight of the use of biometric technologies by police, adding that there is a real danger of the UK slipping into an “all-encompassing” surveillance state if concerns about these powerful technologies are not heeded.

Sampson also previously warned in February 2023 about UK policing’s general “culture of retention” around biometric data, telling Parliament’s Joint Committee on Human Rights (JCHR) that the default among police forces was to hold onto biometric information, regardless of whether doing so was legally permitted.

He specifically highlighted the ongoing and unlawful retention of millions of custody images of people never charged with a crime, noting that although the High Court ruled in 2012 that these images must be deleted, the Home Office – which owns most of the biometric databases used by UK police – has said this cannot be done because the database they are held on has no bulk deletion capability.

A prior House of Lords inquiry into UK policing’s use of advanced algorithmic technologies – which explored the use of facial recognition and various crime prediction tools – also found in March 2022 that these tools pose “a real and current risk to human rights and to the rule of law. Unless this is acknowledged and addressed, the potential benefits of using advanced technologies may be outweighed by the harm that will occur and the distrust it will create.”

In the case of “predictive policing” technologies, Lords noted their tendency to produce a “vicious circle” and “entrench pre-existing patterns of discrimination” because they direct police patrols to low-income, already over-policed areas based on historic arrest data.

On facial recognition, they added that it could have a chilling effect on protest, undermine privacy and lead to discriminatory outcomes.

After a short follow-up investigation looking specifically at facial recognition, Lords noted that UK police were expanding their use of facial-recognition technology without proper scrutiny or accountability, despite lacking a clear legal basis for their deployments. They also found there were no rigorous standards or systems of regulation in place to control forces’ use of the technology.
