The Guardian Editorial
It is a familiar story. Extravagant claims are made on behalf of novel computerized tools. The public is told that this or that digital application or system is going to change the world for the better. Efficiencies will be unlocked and problems solved as human limitations are overcome by networked devices plugged into vast stores of data. Anyone who questions the narrative is a pessimist or, perhaps, a criminal.

This appears to be the logic behind arguments put forward on behalf of one such tool: live facial recognition technology. Law-abiding citizens have “nothing to fear” from the police’s reliance on mounted cameras, the British Minister of State for Policing and Crime, Sarah Jones, said last month after a high court challenge on human rights and privacy grounds failed. The AI-powered identification software, made by the Japanese company NEC, “only locates wanted people,” she added. Last year Jones described the technology as “the biggest breakthrough for catching criminals since DNA.” The Metropolitan Police Commissioner, Mark Rowley, is equally enthusiastic, and London’s mayor, Sadiq Khan, gave his blessing to a pilot scheme.

There is no doubt that policing is under pressure, despite sharp falls in homicides and knife crime. Shoplifting has recently risen across England and Wales, as have religious and racial hate crimes. It is not hard to see why, from the police’s point of view, the ability to match the faces of passersby against a database of suspects is appealing.

But the warnings carried in last weekend’s Guardian exclusive about weak oversight and misuse of these systems are a reminder of other priorities. The biometrics watchdog for England and Wales, Professor William Webster, and his Scottish counterpart, Brian Plastow, both believe that the Information Commissioner’s Office is not up to the job of monitoring this kind of data use, and that a new regulator and new rules are needed.
An audit of the Metropolitan Police’s use of facial recognition was postponed and has not been rescheduled. The UK government is reviewing the legal framework, so some updating is expected. The Home Office has acknowledged problems with racial bias after tests showed higher rates of false positive identifications for black and Asian faces. Yet with this technology already in widespread use by retailers as well as police forces, politicians are once again playing catch-up, looking for ways to right wrongs that have already happened.
What is needed urgently is an improved system of redress for people who have been misidentified, whether by police or by private security guards. Ministers must also directly address the claims of a whistleblower who said he knew of up to 15 instances of innocent people being added to watchlists maliciously by security employees with scores to settle. Racial bias in the software must also be shown to have been eliminated.

Practicalities aside, surveillance tools raise political questions about civil liberties, privacy, and state and corporate overreach. The rollout of these tools is a choice, not an inevitability, and there are alternatives. The policing minister may honestly believe that most people have nothing to fear from databases of biometric data by which individuals can be identified. That does not make it true. The pattern whereby technology outpaces attempts to keep track of its impact, defying democratic checks and balances, needs to be broken.