It is often argued that the UK is the most surveilled country on the planet. Whether or not that was ever strictly true, there are certainly now millions of surveillance cameras in its public spaces, not to mention private buildings and homes. Behind those lenses, the cameras themselves are changing in ways that people are barely aware of, with privacy implications that urgently deserve public debate.
Automatic face recognition is currently the hot ticket in this industry, having been introduced in cities across the US, China, Germany and Singapore. Police forces argue that piloting such systems allows them to test technology that could help identify potential terrorists and other known offenders. Yet this has to be weighed against several concerns. The broadest is our expectation of privacy and anonymity in public places, and whether such systems are a step too far towards our every move being visible to the state.
Then there is the question of how well these face recognition systems actually work. Their success rate at recognising faces has been shown to be as low as 2%. Linked to this is an inbuilt bias in the software that makes the technology far less accurate at identifying darker-skinned people and women. It therefore has the potential to exacerbate tensions between ethnic minorities and the police.
This could be compounded by another contentious issue: police use of so-called “watch list” databases of faces against which live images are matched. Typically these databases include custody images of people who may never have been convicted of a crime and are unlikely to have consented to their data being used in this way.