
Facial Recognition Software And Why We Shouldn’t Blindly Trust It

  • Francesca Howard
  • Jun 10
  • 4 min read

A basic illustration visualizing how facial recognition works.


Once a futuristic concept, facial recognition technology has revolutionized the way law enforcement identifies and apprehends criminals. This state-of-the-art technology works by scanning a person’s face. Details like the distance between the eyes, the shape of the nose, and the angles that form the jawline are converted into a distinct mathematical code. This “faceprint” can then be compared against millions of images in databases, including mugshots, driver’s license photos, passport images, and even pictures taken from social media pages.
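
To make the idea concrete, here is a minimal Python sketch of a “faceprint” comparison. Every measurement, name, and database entry below is invented for illustration; real systems use deep neural networks to derive far richer numerical codes, but the basic idea of turning a face into a vector and comparing distances is the same.

```python
import numpy as np

# Hypothetical "faceprint": a handful of facial measurements packed
# into a vector. Real systems produce codes with hundreds of values,
# but the principle is the same.
def make_faceprint(eye_distance, nose_width, jaw_angle):
    return np.array([eye_distance, nose_width, jaw_angle], dtype=float)

# A toy database standing in for millions of mugshots, license
# photos, and scraped images. All values here are invented.
database = {
    "person_a": make_faceprint(62.0, 34.0, 118.0),
    "person_b": make_faceprint(58.5, 31.2, 124.5),
}

def closest_match(probe):
    # Return the stored identity whose faceprint is nearest to the
    # probe, measured by Euclidean distance.
    return min(database, key=lambda name: np.linalg.norm(database[name] - probe))

probe = make_faceprint(62.3, 33.8, 117.5)
print(closest_match(probe))  # -> person_a
```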


There are two main types of facial recognition technology. One-to-one matching is used to confirm identity, as when someone goes through airport security and the system compares their face to the photo on their passport. The second kind is one-to-many matching. This is more commonly used in police investigations: officers input an image from a surveillance camera or cellphone, and the software searches through extensive databases to spit back a list of possible matches. Agencies across America, and around the world, are adopting both approaches to expedite suspect identification and close cases more quickly than ever before.
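
Here is a rough sketch of how the two modes differ under the hood, again with made-up faceprints and a purely illustrative matching threshold:

```python
import numpy as np

THRESHOLD = 0.6  # illustrative cutoff; real systems tune this carefully

# Toy gallery of stored faceprints (all values are arbitrary).
gallery = {
    "person_a": np.array([0.12, 0.85, 0.33]),
    "person_b": np.array([0.90, 0.10, 0.44]),
    "person_c": np.array([0.15, 0.80, 0.30]),
}

def verify(probe, enrolled):
    # One-to-one: does the probe match a single claimed identity,
    # like comparing a traveler's face to their passport photo?
    return np.linalg.norm(probe - enrolled) < THRESHOLD

def identify(probe, k=2):
    # One-to-many: rank the whole database by similarity and return
    # the k closest candidates, the way investigative software
    # returns a list of possible matches.
    return sorted(gallery, key=lambda name: np.linalg.norm(gallery[name] - probe))[:k]

probe = np.array([0.14, 0.82, 0.31])
print(verify(probe, gallery["person_a"]))  # True: within threshold
print(identify(probe))                     # ['person_c', 'person_a']
```

Notice that in the one-to-many case, the software returns only a ranked candidate list; nothing in the math distinguishes a true match from a lookalike. That is why the results should be treated as leads, not evidence.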


Although facial recognition may sound like an efficient method of identification in theory, law enforcement agencies that rely on it often run into problems in practice. When agencies treat a facial recognition match as hard evidence rather than a lead to investigate, the result can be misidentification and wrongful arrest. In 2020, for example, a Black man from Michigan was arrested for stealing watches from a store in Detroit. The only piece of evidence taken into account was a blurry image that the software had matched to his driver’s license photo. He was detained for 30 hours before the police admitted the software had made a mistake. Even though his alibi was solid and he was later exonerated, his arrest exemplified just how faulty this novel, experimental technology can be.


Studies have consistently shown that facial recognition systems are far more likely to make mistakes when identifying people of color. In 2018, researchers at the MIT Media Lab found that facial recognition software misidentified darker-skinned women at a much higher rate than lighter-skinned men: the error rate for darker-skinned women was over 34 percent, while for lighter-skinned men it was less than 1 percent. Similarly, a 2019 report from the National Institute of Standards and Technology confirmed that these systems were up to 100 times more likely to produce false positives for Asian and Black people than for white people.
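
To see what a gap in false positive rates means in practice, here is a small worked example. The comparison counts below are invented purely for illustration; they are not the MIT or NIST figures.

```python
# Each pair is (false positives, true negatives) out of 10,000
# "impostor" comparisons, i.e., photos of two different people.
# The numbers are made up to show what a rate disparity looks like.
groups = {
    "group_x": (5, 9995),
    "group_y": (480, 9520),
}

for name, (fp, tn) in groups.items():
    fpr = fp / (fp + tn)  # false positive rate: wrongly declared matches
    print(f"{name}: {fpr:.2%} of impostor pairs flagged as the same person")
# group_y's rate (4.80%) is 96 times group_x's (0.05%): a gap of the
# magnitude NIST documented between demographic groups.
```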


Although some of these discrepancies can be attributed to external factors, such as lighting and camera quality, there is a more fundamental problem with the way the technology is designed and developed. Many facial recognition systems are trained on datasets that disproportionately feature white faces, so the algorithm learns to recognize white features more accurately than those of the groups it sees less often. Much of the time, this algorithmic bias isn’t adequately addressed because developers fail to test their systems on a wide range of racial and ethnic groups before launching their tools.
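
The kind of pre-launch audit this paragraph says is often skipped can be sketched in a few lines: measure accuracy separately for each demographic group instead of reporting one flattering overall number. The test results below are hypothetical.

```python
from collections import defaultdict

# Hypothetical test-set results: (group, predicted identity, true identity).
test_results = [
    ("group_x", "alice", "alice"),
    ("group_x", "bob",   "bob"),
    ("group_y", "carol", "dana"),   # a misidentification
    ("group_y", "erin",  "erin"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in test_results:
    total[group] += 1
    correct[group] += (predicted == actual)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy")
# Output: group_x: 100% accuracy, group_y: 50% accuracy.
# A single overall figure (75%) would hide exactly the gap that
# per-group testing exposes.
```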


Nevertheless, law enforcement agencies continue to integrate the technology into their investigative practices. The FBI uses facial recognition regularly and reportedly has access to over 400 million images. In New York City alone, the police department turned to facial recognition software in more than 22,000 cases between 2017 and 2022. Many departments also use third-party AI software that scrapes billions of images from the internet, allowing officers to identify people from a single photo, even if they have no prior criminal record.


The lack of transparency around how facial recognition is used makes the problem far more dangerous. In many cases, neither the suspect nor their defense attorney is told that facial recognition was used to identify them. And even when suspects are informed, the companies that create this software tend to keep their algorithms secret, claiming them as “intellectual property.” This prevents defendants from challenging the reliability of the software in court. Unlike DNA evidence or fingerprint analysis, facial recognition operates in a way that those who aren’t tech whizzes can’t fully scrutinize.


As of now, there is no federal law in the U.S. regulating how law enforcement can use facial recognition. While some cities, like San Francisco and Boston, have banned public agencies from using it altogether, most of the country still leaves it up to each agency’s discretion. This ambiguous and lax approach to oversight is dangerous, not just because of the numerous errors this technology makes but also because the public has a limited understanding of how it works. When an algorithm points to someone as a suspect, that person can easily wind up in handcuffs, especially when no other evidence is sought to corroborate the match.


The ubiquity of facial recognition technology signifies a willing tradeoff of fairness for efficiency. While it can speed up the judicial process, facial recognition introduces new threats to civil rights that demand our attention. For the system to work justly, we must sometimes be willing to do things a little more meticulously, even if that takes more time and resources.



