AI is being used to predict and stop criminals, but at what cost?
We’re now living in “Minority Report.” Japan’s National Police Agency is set to embark on a terrifying experiment: using advanced AI-powered security cameras to preempt major crimes. These AI-enhanced cameras will specialize in machine-learning pattern recognition across three distinct categories: behavior detection for spotting suspicious conduct, object detection for identifying weapons, and intrusion detection for safeguarding restricted areas.
This initiative is expected to roll out within this fiscal year—ending March 2024—after the shocking assassination of former Japanese Prime Minister Shinzo Abe and the subsequent attempted attack on current Prime Minister Fumio Kishida. Such high-profile crimes, often committed by so-called ‘lone offenders’—individuals disassociated from society—have prompted Japan’s police to explore crime-prevention strategies.
Proponents assert that the AI’s ‘behavior detection’ algorithm can learn to recognize patterns indicative of suspicious activity, such as repetitive, anxious glances. Previous attempts at AI-aided security have homed in on behaviors like restlessness and fidgeting, which may indicate unease or guilt. This is a worrying leap forward in what is possible for modern security agencies.
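To make the idea concrete, “behavior detection” of this kind can be imagined as simple pattern flagging over a stream of observed events. The following is a minimal, hypothetical sketch; the function name, the event format, and the thresholds are all invented for illustration and do not reflect any agency’s actual system:

```python
# Toy illustration: flag "repetitive glances" by counting glance events
# inside a trailing time window. Purely hypothetical; real systems use
# learned models, not hand-set rules like this.
from collections import deque

def flag_repetitive_glances(events, window_seconds=10.0, threshold=5):
    """Return the timestamps at which the number of 'glance' events
    within the trailing window reaches the threshold."""
    recent = deque()   # timestamps of glances still inside the window
    alerts = []
    for kind, t in events:
        if kind != "glance":
            continue
        recent.append(t)
        # Drop glances that have fallen out of the trailing window.
        while recent and t - recent[0] > window_seconds:
            recent.popleft()
        if len(recent) >= threshold:
            alerts.append(t)
    return alerts
```

Even this toy version shows the civil-liberties problem in miniature: whoever sets the window and the threshold decides how much ordinary nervousness counts as “suspicious.”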
The Chinese model goes global
From ubiquitous police cameras on street corners to online monitoring and censorship, the Chinese population is constantly under surveillance. Now, a new generation of technology is delving into the vast pool of data gathered from daily activities, aiming to predict crimes and protests before they occur. However, these predictive systems aren’t just targeting those with a criminal record; they are also used to identify vulnerable groups, including ethnic minorities and individuals with a history of mental illness.
This cutting-edge technology relies on algorithms that sift through data, seeking patterns and deviations that could indicate potential threats. While these algorithms are anathema to many in the West, they’re being heralded as triumphs in China.
Reports detail instances where the technology flagged suspicious behavior, leading to investigations and uncovering fraud and pyramid schemes. However, these technologies extend far beyond surveillance. They are a powerful weapon for a society seeking to maintain near-total social control over the populace.
China’s focus on maintaining social stability is unwavering, and any perceived threat to it is aggressively silenced. Under the regime of President Xi Jinping, the security state has become more centralized, deploying technology to quell unrest, enforce strict COVID-19 lockdowns, and curb dissent. Unfortunately, China appears to be the model for leaders like Justin Trudeau and others.
Policing the homeland
Described as a groundbreaking innovation by TIME Magazine in 2011, predictive policing has been quietly rolled out across the US. Numerous police departments in the country are experimenting with predictive software, envisioning a future where law enforcement could foresee and thwart crimes before they unfold. Developers tout this technology as a means to eliminate human bias, enhance the precision of policing, and optimize resource allocation.
This approach gained momentum with substantial federal grants directed toward smart policing solutions. The LAPD, led by Police Chief William Bratton, spearheaded one of the initial trials in 2009 with $3 million in federal funding. The goal was to predict crime-prone areas and deploy officers preemptively to deter criminal activity. The involvement of respected figures like Bratton lent credibility to the technology, leading to its adoption by other departments nationwide. By 2014, a survey of 200 departments found that 38% were using predictive policing and 70% planned to implement it in the coming years.
Using data to identify high-crime areas and deploy more resources there is a reasonable application. However, with the rapid advancement of AI technology and the universal tracking of our devices, how long before the regime turns this pre-crime tech loose on the populace? We’re going to have to grapple with these questions, because AI is quickly reaching “Black Mirror” horror levels of surveillance.