Canadian police partner with AI in arms race against criminals. But at what cost?

Mar 28, 2025 | 2:02 AM

VANCOUVER — In one corner of the battle are criminals using artificial intelligence to generate child sexual abuse material — and in the other, AI is being used to help hunt down the offenders.

The rise of artificial intelligence has kicked off an arms race between perpetrators and police in Canada.

The RCMP’s National Child Exploitation Coordination Centre describes in the force’s official “Gazette” magazine how the technology is being turned on its head in the search for AI-generated abuse material, which can be stitched together from existing images and video.

Traditionally, the disturbing and laborious task of scouring a suspect’s computer hard drives has been done manually, taking a toll on officers, says Cpl. Philippe Gravel, an investigator with the centre.

But with AI, it can be done faster, and with less exposure for the officers.

“Some of the investigators and specialists on the unit are parents — for some, that’s why they do this job — so, the less time they can spend looking at this stuff, the better for their mental health,” says Gravel in the article.

It’s a case study in how AI technology can be used by police to rapidly sift large volumes of evidence — surveillance footage, criminal records, or hard drives — to identify patterns and potential threats.

Police in Ontario are already using AI facial-recognition tools to compare mug shots with images of suspects caught on video. In New Brunswick, police reports are being drafted from body camera recordings. The RCMP is seeking an AI tool to crack into encrypted phones. And in Alberta, the Edmonton Police Service is working on two in-house AI tools: one for transcribing audio recordings and another for records management to link cases and investigations.

But ethicists warn that use of AI in investigations is not free of bias and has the potential to violate human rights.

Anais Bussières McNicoll, a lawyer with the Canadian Civil Liberties Association, said the organization takes issue with some AI-assisted predictive policing techniques, in which data is analyzed to forecast where and when crime might occur, as well as with facial recognition technologies, or FRTs.

She said it raises privacy concerns because “a person’s facial biometric data is permanently and irrevocably linked to their identity.”

“These privacy issues are exacerbated when (facial recognition technology) is used without the person’s consent or active involvement or even knowledge, which is often the case when it comes to law enforcement,” Bussières McNicoll said in an interview.

“FRTs by law enforcement risk stripping people of their anonymity and essentially reduce them to a walking license plate.”

‘JUST AN INVESTIGATIVE TOOL’

Daniel Escott, a research fellow at the Access to Justice Centre for Excellence and its Artificial Intelligence Risk and Regulation Lab in Victoria, said he has spoken to law enforcement agencies across the country, and all are “very optimistic about the impact of AI.”

“Especially in taking the tools they already use and making it a little bit better — stuff like facial recognition, surveillance systems, analyzing digital evidence,” he said.

Escott noted that artificial intelligence is “fantastic” at pattern recognition and statistical analysis, but its use requires nuance, particularly when it comes to predictive policing.

Tools such as facial recognition and biometric surveillance, and algorithms to help inform bail, sentencing or parole decisions, have been criticized by human rights groups for their impact on racialized and low-income communities.

Last May, York Regional Police and Peel Regional Police in Ontario jointly launched a digital system “for storing, searching and comparing lawfully obtained crime scene images of suspects” with mug shots.

In a website post, York Police say the goal is to increase the investigative capacity of both agencies “by increasing the size of the database of images.”

It says it will reduce time spent manually comparing mug shots, and the technology “protects the integrity of the investigative process as it is not susceptible to the biases that exist in eyewitness accounts.”

A spokesman for the agency said that a facial match alone would not be grounds for an arrest.

“It’s just an investigative tool,” Const. James Dickson said.

“It’d be the same as a photo lineup or sketch where this might be something that points you in the right direction to someone, but isn’t evidence on its own merit.”

AI tools are also being employed in more mundane and non-investigative aspects of police work.

In Fredericton, police are using generative AI to draft reports based on the audio recorded on body-worn cameras during so-called non-charge files, such as missing-persons cases, public-assistance calls and medical calls.

The pilot project by the Fredericton Police Force uses technology from American company Axon Enterprise to save officers’ time. It still requires members to review and sign off on the reports to “ensure accuracy and adherence to reporting standards.”

The initial stage of the project took place in the fall with a group of 11 officers, but it has since been deployed to all uniform members who wear body cameras. It is expected to conclude in December.

“We believe that integrating modern technology into our daily operations not only boosts our effectiveness but also enhances transparency and accountability,” the Fredericton Police Force said in a statement.

But efforts to integrate AI into policing haven’t always gone smoothly.

In 2021, the Office of the Privacy Commissioner of Canada found the RCMP violated privacy laws when it used facial recognition software supplied by U.S. company Clearview AI.

Clearview AI had created a databank of more than three billion images scraped from the internet without users’ consent.

Toronto police admitted that same year that some of its officers used the same facial recognition software without informing their chief.

Daniel Therrien, who was then Canada’s privacy commissioner, said at the time that it was “just another example of how public-private partnerships and contracting relationships involving digital technologies are creating new complexities and risks for privacy.”

In response, the RCMP and the Toronto Police Service established internal assessment and review procedures for the use of artificial intelligence.

York Police, meanwhile, note on their website that their new digital tool does not involve Clearview AI.

Renee Sieber, an artificial intelligence ethicist and associate professor at McGill University, said her biggest concerns about AI and policing revolve around bias.

That means “bias from false arrests from facial recognition technology” and the “historical bias of where police have patrolled and the arrests they’ve made,” she said.

“We always need to ask our police to be fair wherever possible,” she said.

She pointed to AI-enabled gunshot-detection technology called ShotSpotter, which is deployed in more than 140 U.S. cities, as an example of how technology can “amplify bias.”

ShotSpotter’s AI algorithms and a network of microphones identify and locate gunfire in real time. But its efficacy and fairness have been questioned.

Brooklyn Defender Services, a public defence office, released a report in December saying 99 per cent of ShotSpotter alerts to New York Police failed to result in guns being recovered.

“Neighborhoods with predominantly Black residents are 3.5 times more likely to have an officer deployed based on an unconfirmed alert than a neighbourhood with predominantly white residents,” the report said.

Sieber and others want legislative oversight of police use of AI. The federal government’s Artificial Intelligence and Data Act was tabled in 2022, but died when Parliament was prorogued on Jan. 6.

Bussières McNicoll said the civil liberties association was “definitely not satisfied” with the federal government’s attempts at AI legislation, and a main concern was that it only applied to the private sector.

“We recommended to the committee tasked with studying this bill that it should also regulate national security and public safety bodies that use AI in their everyday operation,” she said.

She also called for more transparency around the use of artificial intelligence by law enforcement and government bodies, as well as an independent regulator and a formal reporting mechanism.

Among the provinces, Ontario is the first to adopt legislation specifically related to government agencies using AI.

But the Law Commission of Ontario says the bill lacks detail, fails to protect human rights beyond privacy and “leaves it unclear as to the extent to which police and policing, courts and tribunals will be or can be included in proposed regulations.”

Sieber said she believes there should be “no carve outs” or exceptions when it comes to AI legislation.

“There are carve outs given for law enforcement (for) national security, so you may have regulations on the use of these technologies, and particularly making sure that there aren’t disproportionate discriminatory effects. Law enforcement is given a pass.”

‘AT WHAT COST TO OUR RIGHTS?’

The lack of legislative guardrails hasn’t slowed the race for more AI in policing.

Innovative Solutions Canada, a federal government program to boost innovation, says the RCMP has successfully found bidders to develop an AI system for the legal decryption of phones in criminal investigations.

In its 2021 pitch, the RCMP said “it is public knowledge that police can obtain judicial authority to search a device, but they cannot compel an individual to provide a password.”

“The RCMP is looking to develop an AI system that can ingest material seized during an investigation and generate case-specific word lists which will be utilized in an attempt to decrypt the seized data.”

Innovative Solutions Canada confirmed in a statement that two companies — Novacene AI Corp. and Arise Industries Inc. — have since been tasked with developing a prototype.

Benjamin Perrin is a law professor at the University of British Columbia who has launched an initiative looking at AI and criminal justice.

He and students have compiled a case book to help legal professionals and the public better understand the implications — and like Sieber and Bussières McNicoll, he urges caution.

Perrin said that while there could be positive uses, such as the RCMP’s AI filtering of child sexual abuse material, oversight of the technology was needed for each facet of the criminal justice system.

“It is a massive industry that’s developing and pushing AI out in all sectors of society and criminal justice is among the highest-risk sector, and yet we don’t have any laws that regulate it specifically and impose any kind of prohibitions on how it can be used,” he said.

“Can (AI) make us safer? Yes, it can, but at what cost to our rights and freedoms? Just because something is very effective does not mean that we should do it.”

This report by The Canadian Press was first published March 28, 2025.

Brieanna Charlebois, The Canadian Press