
Shaping the Future of AI in Policing: ALIGNER's Pragmatic Approach

Author

Laura Galante

Jul 11, 2023

ALIGNER aspires to rally European stakeholders concerned about AI's role in law enforcement. The project aims to create a unified front to identify strategies that not only strengthen law enforcement agencies through AI but also ensure public benefit. But how far into the future is it useful to look?


 

In a world where technological advancement is swift and relentless, the EU-funded security research project ALIGNER focuses on the integration and implications of Artificial Intelligence (AI) in law enforcement, working within a more immediate, shorter-term time frame.

 

Project Coordinator Daniel Lückerath is pragmatic: “The rapid developments in AI technologies and their increasing public availability, as well as permeation throughout many aspects of society – from your fridge to your smartphone – make reliable foresight very far ahead almost impossible”. Hence, ALIGNER bases its strategies on the imminent needs, challenges, and opportunities that law enforcement confronts, considering both the potential misuse of AI and its constructive use by police and law enforcement in societal contexts.

 

ALIGNER focuses on a not-too-distant future scenario in which AI is an integral part of daily life and plays a pivotal role in policing and law enforcement. This approach, enriched by input from advisory boards and research collaborations, has identified significant areas where criminal use of AI is likely to be prominent: disinformation and social manipulation, cybercrime against individuals and organisations, and AI's application in vehicles, robots, and drones.

 

ALIGNER has also identified sectors where AI could revolutionise policymaking and law enforcement practice. Promising applications include data-handling processes such as incident and crime reporting, digital forensics for obtaining digital evidence, improved incident reaction and response mechanisms, crime detection, and the use of AI in vehicles, robots, and drones.

Based on these identified sectors, ALIGNER works along four distinct "narratives", or topical scenarios, that intertwine different aspects of these categories and guide the related work in the project. “For each ‘narrative’ that ALIGNER works on, we identify suitable AI technologies,” Lückerath explains. “These are briefly described in so-called scenario cards that summarise the relevant information – what the technology is about, how effective it is, and how robust.” The narratives discussed so far revolve around disinformation and social manipulation, cybercrime against individuals using chatbots, and AI-enabled malware, with a fourth currently under discussion within the project team. Based on these topical scenarios, the project has developed assessment methods for the technical, organisational, ethical, and legal implications.
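To make the idea concrete, a scenario card as described above could be represented as a small plain-data record. The keys and example values below are illustrative assumptions, not the project's actual card format:

```python
# One possible plain-data representation of an ALIGNER "scenario card";
# keys and values are illustrative assumptions, not the real template.
scenario_card = {
    "narrative": "disinformation and social manipulation",
    "technology": "synthetic-image detection",
    "description": "Classifiers that flag AI-generated images and video frames.",
    "effectiveness": "medium",   # how well it performs on current deepfakes
    "robustness": "low",         # how easily it is evaded by new generators
}

def summarise(card: dict) -> str:
    """Render the one-line summary such a card is meant to provide."""
    return (f"{card['technology']} ({card['narrative']}): "
            f"effectiveness {card['effectiveness']}, robustness {card['robustness']}")
```

A flat record like this keeps each card short enough to compare technologies at a glance, which is the card's stated purpose.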

 

As an example, the first ‘narrative’, dealing with disinformation and social manipulation, assumes that criminals use AI for phishing attacks to gather personal data. Through these phishing attempts, they identify and attack high-value targets (‘tailored phishing’ or ‘spear phishing’). The goal of these attacks is to manipulate or coerce targets in order to gain unauthorised access to computer networks, e.g. those of election campaigns, large research companies, or industry organisations. Phishing attacks may involve online attempts to persuade or trick individuals into divulging passwords or access codes or, if the opportunity arises, using harvested data to subject them to blackmail or coercive threats. Besides targeted phishing attacks and data harvesting, criminals may disseminate selective misinformation and disinformation apparently emanating from official or well-informed sources. Such disinformation uses artificially generated videos, images, text, and sound, including deep fakes of public figures, generated by AI-fuelled ‘bots’. To counter these threats, law enforcement agencies also deploy AI: they use veracity assessment methods to detect disinformation, then employ deanonymisation techniques such as authorship attribution and image geolocation to identify where the disinformation originated. This is supported by techniques for detecting synthetic images and videos.
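Authorship attribution, one of the deanonymisation techniques mentioned above, can be illustrated with a classic stylometric approach: build character n-gram frequency profiles for each candidate author and attribute an unknown text to the closest profile. This is a minimal sketch of the general technique, not ALIGNER's actual tooling, and the sample texts are invented placeholders:

```python
# Minimal stylometric authorship-attribution sketch: character-trigram
# frequency profiles compared by cosine similarity (illustrative only).
from collections import Counter
from math import sqrt

def ngram_profile(text: str, n: int = 3) -> Counter:
    """Count overlapping character n-grams in lowercased text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def attribute(unknown: str, candidates: dict[str, str]) -> str:
    """Return the candidate author whose writing profile is closest to the unknown text."""
    target = ngram_profile(unknown)
    return max(candidates, key=lambda name: cosine(target, ngram_profile(candidates[name])))
```

Real forensic systems use far richer features (syntax, vocabulary, metadata) and much larger reference corpora, but the profile-and-compare structure is the same.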

 

In the second ‘narrative’, a crypto romance scam, a criminal contacts a victim via an online chat, grooms the victim into believing the scammer is a genuine ‘friend’, and subsequently extracts cryptocurrency from the victim. These scams might be supported by generative AI models like ChatGPT, DALL-E, or Midjourney, which can create fake profile pictures, voices, and videos, or automate text generation in multiple languages. In the future, the creation of profiles, the targeting of individuals, the generation of fake cryptocurrency company sites, and the grooming itself might become highly automated. To address these threats, law enforcement agencies themselves need to deploy AI-based models to detect generative content, to support automatic detection of scammer profiles and potential victims, to detect voice clones, and to detect cryptocurrency laundering.
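The simplest form of the scam-detection idea can be sketched as a rule-based scorer that accumulates weighted signals across a conversation and flags it for human review past a threshold. The keyword list, weights, and threshold below are all invented for illustration; production systems would use trained classifiers rather than fixed patterns:

```python
# Naive rule-based scam-signal scorer for chat messages (illustrative sketch;
# patterns and weights are made up, not a real detection model).
import re

SCAM_SIGNALS = {
    r"\bcrypto(currency)?\b": 2,              # investment bait
    r"\binvest(ment)?\b": 2,
    r"\bguaranteed (returns?|profits?)\b": 3,  # too-good-to-be-true promises
    r"\bwallet\b": 1,
    r"\bsend (me )?money\b": 3,                # direct extraction attempt
    r"\burgent(ly)?\b": 1,                     # pressure tactics
}

def scam_score(message: str) -> int:
    """Sum the heuristic weights of every scam signal found in a message."""
    text = message.lower()
    return sum(w for pattern, w in SCAM_SIGNALS.items() if re.search(pattern, text))

def flag_conversation(messages: list[str], threshold: int = 5) -> bool:
    """Flag a conversation for human review once cumulative signals pass a threshold."""
    return sum(scam_score(m) for m in messages) >= threshold
```

Scoring the whole conversation rather than single messages matters for romance scams, where the grooming builds up slowly and no individual message looks alarming on its own.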

 

ALIGNER collaborates with professionals from policing, academia, research, industry, and policymaking, including legal and ethics experts, organised in two advisory boards: one providing law enforcement expertise, the other gathering research, industry, and ethics authorities. “To receive a reliable assessment, we need many different experts from different European countries to ensure that we reflect a broad view on these emerging technologies and scenarios. This takes time, especially considering different languages and expertise,” Lückerath says.

 

While AI can be misused by criminals, it also greatly aids law enforcement in combating crime, for example by reducing errors, automating time-consuming tasks, identifying potentially suspicious behaviours, and even speeding up legal procedures by predicting possible outcomes based on past cases. However, care must be exercised to avoid AI creating biases and discrimination, as certain geographic areas or groups might be unfairly targeted, leading to a disproportionate increase in arrests.

 

This is why ALIGNER has developed the ALIGNER Fundamental Rights Impact Assessment (AFRIA), a tool that enables law enforcement authorities to further enhance their existing legal and ethical governance systems, helping them follow ethical guidelines and respect fundamental rights when using AI systems in their work. It consists of a fundamental rights impact assessment template and an AI System Governance template that help authorities identify, explain, and record possible measures to mitigate any negative impact an AI system may have on ethical principles. While there is no legal obligation in the EU to perform such assessments, AFRIA complements existing or potential legal and ethical governance systems, such as the forthcoming AI Act proposed by the European Commission in 2021. Depending on the results of the trilogue negotiations, Lückerath explains, “ALIGNER would like to see a practicable and sensible AI regulation that…enables law enforcement agencies to use AI in an ethical, legal, and socially acceptable way, and still allows us to make use of AI technologies for the betterment of society.”
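The record-keeping side of such an assessment can be pictured as a small structured entry per identified impact. The field names below are assumptions loosely inspired by the description above, not the actual AFRIA template:

```python
# Hypothetical record structure for one fundamental-rights impact entry;
# field names are illustrative assumptions, not the real AFRIA template.
from dataclasses import dataclass, field

@dataclass
class RightsImpactEntry:
    ai_system: str                  # the AI tool under assessment
    affected_right: str             # e.g. "privacy", "non-discrimination"
    impact_description: str         # how the system may affect this right
    severity: str                   # e.g. "low" / "medium" / "high"
    mitigations: list[str] = field(default_factory=list)  # recorded countermeasures

    def is_mitigated(self) -> bool:
        """An entry counts as addressed once at least one mitigation is recorded."""
        return bool(self.mitigations)
```

The point of a template like this is traceability: each potential impact is named, its severity stated, and the mitigation measures recorded alongside it, so an oversight body can audit the reasoning later.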

 

Lückerath envisions a future in which established national centres across Europe support law enforcement agencies with ethical, legal, and socially acceptable implementation and deployment of AI technologies, as well as oversight bodies that would use a harmonised framework to assess AI technologies before, during, and after their deployment. In this envisaged future, a harmonious blend of technology and ethics may very well redefine the contours of law enforcement, empowering agencies with the tools of AI while maintaining steadfast commitment to ethical and legal standards.

 
