Tackling Ethical and Legal Challenges in AI for Law Enforcement: The PRESERVE Approach
By the University of Bari (UNIBA)
Over the first six months of the PRESERVE project, UNIBA – in close collaboration with PARADIGM, the leader of WP2 – has been working to identify and address the key ethical and legal challenges involved in developing AI solutions to support law enforcement activities. Rather than viewing these issues as obstacles, the project treats them as critical challenges that guide the responsible design and deployment of AI technologies, ensuring alignment with European and international standards.
From Critical Issues to Responsible Innovation
The analysis conducted by UNIBA forms part of a broader effort to ensure that the AI solutions developed within the PRESERVE project are robust, trustworthy, and fully compliant with key regulations, including the GDPR, the EU AI Act, and the EU Charter of Fundamental Rights. By embedding an “ethics-by-design” approach from the beginning, the project seeks to set a benchmark for responsible AI deployment in law enforcement contexts.
Below are some of the key ethical and legal challenges identified, along with the ways in which PRESERVE is proactively addressing them:
1. Data Crawling: Balancing intelligence and privacy
The project explores the potential of data crawling—automated techniques for collecting information from open sources such as forums, social media, and less accessible areas of the internet. These techniques may offer support to investigative activities but also raise concerns about privacy, consent, and the distinction between open-source intelligence and surveillance.
Challenge:
How can data collection remain legitimate, proportionate, and respectful of individuals’ rights—especially when conducted in semi-private digital spaces or through the use of simulated identities?
Approach:
The project addresses these concerns by establishing clear protocols that distinguish between different investigative contexts, ensuring that sensitive data is collected only when there is a clear legal basis and genuine necessity. Key strategies include data minimisation, early-stage anonymisation, and the architectural separation of the data collection and deployment phases. Additionally, continuous ethical oversight ensures transparency, accountability, and compliance with legal and ethical standards throughout the entire lifecycle of data use.
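The combination of data minimisation and early-stage anonymisation described above can be sketched in a few lines of code. This is a hypothetical illustration only, not the project's actual pipeline: the field names, the salted-hash pseudonymisation scheme, and the set of "required" fields are all assumptions made for the example.

```python
import hashlib

# Assumed minimal field set for the illustration; in practice this
# would be defined per investigative context and legal basis.
REQUIRED_FIELDS = {"text", "timestamp", "source"}

def pseudonymise(handle: str, salt: str) -> str:
    """One-way salted hash; the raw handle is never stored downstream."""
    return hashlib.sha256((salt + handle).encode()).hexdigest()[:16]

def minimise_record(record: dict, salt: str) -> dict:
    """Keep only the fields genuinely needed and replace the author handle
    with a pseudonym at collection time (early-stage anonymisation)."""
    minimised = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    if "author" in record:
        minimised["author_pseudonym"] = pseudonymise(record["author"], salt)
    return minimised

# Example crawled record (entirely fictitious):
raw = {
    "author": "user123",
    "text": "example post",
    "timestamp": "2024-05-01T12:00:00Z",
    "source": "forum",
    "ip_address": "203.0.113.7",    # dropped: not necessary for the task
    "profile_photo_url": "http://example.invalid/p.jpg",  # dropped likewise
}
clean = minimise_record(raw, salt="per-deployment-secret")
```

Separating this step from later analysis stages, as the architectural separation mentioned above suggests, means downstream components only ever see the minimised, pseudonymised records.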
2. Avoiding ethnic profiling in language-based AI
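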
AI-based analysis often relies on linguistic cues, which carries the risk of conflating language use with assumptions about users’ backgrounds or identities. For instance, labelling a chat group by its members’ inferred origin or cultural background, based solely on the language used, can unintentionally reinforce stereotypes or lead to unfair profiling.
Challenge:
How to ensure that AI tools do not propagate bias or contribute to discriminatory outcomes?
Approach:
The project implements a strict policy of using neutral, contextually relevant descriptors (e.g., “French-speaking users” or “users communicating in a particular language”) and provides training to all partners on bias-sensitive language. Systematic reviews of documentation and use cases help ensure that non-discrimination, fairness, and respect for legal and ethical standards remain central throughout development and deployment.
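One way such a descriptor policy could be operationalised is a lookup that only ever produces language-based labels, never ethnic or national ones. The mapping and function below are illustrative assumptions, not the project's actual vocabulary; language codes follow ISO 639-1.

```python
# Illustrative mapping from ISO 639-1 codes to language names.
LANGUAGE_NAMES = {"fr": "French", "ar": "Arabic", "es": "Spanish"}

def neutral_descriptor(language_code: str) -> str:
    """Return a neutral, language-based label such as
    'French-speaking users'; never an ethnic or national label."""
    name = LANGUAGE_NAMES.get(language_code)
    if name is None:
        return "users communicating in an unidentified language"
    return f"{name}-speaking users"
```

Keeping the permitted labels in one reviewable table also makes the systematic documentation reviews mentioned above easier to carry out.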
3. Adapting to dynamic criminal language
Criminal groups constantly adapt their language to evade detection. AI models must therefore keep pace, updating keyword lists and detection methods accordingly. However, an over-reliance on keyword detection carries risks of both false positives and false negatives, potentially infringing on freedom of expression or missing relevant content.
Challenge:
How to balance effective detection with respect for rights and contextual accuracy?
Approach:
The project promotes continuous monitoring and updating of linguistic models, drawing on input from both technical experts and law enforcement practitioners. Human oversight remains central: AI-generated outputs are always considered within a broader investigatory context.
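The human-in-the-loop principle described above can be made concrete in code: keyword matches produce advisory flags routed to human review, and the keyword set is an updatable input, so linguistic drift is handled by revising the list rather than the software. This is a hypothetical sketch, not the project's actual pipeline; all names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    """An advisory flag for a human analyst; it never triggers action."""
    message_id: str
    matched_terms: list
    status: str = "pending_human_review"

def flag_for_review(message_id: str, text: str, keywords: set) -> Optional[Flag]:
    """Return a review flag if any keyword appears in the text, else None.
    The keyword set is supplied at call time so it can be updated as
    criminal language evolves, without changing the code."""
    matched = sorted(set(text.lower().split()) & keywords)
    return Flag(message_id, matched) if matched else None
```

Recording the matched terms alongside the flag supports the contextual, human-led assessment the project requires: the analyst sees why a message was surfaced, not just that it was.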
4. Law enforcement data use: Privacy and security
The use of law enforcement databases for training AI models is a promising but sensitive area. While such data can improve the accuracy of AI tools, it also raises questions about privacy, data minimisation, and the risks associated with handling sensitive or personally identifiable information.
Challenge:
How to ensure privacy and security while enabling the legitimate use of law enforcement data?
Approach:
PRESERVE limits the use of such data to the training phase, with robust safeguards in place, including anonymisation and strict separation between development and deployment. The possibility of adopting a “counterfactual approach” to AI modelling – where training is done on closed cases to reduce risk – is also under exploration. Federated learning methods further ensure that data remains protected and decentralised.
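Federated learning, in its common federated-averaging formulation, keeps raw data at each site and exchanges only model parameters. The toy sketch below illustrates the idea with a one-parameter model (y = w·x) and two fictitious sites; it is a minimal assumption-laden example, not PRESERVE's actual training setup.

```python
# Minimal federated-averaging sketch: each site runs a local gradient
# step on its own data, then only the resulting weights are shared
# and averaged. Raw case data never leaves its site.

def local_update(weight, local_data, lr=0.1):
    """One local pass of gradient descent on squared error for y = w*x."""
    w = weight
    for x, y in local_data:
        w -= lr * 2 * (w * x - y) * x
    return w

def federated_average(site_weights, site_sizes):
    """Aggregate per-site weights, weighted by local dataset size."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two sites hold data consistent with w ≈ 2; the data is never pooled.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(1.0, 2.1), (3.0, 6.0)]
w = 0.0
for _ in range(50):  # communication rounds: share weights, not data
    w_a = local_update(w, site_a)
    w_b = local_update(w, site_b)
    w = federated_average([w_a, w_b], [len(site_a), len(site_b)])
```

Because only the scalars `w_a` and `w_b` cross site boundaries, the sensitive training records stay decentralised, which is the property the approach above relies on.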
5. Managing expectations: Aligning AI capabilities with operational needs
Law enforcement partners may sometimes hold overly optimistic expectations about the capabilities of AI, such as real-time alerts or autonomous profiling. While these features are attractive, it is crucial to ensure that the technology’s limitations are understood and that human oversight is never abandoned.
Challenge:
How to ensure responsible integration of AI and avoid over-reliance on automated tools?
Approach:
PRESERVE has instituted a progressive, dialogue-based process for defining system requirements, with ethical and legal partners supporting this effort. Regular interactions between technical partners and end-users help align expectations and clarify what AI can – and cannot – do. Transparency, explainability, and human-in-the-loop principles are central throughout.
A collaborative, evolving process
Crucially, these challenges are not treated as isolated problems but as part of a continuous, collaborative process. UNIBA, under the guidance of the project’s independent Ethics Advisor, Prof. Benedetta Giovanola, is committed to monitoring emerging issues and producing regular updates and recommendations as the project evolves.
This “living” approach ensures that PRESERVE remains at the forefront of ethical and legal best practices in AI for law enforcement. By treating challenges as opportunities for reflection and improvement, the project aims not only to meet regulatory requirements but also to set a positive example for responsible innovation in Europe and beyond.
For more information or to discuss these topics further, the project welcomes ongoing dialogue with all partners and stakeholders.
