Introduction: What the AI Act Brings
The European Union adopted the AI Act in 2024, the world’s first comprehensive legal framework for artificial intelligence. The law responds to the rapid development of AI and the need for clear rules on its safe and trustworthy use. For urban camera surveillance systems (MKDS), the regulation is a turning point: it changes how municipalities and cities may monitor public spaces, process data, and deploy modern analytical tools.
The goal of the AI Act is to balance technological capability with the protection of citizens’ fundamental rights. On one hand, AI brings enormous potential to improve the safety and efficiency of public spaces; on the other, the law insists on privacy protection, human oversight, and transparency.
Prohibited Practices in MKDS
The strictest part of the regulation concerns practices considered unacceptable: these technologies are either banned outright or allowed only in narrowly limited cases.
Biometric identification in real time – recognizing faces, body features, or gait in publicly accessible places is generally prohibited. Exceptions apply where the police search for specific missing persons, prevent an imminent terrorist attack, or investigate serious crimes. Even then, prior authorization by a judicial or independent administrative authority and strict time and place limits are required.
Untargeted face databases – building facial recognition databases by so-called scraping (automatic harvesting of facial images from the internet or CCTV footage) is prohibited, even for law enforcement purposes.
Manipulative AI – any systems that secretly influence citizens’ behavior, use deceptive techniques, or exploit vulnerabilities are not allowed.
Social scoring – evaluating or classifying individuals based on their social behavior or personal characteristics, for example by linking unrelated databases, in ways that lead to discrimination or unfair treatment is also prohibited.
These rules aim to prevent mass surveillance that could threaten democratic values and citizens’ trust.
High-Risk AI Systems
Besides outright bans, the AI Act defines a high-risk category: technologies that may significantly affect citizens’ rights but may be used under strict conditions.
Behavioral analysis and security monitoring – tracking the movement, gestures, or behavior of people. It is permitted if it is based on objective data, is not used for mass profiling, and serves only as support for human decision-making (e.g., detecting aggressive behavior in the metro or unattended luggage at airports).
Predictive policing – predicting crime based on historical data. While predicting risky locations may help plan patrols, mass profiling of individuals is prohibited. Only models based on objective and verifiable data are allowed.
Emotion recognition – using systems that infer emotional states from facial expressions or voice tone is banned in schools and workplaces. It may be permitted in safety or healthcare contexts but only as a supplement to human decision-making.
Critical infrastructure and emergency calls – AI used for traffic management, energy supply, or crisis call triage is automatically considered high risk.
Law enforcement authorities – AI use by police (e.g., polygraphs, reliability assessment of evidence, individual profiling) falls under high risk and is subject to strict supervision and special legal conditions.
All these cases require detailed risk management, data governance, technical documentation, cybersecurity, and human oversight.
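To make the human-oversight requirement more tangible, here is a minimal Python sketch of the pattern the Act points toward: the system may only propose an alert, and nothing happens until a named operator confirms it. The names (`Alert`, `review_queue`) and the confidence threshold are illustrative assumptions, not anything prescribed by the regulation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative threshold: detections below it are discarded, not queued.
CONFIDENCE_THRESHOLD = 0.8  # hypothetical value, set by the deployer

@dataclass
class Alert:
    """A detection the AI system proposes; it has no effect until reviewed."""
    event_type: str                  # e.g. "unattended_luggage"
    confidence: float                # model confidence in [0, 1]
    location: str                    # camera/zone identifier, not a person
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: str | None = None   # filled in only by a human operator
    confirmed: bool | None = None    # None means: pending human decision

review_queue: list[Alert] = []

def propose_alert(event_type: str, confidence: float, location: str) -> None:
    """The AI side may only suggest, never act."""
    if confidence >= CONFIDENCE_THRESHOLD:
        review_queue.append(Alert(event_type, confidence, location))

def human_review(alert: Alert, operator: str, confirmed: bool) -> None:
    """The human side: every decision is attributed to a named operator."""
    alert.reviewed_by = operator
    alert.confirmed = confirmed
    # Only an alert with confirmed == True may trigger any response.

propose_alert("unattended_luggage", 0.93, "metro-station-A/cam-02")
human_review(review_queue[0], operator="operator_17", confirmed=True)
```

The point of the split is that the model never takes an action itself; it only populates a queue that trained staff work through, which is what "support for human decision-making" amounts to in practice.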
Low-Risk Area
Many AI applications in MKDS fall into a less strict category if no personal identification or sensitive data processing occurs.
Anonymous traffic monitoring – traffic heatmaps, traffic light optimization based on traffic intensity.
Occupancy monitoring – tracking the number of people at squares, sports arenas, or concerts, with outputs being only numerical data or visualizations (a minimal counting sketch follows this section).
Counting people or vehicles – e.g., in public transport or parking management.
Urban planning – using anonymous statistical models for infrastructure development.
Here too, transparency toward citizens is mandatory under Article 50 of the AI Act. People must be informed that AI is used, and data must be anonymized in compliance with GDPR.
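To illustrate why aggregate-only outputs keep these applications in the lower-risk band, the following Python sketch counts peak occupancy per hour and returns nothing but numbers. The `count_people` detector is a hypothetical stand-in for whatever model a vendor supplies; no frames or identities are retained.

```python
from collections import Counter
from datetime import datetime
from typing import Any, Callable, Iterable, Tuple

def occupancy_per_hour(
    frames: Iterable[Tuple[Any, datetime]],
    count_people: Callable[[Any], int],
) -> Counter:
    """Aggregate peak occupancy per hour; only numbers leave this function.

    `count_people` is a hypothetical detector returning an integer per
    frame. Raw images are never stored, so the output contains no
    personal data.
    """
    peaks: Counter = Counter()
    for frame, ts in frames:
        hour = ts.strftime("%Y-%m-%d %H:00")
        peaks[hour] = max(peaks[hour], count_people(frame))
    return peaks

# Dummy detector standing in for a real model:
sample = [(object(), datetime(2025, 1, 1, 9, 15)),
          (object(), datetime(2025, 1, 1, 9, 45))]
print(occupancy_per_hour(sample, count_people=lambda frame: 12))
```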
Obligations of AI System Providers
The AI Act imposes several duties on manufacturers, suppliers, and operators of AI systems:
Implement risk management and data quality control,
Maintain technical documentation and logs (a minimal logging sketch follows this list),
Ensure accuracy, reliability, and cybersecurity,
Provide transparent information about system functioning and data used,
Respect intellectual property rights and comply with copyright rules.
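As a rough illustration of the documentation-and-logs duty, the sketch below writes one structured, auditable record per AI event. The field names and log destination are assumptions, chosen so that each record can later be matched to the system’s technical documentation.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mkds.ai_audit")

def log_ai_event(system_id: str, model_version: str,
                 input_ref: str, outcome: str) -> None:
    """Append one auditable record per AI detection or decision.

    `input_ref` should be a reference (camera ID plus timestamp),
    never the raw footage itself.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which deployed system produced this
        "model_version": model_version,  # ties the event to its documentation
        "input_ref": input_ref,
        "outcome": outcome,
    }
    logger.info(json.dumps(record))

log_ai_event("crossing-cam-07", "v2.3.1",
             "cam07/2025-01-01T09:15Z", "queue_length=14")
```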
Without fulfilling these conditions, high-risk systems may not be placed on the market. Exceptions are possible only in urgent public-safety cases, where temporary deployment is allowed, coupled with an obligation to request approval afterwards.
Obligations of Cities and Municipalities
Cities, as deployers of AI systems, are not exempt from responsibility. Their duties include:
Identifying and classifying used AI systems,
Preparing fundamental rights impact assessments,
Ensuring human oversight through trained staff,
Keeping operational records of systems and monitoring input data quality and performance (a monitoring sketch follows this list),
Informing employees and the public before system deployment,
Marking places where AI technologies are used and publishing their purposes and limits,
Setting up processes for citizen complaints and explaining AI decisions when outputs affect individuals.
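The record-keeping and monitoring duties can be sketched in a few lines: each deployed system carries an operational record, and a periodic check flags it for human review when live performance drifts from the provider’s declared baseline. Fields, values, and the tolerance margin are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class OperationalRecord:
    system_id: str
    purpose: str               # the published purpose of the deployment
    risk_class: str            # e.g. "high" or "low" after classification
    baseline_accuracy: float   # accuracy declared by the provider
    responsible_officer: str   # trained staff member providing oversight

def performance_check(record: OperationalRecord,
                      measured_accuracy: float,
                      tolerance: float = 0.05) -> bool:
    """Flag a system for review if live accuracy drifts below baseline.

    `tolerance` is an assumed margin; a real deployment would define it
    in the system's risk-management plan.
    """
    ok = measured_accuracy >= record.baseline_accuracy - tolerance
    if not ok:
        print(f"[REVIEW] {record.system_id}: accuracy {measured_accuracy:.2f} "
              f"below baseline {record.baseline_accuracy:.2f}")
    return ok

rec = OperationalRecord("square-count-01", "occupancy counting",
                        "low", 0.92, "J. Novak")
performance_check(rec, measured_accuracy=0.84)
```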
Transparency and Communication with the Public
A key element of the AI Act is open communication. Municipalities must not only inform citizens that AI-assisted cameras are in use but also explain the purpose and limitations of these systems. Transparency is crucial for building public trust and preventing fears of mass surveillance.
Recommended practice also includes publishing an overview of deployed systems on the municipal website, with information about the data processed and how it is used.
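One lightweight way to publish such an overview is a machine-readable entry per system, in the spirit of the Amsterdam and Helsinki registries mentioned below. The schema here is a hypothetical example, not an official format.

```python
import json

# Hypothetical registry entry; the schema is illustrative, not an official format.
registry_entry = {
    "system_name": "Traffic flow counter, Main Square",
    "purpose": "Anonymous counting of vehicles to optimize traffic lights",
    "risk_class": "low",
    "data_processed": "aggregate counts only; no images or plates retained",
    "transparency_basis": "AI Act Art. 50; GDPR-compliant anonymization",
    "contact": "smart-city office of the municipality",
    "limits": "no identification, no tracking of individuals",
}

print(json.dumps(registry_entry, indent=2))
```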
Long-term Oversight and Best Practices
The AI Act stresses the need for continuous risk assessment and the introduction of best practices. Cities have the option to participate in so-called regulatory sandboxes to test new technologies under regulator supervision. It is also recommended to use international standards (e.g., ISO/IEC 42001) or voluntary frameworks for trustworthy AI.
Examples from abroad show that responsible AI deployment is possible:
Amsterdam and Helsinki introduced public AI system registries providing citizens with detailed information on the technologies used.
Vienna tested predictive monitoring with human oversight and mechanisms against algorithmic bias, resulting in a 25% drop in crime in risky areas.
Barcelona installed an automatic vehicle detection system on buses that improved public transport flow without recording license plates or personal data.
Conclusion
The AI Act represents a historic step in regulating artificial intelligence. For urban camera systems, it means a fundamental change – strict limits on one hand and a framework for responsible and transparent technology use on the other.
For municipalities, it is both a challenge and an opportunity. Compliance with the new rules will require time, investment, and careful governance. But if technological innovation is combined with the protection of citizens’ rights, smart cities can become leaders in responsible AI use while strengthening public trust in digital transformation.