By Balazs Kiss

Cities around the world are shifting towards a new, data-driven model of smart city governance. Empowered by AI technologies, municipal leaders can tackle urban challenges more easily and deliver major improvements in essential public services. But this shift inherently raises questions of data privacy.

Real-time monitoring and data analysis allow smart cities to respond to the actual, dynamically changing needs of their citizens. Improvements in public transport, safety and health are powered by a vast network of interconnected devices such as sensors, GPS trackers and cameras. These technologies have already delivered measurable gains: Alibaba’s City Brain, for example, cut travel time on a 22-kilometre highway in Hangzhou by 4.6 minutes in only two years.


Hackers are prepared to exploit the privacy risk

From a safety and security perspective, the interconnectedness of smart cities poses an evident threat. Hackers can target the city community directly: for example, by causing smart traffic control systems to fail and blocking the flow of transport, or by hijacking unmanned public transport vehicles and endangering the lives of passengers on board.

Hackers may also exploit privacy loopholes in the system. Even if they are unable to obtain passwords and credit card credentials, attackers can still gain valuable information on citizen profiles or movement patterns. 

Face recognition remains at the forefront of the privacy debate

Face recognition is a biometric technology that uses pattern matching to identify individuals from their facial features, most commonly to enhance public safety. It can easily be used to construct personal profiles of citizens: consumer and parking habits, criminal records and so on.
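To make the mechanism concrete, here is a minimal sketch of the pattern-matching step, assuming a separate model has already converted face images into numeric embeddings. The `identify` function, the 128-dimensional vectors and the threshold are illustrative assumptions, not any particular vendor’s implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (close to 1.0 means 'same direction')."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the enrolled identity whose embedding best matches the probe,
    or None if no match clears the threshold."""
    best_id, best_score = None, threshold
    for person_id, reference in gallery.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical 128-dimensional embeddings produced upstream by a face-encoding model.
rng = np.random.default_rng(0)
gallery = {"citizen_042": rng.normal(size=128), "citizen_107": rng.normal(size=128)}
probe = gallery["citizen_042"] + rng.normal(scale=0.05, size=128)  # noisy new capture
print(identify(probe, gallery))  # -> "citizen_042"
```

Once a camera feed yields such matches, linking the matched identity to parking tickets, purchase records or criminal databases is a simple database join away.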

China is often cited as the ‘worst case scenario’, with its massive network of cameras used to monitor and identify minority groups (especially Uighurs and Tibetans) and political dissidents. The country has also introduced social credit scores designed to ensure the loyalty of its citizens.

It might be tempting to say that the West is still far from such pervasive use of AI. Yet even in the US, AI is used to identify protesters, immigrants or even journalists. In the UK, too, a retail chain uses the technology to create biometric profiles of every visitor in order to prevent ‘violent attacks’, flagging individuals with a documented record of offending.

Threat from the inside: risks of discrimination

AI systems, including face recognition, are prone to biases and errors, particularly towards minority groups and people of colour. Through the training data, human biases are easily passed on to the system that handles and sorts the images. And even when no such bias is present, predicting how likely an individual is to, say, offend or reoffend based on profiling by home address carries a serious risk of discrimination.
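A toy illustration of that last point: the data, base rates and variable names below are entirely invented, but they show how unequal enforcement in ‘historical’ records can make one postcode look far riskier than another even when underlying behaviour is identical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "historical" data: postcode A is policed more heavily, so its residents
# appear in arrest records more often despite identical underlying behaviour.
n = 10_000
postcode = rng.choice(["A", "B"], size=n)
truly_offended = rng.random(n) < 0.05                # same base rate everywhere
recorded = truly_offended & (rng.random(n) < np.where(postcode == "A", 0.9, 0.3))

# A naive "risk score" learned from recorded outcomes alone
risk_by_postcode = {p: recorded[postcode == p].mean() for p in ("A", "B")}
print(risk_by_postcode)  # postcode A appears roughly three times "riskier"
```

A model trained on such records would penalise residents of postcode A simply for where they live, even though the sensitive attribute itself never appears as a feature.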

The main problem is the lack of explicit consent

What can we do? The main problem with such systems is that we do not necessarily have a choice about whether to take part: citizens usually have no way to opt in or opt out.

To have some basic safeguards, citizens should be able to identify the primary information that sensors collect, and any data provided to third parties (say, for advertising) should be carefully depersonalised. Restricting access through encryption is another option. However, both encryption and anonymisation are often reversible. A better approach could be ‘privacy by design’, which aims to limit data collection to only the information strictly required. For example, the Hayden AI platform monitors bus lane violations while retaining information only about the offending vehicle. It is specifically designed not to monitor pedestrians, and instead of automatically forwarding everything to a central processing server, it performs the analysis on the spot and saves only the data of the transgressing vehicle.
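This is not Hayden AI’s actual implementation, but a generic sketch of the data-minimisation idea it illustrates: analyse on the device, keep records only of violations, and pseudonymise identifiers before anything is retained. The `Detection` class, field names and salted hashing are assumptions for illustration; as noted above, pseudonymisation of this kind can still be reversible.

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class Detection:
    plate: str
    lane: str
    in_bus_lane_illegally: bool

def minimise(detections: list, salt: str) -> list:
    """Keep only violations, and store a salted hash of the plate rather than
    the plate itself, so the retained record describes the event, not the person."""
    kept = []
    for d in detections:
        if not d.in_bus_lane_illegally:
            continue  # non-violating traffic is discarded on the device
        kept.append({
            "plate_token": sha256((salt + d.plate).encode()).hexdigest(),
            "lane": d.lane,
        })
    return kept

frames = [Detection("ABC123", "bus-1", True), Detection("XYZ789", "bus-1", False)]
print(minimise(frames, salt="rotate-me-daily"))  # only the violating vehicle survives
```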

International standards are needed to better regulate the use of AI

As of today, there are no proper international guidelines or standards for AI technology, whether used in smart cities or elsewhere. Hopefully, much as the GDPR became an international standard-setter in its domain, the EU’s AI Act, currently in the making, could help propel things forward.

The AI Act’s first draft categorises uses of AI into four bands of risk. Facial recognition in public spaces, for example, is ‘high-risk’, one step down from ‘unacceptable’. Lately, seeking to prevent mass surveillance, many in the European Parliament and countries such as Germany have been campaigning for a full ban or moratorium on the use of facial recognition in public by both private companies and law enforcement. This would not be unprecedented: albeit on a much smaller scale, some US states and cities, such as Virginia and San Francisco, have already introduced restrictions. In 2020, several big tech companies made efforts to ‘stop the spread’ of the technology; Amazon, for instance, placed a year-long moratorium on police use of its facial recognition system.

A deal is probably still years away, but an EU-wide ban on face recognition and stricter, binding industry standards on the different uses of AI would surely serve citizens’ privacy in the smart city context. Such rules would probably stifle innovation somewhat, but they are needed to ensure that people are not merely passive subjects of the AI revolution.