Extremely low rates of crime and casualty, increased traffic capacity, social and environmental sustainability, garbage-free streets, excellent healthcare, and optimized efficiency across the board: the benefits of smart city tech are countless. But everything has a price.
To put it briefly, the smart city concept is an Internet of Things (IoT) network orchestrated by artificial intelligence (AI). The intelligent devices may include driverless transport, drones, city cameras, traffic lights, and any hardware equipped with sensors that collect meaningful data.
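To make the picture a bit more concrete, here is a minimal sketch of how one such device might report a reading to the city's data platform. The schema, field names, and `publish` helper are illustrative assumptions, not a reference to any real smart city API.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SensorReading:
    """A single data point from a smart city device (illustrative schema)."""
    device_id: str     # unique ID of the camera, traffic light, etc.
    device_type: str   # e.g. "traffic_camera", "air_quality_sensor"
    metric: str        # what is being measured
    value: float
    recorded_at: str   # ISO 8601 timestamp, UTC

def publish(reading: SensorReading) -> str:
    """Serialize a reading for the city's message bus (stub: returns JSON)."""
    return json.dumps(asdict(reading))

reading = SensorReading(
    device_id="tl-042",
    device_type="traffic_light",
    metric="vehicles_per_minute",
    value=37.0,
    recorded_at=datetime.now(timezone.utc).isoformat(),
)
print(publish(reading))
```

In a real deployment, thousands of such readings per second would flow into a message bus for the AI orchestration layer to act on.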
Where there is big data and complex technology, ethical concerns are all but inescapable. Let's take a look at the four most pressing issues and see how we could resolve them.
Privacy threat due to mass surveillance
The first controversial thing that comes to mind when talking about smart cities is mass surveillance. On the one hand, the panoptic combination of ubiquitous cameras, face recognition, and personal data collection from sensors is the most effective technical means of preventing and solving crime.
On the other hand, human rights organizations are legitimately concerned about mass surveillance as a threat to civil liberties. A corrupt government could learn your every move, habit, medical condition, and other private details. Needless to say, this may put your life and freedom in danger.
Some government systems are simply too corrupt to guarantee decent protection of their citizens' personal data. Consequently, social engineering (separated from targeted advertising by only a fine line) and blackmail can become major problems.
Solution
As a developer of a smart environment, you should seek to mitigate the threats of mass surveillance in alignment with the following three concepts:
- Awareness. Establish data transparency policies for the residents. When people know exactly which parts of their lives are being exposed to the authorities and how they benefit from this, they might have more confidence in the system.
- Inclusion. Include citizens in the decision-making process. This will let them define which details they share with the supervising bodies (see the sketch after this list).
- Balance. Make sure to achieve an acceptable balance between security and privacy. Almost ironically, the collection and storing of personal information for higher security poses further security risks.
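As a rough sketch of what the awareness and inclusion principles could look like in practice, the snippet below models a per-resident consent record that the platform consults before storing any reading. The `ConsentRecord` model and the data categories are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Data categories a resident has agreed to share (hypothetical model)."""
    citizen_id: str
    allowed: set = field(default_factory=set)  # e.g. {"traffic", "energy_usage"}

def ingest(category: str, consent: ConsentRecord) -> bool:
    """Store a reading only if the resident opted in to that data category."""
    if category not in consent.allowed:
        return False  # drop or anonymize instead of storing
    # ... persist the reading here ...
    return True

consent = ConsentRecord(citizen_id="c-123", allowed={"traffic", "energy_usage"})
print(ingest("traffic", consent))     # True: resident opted in
print(ingest("face_image", consent))  # False: never stored without consent
```

Making the consent check a hard gate at the ingestion layer, rather than a filter applied afterwards, also helps with the balance principle: data that is never stored cannot leak.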
Vague data ownership and oversight rules
When you interact with automated systems that process your data multiple times a day, the notice-and-consent model is no longer viable.
How often do you read terms of service? Every website you land on has a long wall of legal text elaborating the rules you automatically accept by continuing to use the site. Smart city hardware is no different in that regard, except that it handles far more sensitive pieces of your private data than cookies do.
In addition, rules that can have a crucial effect on your life are embedded in the source code of public IoT systems. These rules keep evolving, which makes it difficult to ensure people are aware every time they change.
Solution
An efficient solution for the inherent ambiguity of public infrastructure automation is yet to be found. As technology evolves, you can expect robust digital tools to emerge in this area.
There are also policies that could mitigate this problem. As a policymaker, you can set longer implementation periods that allow time to notify everyone of upcoming sensitive changes and analyze the public's response before the changes take effect.
One thing is for sure: there must be an efficient awareness network in place.
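One possible way to encode such an implementation period is sketched below, under the assumption of a mandated minimum notice window: every rule change carries an announcement date and an effective date, and the system refuses to enforce a rule whose notice period was too short. All names here are illustrative.

```python
from dataclasses import dataclass
from datetime import date, timedelta

MINIMUM_NOTICE = timedelta(days=90)  # hypothetical mandated notice period

@dataclass
class PolicyChange:
    """A versioned rule change for public IoT infrastructure (illustrative)."""
    rule_id: str
    description: str
    announced_on: date
    effective_on: date

    def is_enforceable(self, today: date) -> bool:
        """A rule may be enforced only after a sufficient notice period."""
        return (
            today >= self.effective_on
            and self.effective_on - self.announced_on >= MINIMUM_NOTICE
        )

change = PolicyChange(
    rule_id="speed-cam-v2",
    description="Cameras also log plates of non-speeding vehicles",
    announced_on=date(2024, 1, 1),
    effective_on=date(2024, 2, 1),  # only 31 days of notice
)
print(change.is_enforceable(date(2024, 3, 1)))  # False: notice window too short
```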
Unequal citizen inclusion
The concept of a smart city is technocratic by nature. This inherently gives tech-savvy people an advantage over those whose access to the benefits of intelligent infrastructure is more limited.
If a group of citizens is left out of the mix when it comes to reaping the gains of a smart city, you are unlikely to include them in the decision-making process for data policies either. This can create a dangerous gap between different groups in society and lay the groundwork for new forms of digital segregation.
Solution
In its Everyday Ethics for Artificial Intelligence guide, IBM treats explainability as one of the five areas of ethical focus. Here is a piece of advice from the respective section of the document:
“Allow for questions. A user should be able to ask why an AI is doing what it’s doing on an ongoing basis. This should be clear and upfront in the user interface at all times.”
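A minimal sketch of that advice in practice might log every automated decision together with the factors behind it, so that a resident's "Why?" question can always be answered. The decision log below is a hypothetical illustration, not part of IBM's guide.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """An automated decision plus the factors behind it (illustrative)."""
    decision_id: str
    outcome: str
    factors: list  # human-readable reasons, recorded at decision time

DECISION_LOG = {}

def record(decision: Decision) -> None:
    DECISION_LOG[decision.decision_id] = decision

def explain(decision_id: str) -> str:
    """Answer a citizen's 'Why is the AI doing this?' question."""
    d = DECISION_LOG.get(decision_id)
    if d is None:
        return "No decision found with that ID."
    return f"Outcome: {d.outcome}. Based on: " + "; ".join(d.factors)

record(Decision(
    decision_id="route-7781",
    outcome="Bus line 12 rerouted",
    factors=["road sensor reports flooding", "average delay exceeded 10 min"],
))
print(explain("route-7781"))
```

The key design choice is that the factors are captured at decision time, so the explanation is always available through the user interface, not reconstructed after the fact.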
In a nutshell, there are three major practices that smart city developers should implement to resolve this issue:
- Democratization. Simplify the citizen participation process as much as needed to ensure equal inclusion.
- Feedback. Gather and process interaction feedback from all user groups and make adjustments accordingly.
- Proactive education. Make smart systems an integral part of the universal educational system and provide access to simple and clear educational materials (games, podcasts, videos, documents, etc.).
Predictive policing bias
Predictive policing is a powerful crime prevention mechanism used alongside mass surveillance. And, just like the latter, this practice has its dark side: bias.
In 2016, a coalition of US civil rights organizations picked predictive policing apart in a joint statement describing the technology as “biased against communities of color”. They also noted that such systems rely on records of how law enforcement responded to the reports they received rather than on original records of all registered cases.
When you train AI on action patterns of police officers, you run the risk of introducing human bias in the algorithms. This defies the very idea of automated, accurate decision-making drawing upon the impartiality of big data.
Solution
The main solution to this challenge lies in the challenge itself: make sure that the data you use to build an AI (and this applies to every AI use case, not just predictive policing) is free from bias. A demographically diverse team of developers is one way to achieve that.
Even so, there is still a risk of human error creeping into the data set and skewing the results. Therefore, an AI-powered predictive policing system should always keep its focus on prevention rather than forceful enforcement.
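As an illustration of what a pre-deployment bias check could look like, the sketch below compares the rate of "high risk" flags across demographic groups and fails the audit when the rates diverge beyond a threshold. The parity threshold and toy data are assumptions for demonstration, not an established auditing standard.

```python
from collections import defaultdict

def flag_rate_by_group(predictions):
    """predictions: list of (group, flagged) pairs; returns flag rate per group."""
    totals, flags = defaultdict(int), defaultdict(int)
    for group, flagged in predictions:
        totals[group] += 1
        flags[group] += int(flagged)
    return {g: flags[g] / totals[g] for g in totals}

def parity_audit(predictions, max_gap=0.1):
    """Fail the audit if flag rates across groups differ by more than max_gap."""
    rates = flag_rate_by_group(predictions)
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, rates

# Toy data: (demographic group, model flagged the area as "high risk")
sample = [("A", True), ("A", False), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", True), ("B", False)]
passed, rates = parity_audit(sample)
print(rates)   # {'A': 0.25, 'B': 0.75}
print(passed)  # False: the gap suggests the training data needs review
```

A failed audit is a signal to go back to the data, not a verdict on its own; this is exactly the kind of check that the transparency and independent assessment measures below are meant to enforce.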
Here are the measures that the above-mentioned civil rights organizations suggest:
- Transparency
- Public debate, audit, and monitoring
- Independent expert assessment
- Social services’ participation
Wrap-up
Smart city technology is here to stay, and there is no use fighting it. What you need to do now, as a policymaker or engineer, is earn the trust of all sections of society by addressing their major concerns.
There are three main areas you should focus on:
- Citizen awareness
- Equal inclusion of diverse demographics in the decision-making process
- Transparency of the intelligent public infrastructure
If you are looking for stellar expertise in IoT, AI and overall automation, you have come to the right place. Intetics will be delighted to help you build an environment that is smart and ethical — just drop us a line.