No other app has attracted as much attention in recent weeks as the coronavirus app, described as a powerful tool to curb the spread of COVID-19 so we can return to normal life. Not every country is ready for it yet. After a trial run by the Dutch government, it became clear that essential elements were missing: a clear set of problems to be solved and a thorough explanation of why technology should solve them, all while going through a process of checks and balances.
After a tender squeezed into four days and public demos by the shortlisted contenders over a single weekend, the Netherlands decided to press the reset button in the face of strong criticism from technical, privacy and security experts and from the privacy authority.
The cost of speed to privacy
Not only in the Netherlands but all over the world, governments, tech giants, scientists, healthcare organizations, privacy and security experts, the European Commission and regulators stumble in their efforts to launch a coronavirus app and unlock society as soon as possible. The palpable sense of urgency and the desire to act quickly may work for urgent lockdowns to prevent new outbreaks; carrying that mindset into the development of new technology, however, means entering dangerous territory.
Can we even begin to imagine the cost of ignoring the democratic process and its historical roots, of failing to address fundamental human rights, or of harmful side effects that could override Europe’s strong commitment to privacy?
So far, most European countries have indicated they want to protect public healthcare by monitoring the spread of the coronavirus via an app, but details are essential for citizens to visualize the next steps in this journey and to see where the boundaries lie in return for their trust in technology.
The GDPR and national health-care legislation can draw these lines in the corona crisis.
The basis is trust
Trust and confidence are key triggers for the fast and effective adoption of a coronavirus app.
To achieve this, we need answers to important questions first:
- Can we still enter public spaces and travel to other countries without using the app at all?
- Should we immediately call in sick and remain absent from work after receiving a tracing alert?
- Is there enough capacity to test people who show mild symptoms?
- How do we prevent the app from turning into a mass surveillance tool?
Before moving to the next phase of development – and well in advance of downloading the app onto our phones and welcoming it into our lives – we first need to know:
- How are social benefits – such as public health protection – designed into the app functionality?
- Which technology and embedded controls underpin this vision while protecting constitutional principles such as privacy?
‘We need an app for that’
The coronavirus app could be a tool to achieve meaningful purposes and deliver public benefits where humans lack the knowledge or capacity to do so themselves. Looking at the different coronavirus apps around the globe, we can see they display a range of features:
- Self-diagnosis and symptom reporting by citizens with follow-up from the healthcare system (further diagnosis, treatment advice, self-isolation).
- Contact tracing between individuals who have tested positive and those who are potentially infected.
- Monitoring the effectiveness of social distancing and quarantine measures.
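The contact-tracing feature is usually imagined as phones exchanging short-lived random tokens over Bluetooth, with exposure matching done on the device rather than on a central server. A minimal sketch of that decentralized idea follows; the `Phone` class and the `encounter` and `exposure_check` helpers are illustrative names invented here, not the API of any real app or protocol:

```python
import secrets


class Phone:
    """Toy model of a phone in a decentralized contact-tracing scheme.

    Each phone broadcasts short-lived random tokens and remembers the
    tokens it has heard nearby. No identities or locations are exchanged.
    """

    def __init__(self):
        self.own_tokens = []       # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens received from nearby phones

    def new_token(self):
        # A fresh random token per broadcast interval, unlinkable to the user.
        token = secrets.token_hex(16)
        self.own_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)


def encounter(a, b):
    """Two phones near each other exchange their current tokens."""
    a.hear(b.new_token())
    b.hear(a.new_token())


def exposure_check(phone, published_tokens):
    """When an infected user uploads their own tokens, every other phone
    checks locally whether it has heard any of them; the match never
    leaves the device."""
    return bool(phone.heard_tokens & set(published_tokens))
```

For example, if Alice and Bob have an `encounter` and Bob later tests positive and publishes `bob.own_tokens`, `exposure_check(alice, bob.own_tokens)` returns `True` on Alice's phone, while a phone that never met Bob gets `False`. The privacy-relevant design choice is that the server only ever sees random tokens from confirmed cases, never a social graph.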
The most intense debates focus on contact-tracing functionality, as it can easily slide into ‘big brother is watching you’ scenarios. In such setups, authorities could extensively track and profile citizens, and even penalize them for violating quarantine rules, with the protection of society invoked as justification.
Asian countries have already demonstrated extensive governmental interference in people’s daily lives. Examples include measuring the body temperature of every passenger at stations, checking someone’s health score before allowing entry to public buildings, and publishing warnings online to inform the population of the personal activities of individual COVID-19 patients during their incubation period. These activities included subway rides, visits to veterinary clinics and restaurants, and even cinema seat numbers.
Privacy is not a currency
For me, it is obvious that Europe cannot simply copy this approach of technology-driven control of the coronavirus. Instead, it must formulate its own plan, adjusted to its own legal standards.
There is a strong public tendency at the moment towards exchanging privacy for more freedom, economic values and fewer confinement measures. But privacy, in the sense of protecting data, is not a currency we can simply exchange for the reopening of society: it is a fundamental human right.
This does not necessarily mean that someone’s individual data should remain private and hidden at all times. However, using personal data to fight the spread of the coronavirus should not lead to outcomes that violate other constitutional rights or harm citizens through discrimination or bias. There is no conflict between privacy and the protection of public health; both are essential ingredients.