What ethics have to do with the rejection of the e-ID Act

Markus Christen explains why ethical questions play a role with regard to both the e-ID and hacker attacks.

Manage everything with a single login – from filing a tax return to booking a trip. That’s what the electronic ID (e-ID) is supposed to let you do. Now the e-ID has been rejected by Swiss voters. Can you understand why that happened?

The problem is that the very notion of wanting to handle all your digital tasks in the same way is flawed. When people share their data, they always do so in a very specific context. If you disclose sensitive health data when taking part in a clinical trial because you hope to help, for example, you don't expect that data to be used in other contexts, such as by an insurance company. We call this contextual integrity. The perception that personal privacy has been infringed arises when the data is used in a different context.

How does this affect the e-ID?

When I use the e-ID, I am interacting as a citizen with the state. If my e-ID is issued by a private company that also sells me my mobile phone contract, then the “citizen-state” context is broken. This prompts the belief that my personal privacy has been infringed, because the mobile phone provider knows how much I pay in taxes. Even if the e-ID is designed in such a way that access to this knowledge is technically impossible, there is still a symbolic perception of an infringement of personal privacy, which is no doubt why a lot of people voted against it.

How can this problem be resolved in future?

I think that ultimately, the e-ID will be issued by the state. This allows the “citizen-state” context to be maintained.

Your NRP 77 project addresses ethical questions in the area of cybersecurity. Can you give me examples of these questions?

In situations where cyberattacks need to be repelled, decisions frequently have to be made under extreme time pressure. These situations can produce ethical dilemmas. For example, an incident response team may only identify such an attack after an elaborate analysis. The question then arises as to how to deal with the potential victim of the attack, with whom it has not yet been possible to build a basis of trust, or who may even have a connection to the attacker. If the team keeps the victim informed, it jeopardises the success of any defensive measures, because the attacker is warned and can change tactics. If the team opts not to share information, it runs the risk that another organisation or country suffers damage that could have been avoided.

What are your research goals in terms of addressing these problems?

We want to draw up general guidelines on tackling ethical problems in the cybersecurity domain, such as the scenario just outlined. That is one part of our project. The second part consists of making recommendations on closing any gaps in the law with respect to cybersecurity.

What kind of gaps in the legislation might that be?

Take the example of a private cybersecurity service provider tasked with protecting critical infrastructure – say, a hospital and a power station. If these infrastructures fall prey to a cyberattack, the provider will repel it. But the provider’s resources are limited, and its experts cannot protect both infrastructures at the same time. So it’s either the hospital first or the power station. What should the provider do? There are no regulatory requirements for such cases. We aim to find out where these gaps in the law are and make recommendations on how to close them.