Dr. Gus Hosein is executive director of Privacy International, a charity organization dedicated to campaigning for privacy rights internationally. He has worked as an external evaluator for the United Nations High Commissioner for Refugees and as an advisor for the UN Special Rapporteur on Terrorism and Human Rights. He has written extensively on technology and privacy, particularly through the lens of law and policy. He has also been a visiting scholar or visiting fellow at Oxford University, Columbia University, and the American Civil Liberties Union. Amidst growing public concern about the extent of government surveillance, Dr. Hosein spoke with the HIR about the status of privacy rights in today's society.

Currently, there is a growing worldwide fear of terrorism, and many governments are under enormous pressure to prevent terrorist attacks. This gives governments a justification to seize personal information and increase surveillance of their citizens' online activities, which raises the question of whether people should have to give up their privacy in exchange for national security. How do we, as a society, find a balance between these two needs?

There are two ways that my organization approaches this issue. First, the traditional route: we have long been balancing rights and government demands for power, and most often, our argument is framed in the form of constitutional rights. There are fights over constitutional values and ultimately, in the courts, over the Fourth Amendment in the United States, Article 8 of the European Convention on Human Rights, and similar clauses in constitutions across the world that concern protections of the home, personal space, private life, and family life. In that sense, we have never, apart from in times of incredible duress or illiberalism, fully abandoned the calculation that must always take place between people's rights and the ways that governments seek to interfere with those rights. As a result, I don't see any reason why that calculation should change, except when we get into my second answer, which is a little less academic and a little more forthright.

Specifically, the nature of the debate and the environment in which that debate is occurring are very different than they were just a few years ago. In the past, if the government wanted to intrude upon your privacy, it would either have to follow you around or ask people about you. Now, its capabilities have grown so dramatically that a government can spy on an entire nation at a moment in time. It can use algorithms to discern whether people are in a certain mood in order to identify all angry people and focus on their communications and activities. It can compromise a device, meaning it can turn on the microphone or camera in your laptop or your phone and monitor anything that's going on around those devices. These capabilities are unprecedented, and as a result, new rules may have to be devised. These won't just be rules pertaining to constitutional values and human rights, as we also need rules regulating what happens when governments, with the legitimate aim of national security, take actions that could actually increase insecurity and risks to safety.

We have the example of Apple vs. the FBI just a year ago, where the FBI was well within its rights arguing that for national security purposes, it needed access to a phone. But Apple, well within its rights, was saying that the FBI was seeking more than just access to this singular phone, and that it actually wanted potential access to any phone. This would require Apple to undo all of the safety and security that it has created for its device, and Apple said that was too high a price to pay, because the potential for abuse and misuse was too high: Apple's customers would become vulnerable, and there are people who might take advantage of that vulnerability if given the chance.

This is the calculation we have to make more and more: as governments argue for national security, they are increasingly arguing for a decrease in the security of our devices, our infrastructure, and our services. In particular, as our physical infrastructure takes on a more digital nature, governments will demand access to our data, which will eventually collide with our demand for the security of this infrastructure.

You mentioned that technology is currently advancing at a rapid pace and will continue to do so in the foreseeable future. However, laws can often take months to draft and enact. In light of this, how can the government keep up with technological advancement when drafting legislation to regulate it?

It's odd: when governments want legal powers to do surveillance, the law moves exactly as quickly as they want it to, but when they are asked to create laws to protect our rights, they often claim that the law is slow-moving and that any action will take some time. So, I actually don't buy the argument that law moves slowly. I think that if you got the brightest minds on the planet to sit down and ask themselves how they could create a regime of legal protection around the individual, they would all converge on a system that isn't too far from the one we have today or from the ideas that exist in many laws now. What we lack isn't the ability of the law to move quickly enough. It is the ability of the legislator and the executive to push for the law to move that quickly.

At this moment, what kinds of laws that specifically pertain to technology do you believe still need to be passed to ensure privacy in the digital age?

To answer that question, let me first establish the separation between surveillance and privacy. In the surveillance sphere, we need to stop allowing governments to define when they shall seek law and when they shall not. For instance, when the government wants a company to comply with new powers, it will eventually agree that law is required, because that company might have lawyers who will refuse to take action without a legal basis. So, the government will find the imperative to draft law.

However, there are many domains where governments simply take power for themselves and rely on secret legal opinions as to whether or not their actions adhere to existing laws and whether those actions are constitutional. A good example of this is: should the FBI be allowed to hack your computer? Now, at no point has the government asked Congress to legislate. In fact, every time a case pertaining to this question emerges in the courts, the government does everything it can to shut down a fair legal process around it. This is a power that the government would prefer to use in secret, without legal encumbrance.

Therefore, we need new laws that regulate how and when governments shall conduct surveillance in the modern era. This type of surveillance doesn't just consist of knocking on the door with a warrant; it doesn't even consist of kicking down the door with a warrant, which is the way most of our laws currently treat it. It is the capability of monitoring a whole nation at one moment in time. It is the capability of turning your own technology against you without you ever knowing that this occurred. That is an incredible power that needs regulation. We need to pass laws that very clearly describe the power that governments possess and how it shall be contained.

Now, the United States is fairly advanced in the legislative debate over surveillance. However, it is very backwards in the protection of privacy and of the rights of individuals. What the United States needs more than anything is to pass a law or set up a legal framework that would ensure that people have rights to the privacy of their data, whether it resides with government entities or with an industry. What we're finally seeing, particularly with the concerns around immigration surveillance in the United States, is recognition that the surveillance power of the US government does not begin or end with the data that it possesses. In reality, it extends to the data possessed by industries, to which the US government could gain access. Equally, these companies have to learn to abide by rules that ensure that the rights of the individual are maintained at all times. This regime of laws that are applied to protect the rights of the individual is called data protection law. These laws ensure that, no matter where your data resides or who is in control of it, that data is always yours, and you retain rights over it.

You mentioned that individuals' privacy can be violated without their knowledge, because they may not be notified when somebody is accessing their information. How can people fight for their privacy rights when they may not even be aware that their rights are being violated?

This is the hardest question in this domain at the moment. People still think that the privacy calculation and the surveillance calculation are somehow still done by the empowered individual who knowingly and willingly shares information with companies or is impelled to give information to government entities. The data that these companies and governments actually mine, the data that they process to get into your brain, to understand your behaviors, and ultimately to make decisions about you, decreasingly consists of the data you willingly and knowingly give over. Instead, it consists of data that is taken from you without your knowledge and consent or discerned about you without your involvement.

As a result, we have no insight into those processes and no way of controlling that information, which undermines the very premise of our participation in an open democracy or an open market. We always believed that we were empowered citizens and empowered consumers, when in reality, we are just the subjects of these massive systems that are taking our data without even asking us. The saddest thing is that there may have been a time when I could tell you to use a particular product or system that would have allowed you to assert some control. However, we are beyond even that point now, because the technologies that we use and that the people around us use, by their nature, accumulate vast amounts of information.

For instance, we recently worked on the case of the city of Naperville, which has mandated the deployment of smart meters to everybody in the city. Now, people have no knowledge of or control over the data that is being taken from them as a result of this law. This is only a sign of the future: the future will involve smart cities, in which cameras will be replaced by sensors that will monitor cars, people, and all transactions. In such a city, you will not have the ability to refuse to consent or to control the information that you provide. A city like this will just take all information it can gather and use it to make decisions about who should have access to which services and who should get priority when it comes to the use of roads or electricity. We will have no way of exerting control over this.

The only thing we can do right now is be outraged by this. We need to assert that these are not the systems that we want and that this is not the vision of technology that we imagine for our future. If you ask people what they want from technology, they will always say that they want life to be easier, but they want to be in control. We are getting neither.

What responsibility do privately owned companies have to ensure the protection of their customers' information? Do they have the right to refuse to provide this information to the government?

No, unfortunately, they don't. Under the current constitutional status of data that resides with a company, there isn't even a warrant process required. It's easier for the government to go to a company and demand all your health information than it is to go to you or your doctor. It's easier for the government to find out everything that you read online than it is to go into your home and see which newspapers you read. And so, there are absolutely no useful protections for people when it comes to data that resides with companies. The thinking around this was shaped in the 1970s, and it has never been updated because, again, governments are not in a rush to update it. They know that if they start introducing laws about it, people are going to demand stronger protection, so they prefer not to have this debate at all.

Then there's the question of what companies themselves do with their data. For the most part, we think that companies are just using our data to help us be customers and to make us happier. But the reality of how we increasingly see data being used is that, firstly, companies are very happy to profit off your data by selling it to other parties. So they don't see it as your data; they treat it as their data, off of which they can make money. Secondly, they mine this data and bring it together with other data to make decisions about you: they try to determine what kind of person you are, what kind of affluence you have, who your friends are, and what they should be advertising to you. But it gets far scarier when they start to determine whether you get access to a product or the terms under which you get access to it. For instance, they can charge you more because they think you're wealthier or you're more of a risk.

That's ultimately where the data industry is heading. Companies are not accumulating data just because they can sell it or because they think it's fun to have it. They want to understand you so that they can manipulate you. That's the whole model of advertising online: they want to model you, understand you, and change your behavior so that you make a purchase. And that's just for advertising. We've also seen how data is used when it comes to influencing elections. They want to understand you in order to advertise to you and push you to vote a certain way. It won't be long before they find ways to manipulate your actions, behaviors, and beliefs. That's the ultimate issue at stake.

There's also the issue of surveillance technology itself: there are companies that develop this technology, and there are governments around the world that want to purchase this technology with malicious intentions. Do companies that develop and sell surveillance technology have an ethical responsibility to ensure that it doesn't fall into the hands of authoritarian regimes?

I would begin by saying that they do have an ethical responsibility, but we've been monitoring the surveillance industry for a number of years now, and I can tell you that there is a very strong contingent of companies in that industry that have no ethical concerns with what they do for a living. They tend to fall into two categories. One consists of companies that are firm believers that they can enable any government, even the most despotic, to exert its powers in order to clamp down on threats to national security, although we would very much question whether the threats to national security identified by a despotic government are ones that we, as democratic nations, would recognize as threats. The other comprises companies that just don't care: they just want to make money in the process and believe that it doesn't matter who gets hurt along the way.

So an ethical responsibility is, unfortunately, a bit of a pipe dream for some of these players. Ultimately, we need legal accountability. We need governments not to allow for the export of these technologies to undemocratic regimes and to hold these companies to account when they are enabling surveillance that may not be illegal in the country they're exporting to, but is illegal in the country they're exporting from.

We aren't just discussing simple wiretapping technologies. Some of these technologies provide the capability to monitor the communications of an entire nation. They provide the ability to infiltrate networks and monitor the interactions of everybody on those networks. These are incredible capabilities that we doubt should even be possessed by democratic governments, yet these companies are making quite a lot of money selling this kind of technology to undemocratic governments, who have no qualms about acquiring these technologies.

But the fundamental challenge is to ensure that, if we create rules that regulate the export of this technology, we do not regulate the technology itself. Technology is often a dual-use good, in the sense that it has commercial as well as military applications, and it can be used for both oppressive and non-oppressive purposes. So we have to regulate incredibly carefully to ensure that we do not restrict the flow of technology, which can be quite empowering to people who happen to live in undemocratic nations.

One of the problems with regulating surveillance technology seems to be that, for the regulation to be effective, it would have to be implemented globally. Otherwise, if a company is forbidden from selling this technology in a certain country, it could simply relocate its business to a country in which it is legal. With this in mind, how do we create effective legislation on a global scale? 

In theory, this is a global-scale question, but in application, it really only applies to a specific set of actors. The bulk of the industry that enables mass surveillance exists in the United States, Germany, the United Kingdom, France, and Israel. Now, there are Russian companies that do this for a living, but they usually only sell to countries that operate a telecommunications system similar to Russia's, which are mostly Asian governments. China has a significant surveillance industry as well, but it already has a hard time selling its technology to governments, as many of them worry that China will have access to their information through Chinese surveillance technology. While China has been successful at selling across Eastern Africa, its ability to sell surveillance technology in a trustworthy manner has, to date, been quite limited. Israel is very successful at exporting surveillance technology, even to governments with which it has serious foreign policy concerns, which creates a fascinating dynamic wherein the Israeli government is granting companies the ability to sell technology to its foes.

To regulate all of this, all you really need to do is regulate the export of technology from the United States, from the United Kingdom, from France, and from Germany. And that is possible. In fact, the rules are available within the Wassenaar Arrangement, an agreement among major trading nations in the military and surveillance industries. In the agreement, all of these countries agreed on which technologies should not be exported without a license. So, in a sense, this regulation can occur. The problem is that it's not occurring particularly well, and it's not being monitored particularly closely when it happens in countries with poor human rights practices.

Meanwhile, when the countries involved in the Wassenaar Arrangement came up with their list of regulated technologies, they did identify mass surveillance technology that should indeed be regulated, but they also identified technology that intrudes upon systems: essentially, hacking technology. That control became very problematic because it started to interfere with non-military and non-surveillance applications, affecting the free flow of technologies that are actually good for people to know, use, and develop. So, even when they do regulate surveillance technology, they still make a mess of it.

Obviously, many of the ideas you have discussed can be quite disturbing to people. Many might feel powerless to prevent their personal information from being taken from them. Are there any reassuring sides to this issue? For instance, are there any laws that demonstrate that we are making progress in regulating surveillance technology?

I am actually incredibly optimistic about the future, in that it can't possibly get worse than it is right now. Over the past two years, we have seen the increasingly authoritarian nature of governments, we have seen industry be unethical, and most importantly, we have seen the promise of technology to improve our lives poisoned by uses like government hacking. For example, we have observed Russian and North Korean authorities, as well as other malicious entities, attack our infrastructure and weaken us as a result. Now, we finally recognize that the issue is not just that they might hack some part of our Internet. Instead, it is very tactical hacking. It is the undermining of democracy through hacking a few email accounts. It is the hacking of data sets that might change an algorithm that then decides different outcomes for people. That is the tactical threat model that we are facing.

The reason I am optimistic is that smart people who used to disagree about these issues are going to start agreeing that we need to fix something. People who work in the national security sector know that something is very wrong with the world when foreign intelligence agencies can interfere with an election. People who work in industry, who previously would have loved to accumulate vast amounts of data and store it just in case it might be useful in the future, are now targets of both government hacking and hacking by third parties whom they cannot identify. They are now waking up to the reality that they may need to minimize the amount of data they hold in order to minimize their liability. And then you have companies that do what Alphabet Inc. did last year, namely, fight back by declaring that they will not introduce vulnerabilities and by investing more and more in hardening their devices. This puts people more at ease about the security of their technologies.

As a result of all of that, I think the future is going to get a lot brighter. But the problem is that the people who want to see their agendas met at the expense of all reason are still the ones who have a lot of power. They are the ones we still have to disrupt.