
COVID-19 Contact Tracing App
Professor Richard Buckland, Director of the SECedu Australian Cybersecurity Education Network, provides some detail about the federal government’s proposed COVID-19 contact tracing app.


We know generally how the proposed tracing scheme will operate and understand it will be similar to the Singaporean system currently in use; however, as yet we do not know the details. The details are extremely important in systems such as this, which must balance the useful collection of data against user privacy. Below is what we currently understand to be the case.

Phones with the application installed continuously emit a faint radio beacon ID signal (using Bluetooth, which is inherently low power and faint). The signals can only be picked up by nearby receivers, typically other phones with the app installed that are emitting their own ID beacon signals. If your phone can pick up the ID beacon of another phone then presumably you are close to each other, so your phone logs the ID it received (along with the time and the power level of the received signal).
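To make the logging step concrete, here is a minimal sketch in Python. The names and storage format are invented for illustration; the real app's internal record format has not been published.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Encounter:
    public_id: str    # the ID beacon received from a nearby phone
    timestamp: float  # when the beacon was heard
    rssi_dbm: int     # received signal strength, a rough proxy for distance

@dataclass
class EncounterLog:
    encounters: list[Encounter] = field(default_factory=list)

    def record(self, public_id: str, rssi_dbm: int) -> None:
        """Log a nearby phone's beacon together with the time and signal power."""
        self.encounters.append(Encounter(public_id, time.time(), rssi_dbm))

# Example: our phone hears another phone's beacon at -67 dBm.
log = EncounterLog()
log.record("3f9ac41e02d57b80", rssi_dbm=-67)
```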

The government keeps a central database of the identities of everyone who has downloaded the COVID App and assigns each person a private ID. Your app never broadcasts your private ID but uses it to generate a public ID, which it broadcasts as your faint beacon signal. Your public ID is periodically changed, perhaps daily, but your private ID never varies. The app of everyone who is near you logs the public ID you broadcast and keeps it stored on their phone. If one of those people is later diagnosed with COVID-19, they give the authorities the list of all the public IDs they have logged, and the government uses secret cryptographic keys (which only it knows) to convert those into the corresponding private IDs, and so learns which people were close contacts of the sick person. It then alerts all those people.
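The exact cryptography has not been published, so the following is a purely illustrative sketch: assume each period's public ID is derived by keying a hash of the private ID with a server-held secret. The authority, holding that key and the register of private IDs, can then recompute every user's public ID for a period and match logged beacons back to identities. The function names, key, and period format below are all assumptions.

```python
import hmac
import hashlib

def public_id_for(secret_key: bytes, private_id: str, period: str) -> str:
    """Derive the rotating public ID a phone broadcasts during one period.
    Without secret_key, the output cannot be linked back to the private ID."""
    message = f"{private_id}|{period}".encode()
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()[:16]

def resolve(secret_key: bytes, register: list[str], logged_id: str, period: str):
    """Central authority: recompute every registered user's public ID for the
    period and match it against a logged beacon to recover the identity."""
    for private_id in register:
        if public_id_for(secret_key, private_id, period) == logged_id:
            return private_id
    return None

key = b"held-only-by-the-central-authority"            # the secret cryptographic key
register = ["user-0001", "user-0002"]                  # central identity database
beacon = public_id_for(key, "user-0002", "2020-04-20") # what a phone broadcasts
print(resolve(key, register, beacon, "2020-04-20"))    # -> user-0002
```

Note that this same resolution step could be performed on any captured beacon by anyone who obtained the key and the register, which is the heart of the second weakness discussed below.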

There are weaknesses with this approach which might allow privacy breaches, perhaps serious ones. Here are three examples:

1. If you are diagnosed with COVID-19 then the identity of everyone your phone has logged, however fleetingly, becomes available to the authorities, along with metadata about the duration of contact and the signal strength. This reveals as much about others as it reveals about you. It might, for example, reveal your neighbour having an affair next door (Bluetooth can pass through walls to some extent). It might reveal a passer-by outside your house exercising more than once a day.

2. If the central database and keys are hacked, erroneously leaked, or deliberately shared with other government agencies or countries, then whoever holds them could determine the identity of every person from their app’s beacon signal. So, for example, by putting a cheap beacon detector outside a brothel you would learn the identity of every person who entered. Or you could program a drone to hover directly above and follow a particular individual. Or you could couple a network of Bluetooth sensors with the surveillance cameras throughout a city and know the identity of the people in the footage.

3. If the data were shared with police or other agencies, or if police or agencies had the power to demand app users hand over the logs collected on their phones, the consequences could be far-reaching. For example, this could be used to: force reporters to reveal the identities of their sources; detect whistle-blowers; identify everyone who participates in mass protests; unmask police informants; force people to self-incriminate and reveal breaches of (sometimes unclear and confusing) lockdown or self-isolation rules; identify those in hiding from abusive partners; detect politicians leaking to media or other parties; or indicate that sensitive commercial negotiations are underway and with whom. The uncertainty and concern ordinary citizens might feel about such things may well encourage people to avoid using the app or taking their phone with them, and so undermine the potential benefits of comprehensive contact logging.

There is currently much discussion amongst health experts about the extent to which a technological solution such as an app can contribute to a real-world solution. It is important to understand the data about the costs and the benefits before embarking on an intrusive solution. To what extent do people contract the virus from being near anonymous others, versus contact with those they know, or from touching contaminated surfaces? The Singaporean data could shed light on this.

What privacy safeguards are most important if such an app were used?

1. Explicit prohibition against subsequent scope creep: All the data should be used only for COVID contact tracing, and this should be explicitly legally clear and enforced, and not able to be subsequently undone by ministerial regulation. The data should not be given to police for crime investigations, or to planning departments, or to medical researchers, or used for other purposes proposed later. Australian privacy history has shown that once such a pool of data exists, many agencies will want to get hold of it to use for things they regard as important.

2. Genuine opt in: It should not be possible to discriminate against people based on whether or not they have the app, and the app data should not be able to be demanded by police or used as evidence in a court of law. People should be able to opt out of the system at any time, and upon doing so all their data, including all centrally held records identifying them, should be securely and irrevocably destroyed.

3. Time limit and secure final deletion: The COVID App and its data should have an end date after which all data will be securely destroyed, with no copies retained. The system should be used only against this virus, not retained and extended for use against, say, influenza. There would of course be no problem with starting a new app for such a purpose.

Societal acceptance: if the COVID App were compulsory, I suspect many Australians would try to avoid it, as that is not the sort of government to which they would want to entrust their private life and data. But if the app were voluntary, with the appropriate safeguards, then I believe we are a great nation of volunteers willing to help others in times of need.

Finally, it is worth thinking about the merits and risks of writing our own software rather than joining in and using a system already developed and tested by a wider group. Cyber security history has repeatedly shown there is usually merit in using an existing system (such as the Singaporean one, or one developed by the tech giants) rather than “rolling your own” new untested system, unless you are doing something quite new and different. Software teething problems are inevitable, and the stakes are currently high.

It is worth noting that Google and Apple are currently collaborating on an inbuilt phone tracing system which removes the main weakness in the Singaporean approach: the need for a secure central database. In practice such a central database would be extremely attractive to criminals and other nations, and almost impossible to confidently secure against coding bugs and mistakes, or insider or cyber attack.
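For contrast, here is a simplified sketch of the decentralised idea. The real Apple/Google scheme derives rolling identifiers from an on-device daily key using HKDF and AES; the HMAC construction below is a stand-in for illustration, and the interval length and names are assumptions.

```python
import hmac
import hashlib
import secrets

def rolling_id(daily_key: bytes, interval: int) -> str:
    """Derive the beacon broadcast during one ~10-minute interval of the day.
    The daily key is generated on the phone and never leaves it unless its
    owner is diagnosed and chooses to publish it."""
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).hexdigest()[:16]

# Phone A generates its own key locally; there is no central identity register.
key_a = secrets.token_bytes(16)

# Phone B merely logs the beacons it hears during the day.
heard = {rolling_id(key_a, 42)}

# Only if A is diagnosed and publishes key_a can B check, locally on the
# handset, whether any beacon it heard came from A. No identities are ever
# sent to a central database.
exposed = any(rolling_id(key_a, i) in heard for i in range(144))
print(exposed)  # True
```

Because matching happens on the handset, a breached server reveals nothing about who met whom; the trade-off is that the central health authority learns correspondingly less.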


Professor Richard Buckland
Professor of cyber security, cyber terror and cyber war
Director, SECedu Australian Cybersecurity Education Network
School of Computer Science and Engineering
UNSW Sydney

Media enquiries: 0413 581 603 (Lesa de Leau, General Manager, SECedu)
