If you’ve been to even a single privacy conference, you’ve probably come across Helen Nissenbaum’s idea of “contextual integrity” (read her book, “Privacy in Context,” if you haven’t). It has become fairly core to privacy thinking, if a little academic, and underpins the idea of “fairness” that forms one of the core principles of the GDPR and other major privacy laws around the globe.
Quite simply, Nissenbaum argues, there are cultural norms about the way we expect to receive and consume information. When information flows — personal information flows, particularly — fall outside those norms, organizations are ethically bound to make data subjects aware of what’s going on. Essentially, if you think people might be surprised by the way you’re processing their data, you probably shouldn’t do it, regardless of whether it’s “legal” by the letter of the law.
Usually, contextual integrity comes up in discussions of terms and conditions, where companies may try to bury permissions in a long and boring document they know most users will simply click through. Companies have famously had people sign over their souls and first-born children to illustrate just what can be hidden in those blocks of text. Ethical companies clearly avoid tricking their customers into things like that; it’s hardly a way to build trust and brand loyalty.
I was very interested, though, in a study titled “Are You Really Muted?: A Privacy Analysis of Mute Buttons in VCAs,” which was published in the journal of the annual Privacy Enhancing Technologies Symposium, Proceedings on Privacy Enhancing Technologies (known as PoPETs). In it, the authors identify something I think most users would describe as a real surprise: The mute button on your video conferencing app doesn’t actually stop audio data from being transmitted.
This gets at an important point about privacy engineering and how to do it right.
The authors found the Cisco Webex video conferencing app sends a packet of audio data back to its servers once a minute, and that they were able to intercept that data and somewhat reliably figure out whether you’re vacuuming when you’re supposed to be on a video conference. In the grand scheme of things, only a very technically proficient person would be able to pull that off, but that doesn’t mean there’s no potential for abuse. And it certainly doesn’t mean companies shouldn’t be clear and transparent about it.
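The paper’s actual analysis relies on machine-learning classifiers trained on intercepted telemetry, which is well beyond a blog post. But purely as a toy illustration of why even coarse audio statistics can leak what’s happening in the room (every function name and threshold below is hypothetical, not from the study), consider how little it takes to separate a steady roar from silence:

```python
import math

def rms(frame):
    """Root-mean-square energy of one frame of audio samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def classify_activity(frames, loud=0.3, quiet=0.05):
    """Toy heuristic: sustained high energy suggests steady background
    noise (say, a vacuum); near-zero energy suggests silence.
    Thresholds are invented for illustration only."""
    avg = sum(rms(f) for f in frames) / len(frames)
    if avg > loud:
        return "steady background noise"
    if avg < quiet:
        return "silence"
    return "intermittent sound"
```

The point isn’t that an eavesdropper would use anything this crude; it’s that if energy averages alone can hint at what a muted microphone is picking up, richer telemetry can reveal far more.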
Rather, I think this gets at user expectations and how what might seem completely unobtrusive to a software engineer could come into conflict with those expectations and result in a real surprise — and lack of contextual integrity.
For example, as the study’s authors show, just about everyone assumes that the mute button turns their microphone off. It’s a fairly binary expectation: Mute means no one can hear, unmute means everyone can hear. And when people think no one can hear, that likely means servers and software can’t “hear” either.
Maybe you’ve been using Zoom, made some noise while muted, and gotten that message alerting you that you’re muted and people can’t hear you? That should be a heads-up that the microphone is still receiving and transmitting data, but that’s not necessarily the sort of thing you should have to intuit on your own. In fact, the study found that 70% of users believe that “the mute button blocks the transmission of microphone data or disables the microphone altogether,” so perhaps not every user is thinking all that deeply about the messaging they’re presented with.
In reality, “All VCAs actively query the microphone when the user is muted, and they might have legitimate purposes,” the study’s authors say. And they conclude that, “The privacy policies of these services need to be explicit about microphone access, which is not currently the case.”
I tend to agree with them. While not many people are likely reading every word of a privacy notice, when expectations are this far apart from reality, the company has a responsibility to make things extra clear. While a software engineer might be thinking, “What’s the big deal? Nothing’s being recorded and no one’s listening,” a privacy professional should be thinking, “Whoa! It’s kind of a big deal if users think the microphone is off, but it’s actually transmitting data…”
Privacy engineering and privacy by design bring these two thoughts together so that software engineers and product managers have the privacy understanding to see that seemingly innocuous actions being taken by the app might in reality need a few words of explanation. Perhaps a dialog box for the user the first few times they hit the mute button: “Hi there! No one can hear you, but our software is monitoring your microphone to make sure you’re alerted if you start talking and maybe don’t know you’re on mute!”
I’m not a PR professional or a user-interface guru, but I know when I need to bring one into a project. For VCAs, this paper suggests they could use a little help with transparency. For everyone else, it’s a good reminder that we should always encourage product people to ask two basic questions:
- What does the user expect to happen here?
- Would they be surprised to find out what’s really happening?
If the answer to #2 is “yes,” it might be time to go back to the drawing board.
Photo by Christina @ wocintechchat.com on Unsplash