Industry and academic researchers present framework for “conscious” design of mixed reality technologies to safeguard user privacy, security, and safety

Emerging technologies like augmented and mixed reality have the potential to transform the way we experience the world and interact with each other. But like most technologies that, in their infancy, opened up exciting new avenues of engagement — the web, mobile phones, social networks, and so on — mixed reality also has a potential dark side. And it’s one that is made even more fraught by the interplay between the physical and virtual world. 

To encourage developers of these emerging technologies to adopt a security and privacy mindset early on, Allen School professors Tadayoshi Kohno and Franziska Roesner organized a summit last November that brought together representatives of academia and industry. The goal was twofold: to identify potential threats to user privacy, security, and safety posed by augmented and mixed reality (AR/MR), and to develop a framework to guide designers and developers in addressing those threats.

With the birth of previous technologies, the world mostly charged full steam ahead, with security and privacy relegated to playing catch-up. This time, according to Roesner, there is widespread interest in addressing potential issues proactively rather than reactively.

“Augmented and mixed reality technologies are unique in their ability to impact a user’s perceptions of and interactions with the physical world compared to other technologies, so the associated risks are fundamentally different from those technologies, too,” explained Roesner, who co-directs the Security and Privacy Research Laboratory at the University of Washington with Kohno. “Last fall, we got together with other security researchers and industry representatives to explore a set of questions that creators of AR/MR technologies should be asking to address user privacy, security, and safety from the start.”

Last week, summit contributors released a report laying out a comprehensive set of issues that should be considered when developing mixed reality hardware, platforms, and applications. The report starts by acknowledging what could go right with mixed reality technologies: desirable features and functionality that will benefit users and society by enabling people to overcome limitations of time and space, in ways that are accessible to everyone regardless of physical, financial, or other capacities.

But to realize this vision, the authors note, researchers and the industry have to work together to address what could go wrong. Examples range from exposure to undesirable content, to excessive or overly invasive advertising, to actual physical or psychological harm. Unlike other technologies that only capture snippets of a person’s life here and there — a credit card company knows your shopping habits, a health insurance company sees what tests your doctor runs — mixed reality platforms have the potential to build a much more complete picture of a user.

“The potential of MR devices to collect and infer incredibly sensitive information about users compels us to develop platforms that give users the ability to get the benefits of MR without having to hand over a deeply personal picture of themselves and their environment,” said co-author Blair MacIntyre, a principal research scientist at Mozilla and professor at Georgia Institute of Technology’s School of Interactive Computing. “Once this data is out there, it can’t be pulled back, and MR should be available to everyone, not just those willing to ignore the potential downsides of sharing such information.”

Industry has a strong motivation to get these issues right, even at this early stage. As the report points out, just one negative incident that results in actual harm to a user or users could represent an “existential threat” to the industry by discouraging adoption and prompting well-intentioned but overly prescriptive regulation that could stifle future innovation and growth. There are past examples to draw upon, such as the outcry against autonomous vehicles — and the companies developing them — following the death of a pedestrian. In that case, the software in one vehicle failed to correctly register the person’s presence in the road. The incident prompted a federal investigation and compelled several companies to temporarily put the brakes on testing their vehicles on public roadways.

The authors of the report propose that designers apply a threat-modeling approach often used in security and privacy research. This enables stakeholders to systematically consider which assets require protection in the system, along with how and to which adversaries that system might be vulnerable. The report offers a “fill in the blanks” framework to aid discussion of the potential risks and harms — along with the potential benefits — of mixed reality platforms and applications. The goal is to support designers, engineers, researchers, and policymakers as they work through these issues together, to strike the right balance between user security, functionality, and the industry’s ability to innovate.
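
To make the report’s “fill in the blanks” framing concrete, here is a minimal sketch in Python of what such a template might look like in practice. The field names and the example scenario are illustrative assumptions, not language taken from the report.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModelEntry:
    """One filled-in statement: an asset, an adversary, a potential
    harm, and the benefit the underlying capability provides."""
    asset: str                  # what needs protecting, e.g. a sensor feed
    adversary: str              # who might exploit it
    harm: str                   # what could go wrong for the user
    benefit: str                # why the capability exists at all
    mitigations: list[str] = field(default_factory=list)

    def summary(self) -> str:
        return (f"{self.adversary} could misuse {self.asset} to cause "
                f"{self.harm}, even though it enables {self.benefit}.")

# Hypothetical example: continuous camera access on an MR headset.
entry = ThreatModelEntry(
    asset="the headset's always-on camera feed",
    adversary="an over-permissioned third-party app",
    harm="covert surveillance of the user's home",
    benefit="world-anchored virtual content",
    mitigations=["per-app sensor permissions", "higher-level recognizer APIs"],
)
print(entry.summary())
```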

One of the big issues that the group considered using the threat-modeling approach is how to manage the interaction of virtual content overlaid on the physical world. The group considered questions surrounding the appearance of undesirable content, whether in the form of advertising plastered everywhere, content that is age-inappropriate for children, or content that is disturbing or harassing to an individual. Other questions arose regarding whether virtual content would be allowed to block real-world content in ways that the user would find disruptive, or if content from multiple sources would be permitted to interfere with each other.
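
One way a platform might operationalize questions like these is with an output policy layer that vets virtual content before it is rendered. The sketch below is a hypothetical illustration of that idea; the types, flags, and rules are invented for this example and do not correspond to any existing MR platform’s API.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    source_app: str                 # which application produced the content
    occludes_physical: bool         # would it visually cover a real object?
    overlaps_safety_critical: bool  # e.g. a road, stairwell, or oncoming car

def may_render(obj: VirtualObject, user_blocked_sources: set[str]) -> bool:
    """Apply simple output policies before virtual content is drawn."""
    if obj.source_app in user_blocked_sources:
        return False  # the user has opted out of content from this source
    if obj.occludes_physical and obj.overlaps_safety_critical:
        return False  # never let virtual content hide safety-critical objects
    return True

# Example: an ad from a blocked source is suppressed before rendering.
ad = VirtualObject("adware.example", occludes_physical=True,
                   overlaps_safety_critical=False)
print(may_render(ad, user_blocked_sources={"adware.example"}))  # False
```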

There is also the matter of who has access and can alter content that someone else has created in virtual space. In one high-profile example, an augmented reality artwork titled “AR Balloon Dog” was “vandalized” — or rather, a copy was vandalized and superimposed over the original’s geolocation — by a group of artists protesting the notion of corporations monetizing digital art. The original was created by artist Jeff Koons as part of a 2017 collaboration with Snapchat. The incident raised a variety of questions regarding the ownership of virtual objects, whether they should be treated the same as their real-world analogs in physical space, and whether the act even constituted vandalism, given that the virtual object in question was a copy of another virtual object. 

Members of the UW lab have explored this issue before, inspired in part by the saga of AR Balloon Dog. Last year, Roesner and Kohno worked with undergraduate researcher Kimberly Ruth to release ShareAR, a toolkit that enables developers to build fully functional, multi-user AR applications that permit secure content sharing. That was the start of a more robust conversation around how to design platforms and apps that provide users a measure of control, but the researchers acknowledge that there are issues that need to be resolved that go beyond technology.
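
As a rough illustration of the kind of access-control logic a toolkit like ShareAR encapsulates, consider the sketch below. The class and method names are hypothetical; they are not ShareAR’s actual API.

```python
from enum import Enum, auto

class Permission(Enum):
    VIEW = auto()
    EDIT = auto()

class SharedARObject:
    """A shared virtual object with per-user permissions.
    Hypothetical; not ShareAR's actual interface."""
    def __init__(self, owner: str):
        self.owner = owner
        self._acl: dict[str, set[Permission]] = {
            owner: {Permission.VIEW, Permission.EDIT}
        }

    def share(self, granter: str, grantee: str, perm: Permission) -> None:
        # Only users who can edit the object may share it with others.
        if Permission.EDIT not in self._acl.get(granter, set()):
            raise PermissionError(f"{granter} may not share this object")
        self._acl.setdefault(grantee, set()).add(perm)

    def can(self, user: str, perm: Permission) -> bool:
        return perm in self._acl.get(user, set())

# Alice shares her virtual sculpture with Bob, view-only.
sculpture = SharedARObject(owner="alice")
sculpture.share("alice", "bob", Permission.VIEW)
print(sculpture.can("bob", Permission.VIEW))   # True
print(sculpture.can("bob", Permission.EDIT))   # False
```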

“In the physical world, if someone vandalized a painting or structure of historical significance, that person would get arrested. A person who posts offensive or trademarked content online would have that content moderated,” noted Roesner. “But those frameworks don’t exist — not yet, anyway — in the virtual world. Many questions around jurisdiction and enforcement, along with issues regarding ownership, attribution or trademark, are yet to be answered.”

There is also the matter of translating societal norms and the types of behavior we deem acceptable in our day-to-day interactions from the physical world to the virtual one. According to Kohno, it is a significant challenge — one made all the more complicated by the fact that the virtual world transcends the physical world’s cultural and geopolitical boundaries.

“This discussion raises a host of issues around how to apply rules that are ingrained in our social fabric into virtual space,” noted Kohno. “In the physical world, it would not be acceptable for someone to put a ‘kick me’ sign on someone’s back. That would violate our notion of personal space and autonomy. But how do we deal with that in a virtual world?”

In this and similar scenarios, Kohno explained, designers might consider what limitations or tools they can provide to prevent offensive content from appearing in the first place, or to enable users to remove content they find offensive when it does appear.

“Perhaps I am not allowed to alter another person’s avatar, because we have decided that is a core tenet of virtual space. Or perhaps we are friends and so you give me permission to alter your avatar, but the platform’s built-in controls allow you to remove alterations that you don’t like,” Kohno mused. “At the summit we discussed a ‘Bill of Rights’ governing digital spaces, to articulate what people can expect to be able to do and expect to have done to them. What we’ve done with this report is try to provide the scaffolding for the industry to consider the privacy and security issues raised by these platforms and to make conscious decisions about what rules and safeguards they need to incorporate into their design.”
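
Kohno’s hypothetical maps naturally onto a consent-plus-revocation model. The sketch below shows one way such built-in controls could be expressed in code; it is purely illustrative, not a description of how any real platform works.

```python
class Avatar:
    """Consent-plus-revocation controls for avatar alterations.
    Purely illustrative; not any real platform's implementation."""
    def __init__(self, owner: str):
        self.owner = owner
        self.trusted: set[str] = set()         # friends permitted to alter
        self.alterations: dict[str, str] = {}  # alteration id -> author

    def alter(self, author: str, alteration_id: str) -> bool:
        # Core tenet: no one alters an avatar without the owner's consent.
        if author != self.owner and author not in self.trusted:
            return False
        self.alterations[alteration_id] = author
        return True

    def remove_alteration(self, requester: str, alteration_id: str) -> bool:
        # Built-in control: the owner can always remove what they don't like.
        if requester == self.owner:
            return self.alterations.pop(alteration_id, None) is not None
        return False
```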

A Bill of Rights, usable controls, and the aforementioned threat modeling are just a few of the potential solutions participants considered during the summit. Other avenues for exploration include early identification of trustworthy and responsible entities, the development of industry standards, developer support along the lines of ShareAR that makes it easy for application developers to build security into their products, and — when the time is right — thoughtful regulatory and policy frameworks that will underpin user trust while reflecting the richness and complexity of augmented and mixed reality.

“The summit provided an opportunity for stakeholders to come together and have a collaborative conversation and forge a common language in which we can discuss these issues,” Roesner said. “With this report, we are taking a first step toward enabling a more holistic approach to important questions about security and privacy within this brave new world that is being created.”

The Security and Privacy Research Laboratory was an early proponent of mixed reality security and privacy. In 2012, Kohno and Roesner co-authored a paper with security researcher David Molnar, then at Microsoft Research, laying out the privacy and security research challenges and new opportunities associated with emerging AR technologies. Two years later, Kohno and Roesner contributed to a primer on AR released by the UW’s Tech Policy Lab that was shared widely with policymakers to inform them about the still-nascent industry, its potential benefits, and associated pitfalls in relation to privacy, distraction, and discrimination. That was followed by a series of papers in which the researchers applied what they had learned to supporting AR privacy and security more directly.

“A principled approach to user privacy and security will be a catalyst for innovation and widespread adoption, not an obstacle,” concluded Roesner. “We may still be playing catch-up when it comes to more mature technologies, but with AR/MR, we have an opportunity to build a virtual world that is safe and enjoyable for everyone almost from the ground up.”

The 2019 Industry-Academia Summit on Mixed Reality Security, Privacy, and Safety was co-funded by the UW Security and Privacy Research Lab and the UW Reality Lab. Read the full report here.