Designing Resources to Help Kids Learn about Privacy Online @ IDC 2018

What types of educational resources would help elementary school-age children learn about privacy online? Below I share findings and recommendations from a paper I co-wrote with Jessica Vitak, Marshini Chetty, Tammy Clegg, Jonathan Yang, Brenna McNally, and Elizabeth Bonsignore. I’ll present this paper at the 2018 ACM Conference on Interaction Design and Children (IDC).

What did we do? Children spend hours going online at home and school, but they receive little to no education about how going online affects their privacy. We explored the power of games and storytelling as two mechanisms for teaching children about privacy online.

How did we do it? We held three co-design sessions with Kidsteam, a team of children ages 7-11 and adults who meet regularly at the University of Maryland to design new technologies. In session 1, we reviewed existing privacy resources with children and elicited design ideas for new resources. In session 2, we iterated on a conceptual prototype of a mobile app inspired by the popular game Doodle Jump. Our version, which we called Privacy Doodle Jump, incorporated quiz questions related to privacy and security online. In session 3, children developed their own interactive Choose Your Own Adventure stories related to privacy online.

What did we find? We found that materials designed to teach children about privacy online often instruct children on “do’s and don’ts” rather than helping them develop the skills to navigate privacy online. Such straightforward guidelines can be useful when introducing children to complex subjects like privacy, or when working with younger children. However, focusing on lists of rules does little to equip children with the skills they need to make complex, privacy-related decisions online. If a resource presents children with scenarios that resonate with their everyday lives, children may be more likely to understand and absorb its message. For example, a child might more easily absorb a privacy lesson from a story about another child who uses Instagram than from a game featuring a fictional character in an imaginary world.

What are the implications of this work?

  • First, educational resources related to privacy should use scenarios that relate to children’s everyday lives. For instance, our Privacy Doodle Jump game included a question that asked a child what they would do if they were playing Xbox and saw an advertisement pop up that asked them to buy something.
  • Second, educational resources should go beyond listing do’s and don’ts for online behavior and help children develop strategies for dealing with new and unexpected scenarios they may encounter. Because context is such an important part of privacy-related decision making, resources should facilitate discussion between parents or teachers and children rather than simply tell children how to behave.
  • Third, educational resources should showcase a variety of outcomes of different online behaviors instead of framing privacy as a black-and-white issue. For instance, privacy guidelines may instruct children to never turn on location services, but the wisdom of that decision depends on which app is requesting the data. Turning on location services in Snapchat may pinpoint one’s house to others (a potential negative), while turning on location services in Google Maps enables real-time navigation (a potential positive). Exposing children to a variety of positive and negative consequences of privacy-related decision making can help them develop the skills they need to navigate uncharted situations online. (A rough sketch of what such a multi-outcome scenario might look like in code appears after this list.)
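The third recommendation lends itself to a concrete illustration. Below is a minimal sketch (mine, not from the paper) of how an educational resource might encode one scenario with several context-dependent outcomes and turn them into discussion prompts for parents or teachers; every name and string here is hypothetical.

    # A hypothetical data model for a multi-outcome privacy scenario. Nothing
    # here comes from the paper; it only illustrates the design recommendation.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        choice: str       # what the child decides to do
        consequence: str  # one plausible result of that choice
        valence: str      # "positive", "negative", or "mixed"

    @dataclass
    class Scenario:
        prompt: str
        outcomes: list[Outcome]

        def discussion_prompts(self) -> list[str]:
            """Turn each outcome into a question a parent or teacher can discuss."""
            return [
                f"Suppose you {o.choice}: {o.consequence} How would you feel about that?"
                for o in self.outcomes
            ]

    location_services = Scenario(
        prompt="An app asks to use your location. What do you do?",
        outcomes=[
            Outcome("turn it on in a map app",
                    "you get real-time directions.", "positive"),
            Outcome("turn it on in a photo-sharing app",
                    "others may see where your house is.", "negative"),
        ],
    )

    for question in location_services.discussion_prompts():
        print(question)

The point of the structure is that no outcome is marked as the single correct answer; the valence field lets a resource surface both benefits and risks, prompting conversation rather than dictating behavior.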

Read the IDC 2018 paper for more details!

Citation: Priya Kumar, Jessica Vitak, Marshini Chetty, Tamara L. Clegg, Jonathan Yang, Brenna McNally, and Elizabeth Bonsignore. 2018. Co-Designing Online Privacy-Related Games and Stories with Children. In Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC ’18). ACM, New York, NY, USA, 67-79. DOI: https://doi.org/10.1145/3202185.3202735

Parts of this entry were cross-posted on the Princeton HCI blog.

Kids and Privacy Online @ CSCW 2018

How do elementary school-aged children conceptualize privacy and security online? Below I share findings and recommendations from a paper I wrote with co-authors Shalmali Naik, Utkarsha Devkar, Marshini Chetty, Tammy Clegg, and Jessica Vitak. I’ll present this paper at the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW).

What did we do? Children under age 12 increasingly go online, but few studies examine how children perceive and address privacy and security concerns. Using a privacy framework known as contextual integrity to guide our analysis, we interviewed children and their parents to understand how children conceptualize privacy and security online, what strategies they use to address any risks they perceive, and how their parents support them when it comes to privacy and security online.

How did we do it? We interviewed 26 children ages 5-11 and 23 parents from 18 families in the Washington, DC metropolitan area. We also walked through a series of hypothetical scenarios with children, which we framed as a game. For example, we asked children how they imagined another child would respond when receiving a message from an unknown person online.

What did we find? Children recognized how some components of privacy and security play out online, but those ages 5-7 had gaps in their knowledge. For example, younger children did not seem to recognize that sharing information online makes it visible in ways that differ from sharing information face-to-face. Children largely relied on their parents for support, but parents generally did not feel their children were exposed to privacy and security concerns. They felt such concerns would arise when children were older, had their own smartphones, and spent more time on social media.

What are the implications of this work? As the lines between offline and online increasingly blur, it is important for everyone, including children, to recognize (and remember) that using smartphones, tablets, laptops, and in-home digital assistants can raise privacy and security concerns. Children absorb some lessons through everyday use of these devices, but parents have an opportunity to scaffold their children’s learning. Younger children may also be more willing than teenagers to accept advice from their parents. Parents would benefit from educational resources or apps designed to teach these concepts to younger children. The paper explains how the contextual integrity framework can inform the development of such resources.

Read the CSCW 2018 paper for more details!

Citation: Priya Kumar, Shalmali Milind Naik, Utkarsha Ramesh Devkar, Marshini Chetty, Tamara L. Clegg, and Jessica Vitak. 2017. ‘No Telling Passcodes Out Because They’re Private’: Understanding Children’s Mental Models of Privacy and Security Online. Proc. ACM Hum.-Comput. Interact. 1, CSCW, Article 64 (December 2017), 21 pages. DOI: https://doi.org/10.1145/3134699

Parts of this entry were cross-posted on the blogs of UMD’s Privacy Education and Research Laboratory (PEARL) and Princeton HCI.

Privacy Policies, PRISM, and Surveillance Capitalism in MaC

I recently published my first journal article in a special issue of Media and Communication (MaC) on Post-Snowden Internet Policy. (Unfortunately, the editors misgendered me in the editorial.)

In my article, Corporate Privacy Policy Changes during PRISM and the Rise of Surveillance Capitalism, I analyzed the privacy policies of 10 internet companies to explore how company practices related to users’ privacy shifted over the past decade.

What did I do? The Snowden disclosures in 2013 re-ignited a public conversation about the extent to which governments should access data that people generate in the course of their daily lives. Disclosure of the PRISM program cast a spotlight on the role that major internet companies play in facilitating such surveillance. In this paper, I analyzed the privacy policies of the nine companies in PRISM, plus Twitter, to see how companies’ data management practices changed between their joining PRISM and the world learning about PRISM. I drew on my experience with the Ranking Digital Rights research initiative and specifically focused on changes related to the “life cycle” of user information — that is, the collection, use, sharing, and retention of user information.

How did I do it? I collected each company’s privacy policy from four points in time: before and after the company joined PRISM, and before and after the Snowden revelations. Google and Twitter provide archives of their policies on their websites; for the other companies, I used the Internet Archive’s Wayback Machine to locate the policies. I logged the changes in a spreadsheet and classified each as substantive or non-substantive. I then dug into the substantive changes and categorized them based on how they affected the life cycle of user information.
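As a rough illustration of the collection step, here is a minimal sketch, assuming the Wayback Machine’s public availability API, of how one might locate the archived snapshot of a policy closest to each date of interest. This is not the tooling I used for the paper; the policy URL and dates below are purely illustrative placeholders.

    # A minimal sketch of locating archived policy snapshots via the Internet
    # Archive's Wayback Machine availability API. The policy URL and dates are
    # illustrative placeholders, not the actual data from the study.
    from typing import Optional

    import requests

    WAYBACK_API = "https://archive.org/wayback/available"

    def closest_snapshot(url: str, timestamp: str) -> Optional[str]:
        """Return the archived snapshot URL closest to timestamp (YYYYMMDD), if any."""
        resp = requests.get(WAYBACK_API, params={"url": url, "timestamp": timestamp})
        resp.raise_for_status()
        closest = resp.json().get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None

    # Four points in time per company: before/after joining PRISM and
    # before/after the June 2013 Snowden disclosures (dates here are made up).
    policy_url = "https://www.example.com/privacy"
    for label, date in [
        ("pre-PRISM", "20070101"),
        ("post-PRISM", "20090101"),
        ("pre-Snowden", "20130501"),
        ("post-Snowden", "20131201"),
    ]:
        print(label, closest_snapshot(policy_url, date))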

What did I find? Seventy percent of the substantive changes fell into two areas: the management of user information, and data sharing and tracking. The changes related to management of user information provided additional detail about what companies collect and retain. The changes related to data sharing and tracking offered more information about companies’ targeted advertising practices, and these often appeared to give companies wider latitude to track users and share user information with advertisers. In short, while these policy changes disclosed more details about company practices, the practices themselves appeared to subject users to greater tracking for advertising purposes.

What are the implications of this work? Collectively, these privacy policy changes offer evidence that suggests several of the world’s largest internet companies operate according to what business scholar Shoshana Zuboff calls the logic of surveillance capitalism. Participating in PRISM did not cause surveillance capitalism, but this analysis suggests that the PRISM companies further enmeshed themselves in it over the past decade. The burgeoning flow of user information into corporate servers and government databases exemplifies what legal scholar Joel Reidenberg calls the transparent citizenry, where people become visible to institutions, but those institutions’ use of their data remains obscure. This analysis serves as a reminder that public debates about people’s privacy rights in the wake of the Snowden disclosures must not ignore the role that companies themselves play in legitimizing surveillance activities under the auspices of creating market value.

Read the journal article (PDF) for more details!

Citation: Kumar, P. (2017). Corporate Privacy Policy Changes during PRISM and the Rise of Surveillance Capitalism. Media and Communication, 5(1), 63-75. doi:10.17645/mac.v5i1.813

Google Glass: Not Scary, Worth Discussing

Instantaneous photographs…have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’

Such is our fear of Google Glass, right? Taking pictures everywhere, sharing information with everyone, no more keeping secrets. But the above quote predates Glass by 123 years. Samuel Warren and Louis Brandeis penned these words shortly after the Kodak camera arrived.

I tried Glass yesterday, and I give Google one thing: Glass isn’t scary (yet). Users can’t do anything with Google Glass they can’t already do with a smartphone. Google Glass just lets people do some of the same things hands-free. A tap to the device’s right arm activates Glass. A small display appears above the right eye, not in the line of sight. It shows the time and the activation command, “OK Glass.” Speak these magic words and a menu with four options appears: Google, take a picture, look up directions, and record a video. I used the device on a guest account, so I couldn’t send emails/text messages or post information to social networks.

I’m impressed with the technology. Voice recognition could use some work. The audio, though, is clever: a Google staffer explained that you hear the device not through sound waves entering your ear but through vibrations conducted through your skull. Still, I wouldn’t buy Glass because I see no need for it. It’s meant to let you live life assured that you won’t miss anything important, but I’ve no need for that much convenience or connection.

Regardless, I hope the paranoia around it continues. Not because Glass is bad, but because seeing a camera on someone else’s face reminds us that the default settings for information in society have shifted. This isn’t new (see the quote above), but the pace of technological advance means we can do a lot more with the information we collect. How do we as a society feel about that? Google Glass can’t post a running first-person video feed of your life to the web (yet). But much of what we fear already exists. Someone might record video of you and post it online without your realizing it. Local governments have set up security cameras in parks. And the police use facial recognition software to mine enormous databases of images. So, let’s keep voicing our concerns about recording and sharing photos and videos. Just don’t pretend it’s all Glass’s fault.