Category: Publications

Designing Resources to Help Kids Learn about Privacy Online @ IDC 2018

What types of educational resources would help elementary school-age children learn about privacy online? Below I share findings and recommendations from a paper I co-wrote with Jessica Vitak, Marshini Chetty, Tammy Clegg, Jonathan Yang, Brenna McNally, and Elizabeth Bonsignore. I’ll present this paper at the 2018 ACM Conference on Interaction Design and Children (IDC).

What did we do? Children spend hours online at home and at school, but they receive little to no education about how this activity affects their privacy. We explored the power of games and storytelling as two mechanisms for teaching children about privacy online.

How did we do it? We held three co-design sessions with Kidsteam, a team of children ages 7-11 and adults who meet regularly at the University of Maryland to design new technologies. In session 1, we reviewed existing privacy resources with children and elicited design ideas for new resources. In session 2, we iterated on a conceptual prototype of a mobile app inspired by the popular game Doodle Jump. Our version, which we called Privacy Doodle Jump, incorporated quiz questions related to privacy and security online. In session 3, children developed their own interactive Choose Your Own Adventure stories related to privacy online.

What did we find? We found that materials designed to teach children about privacy online often instruct children on “do’s and don’ts” rather than helping them develop the skills to navigate privacy online. Such straightforward guidelines can be useful when introducing children to complex subjects like privacy, or when working with younger children. However, focusing on lists of rules does little to equip children with the skills they need to make complex, privacy-related decisions online. If a resource presents children with scenarios that resonate with their everyday lives, children may be more likely to understand and absorb its message. For example, a child might more easily absorb a privacy lesson from a story about another child who uses Instagram than from a game featuring a fictional character in an imaginary world.

What are the implications of this work?

  • First, educational resources related to privacy should use scenarios that relate to children’s everyday lives. For instance, our Privacy Doodle Jump game included a question that asked a child what they would do if they were playing Xbox and saw an advertisement pop up that asked them to buy something.
  • Second, educational resources should go beyond listing do’s and don’ts for online behavior and help children develop strategies for dealing with new and unexpected scenarios they may encounter. Because context is such an important part of privacy-related decision making, resources should facilitate discussion between parents or teachers and children rather than simply tell children how to behave.
  • Third, educational resources should showcase a variety of outcomes of different online behaviors instead of framing privacy as a black-and-white issue. For instance, privacy guidelines may instruct children to never turn on location services, but this decision might differ based on the app that is requesting the data. Turning on location services in Snapchat may pinpoint one’s house to others (a potential negative), but turning on location services in Google Maps may yield real-time navigation (a potential positive). Exposing children to a variety of positive and negative consequences of privacy-related decision making can help them develop the skills they need to navigate uncharted situations online. (A rough sketch of how such a scenario-based question might be structured follows this list.)
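
To make that third recommendation concrete, here is a minimal sketch, in Python, of how a scenario-based quiz item could encode several choices with context-dependent tradeoffs rather than a single correct answer. The class names and scenario text are hypothetical illustrations of mine, not content from the Privacy Doodle Jump prototype.

    # A minimal sketch of a scenario-based quiz item whose answer choices
    # carry context-dependent tradeoffs instead of a single right answer.
    # All class names and scenario text are hypothetical illustrations,
    # not content from the Privacy Doodle Jump prototype.

    from dataclasses import dataclass

    @dataclass
    class Choice:
        action: str     # what the child chooses to do
        outcome: str    # one possible consequence of that choice
        tradeoff: str   # why the consequence can be good or bad

    @dataclass
    class QuizItem:
        scenario: str           # an everyday situation children recognize
        choices: list[Choice]   # several options, each with its own outcome
        discussion_prompt: str  # invites conversation with a parent or teacher

    location_item = QuizItem(
        scenario="An app on your tablet asks to use your location.",
        choices=[
            Choice(
                action="Turn location on for a maps app",
                outcome="You get turn-by-turn directions",
                tradeoff="Helpful, but the app now knows where you are",
            ),
            Choice(
                action="Turn location on for a photo-sharing app",
                outcome="Your posts can show where your house is",
                tradeoff="Friends can find you, but so might strangers",
            ),
            Choice(
                action="Say no for now and ask an adult",
                outcome="The app works without its location features",
                tradeoff="You give up a feature until you decide together",
            ),
        ],
        discussion_prompt="When is sharing your location worth it?",
    )

    # Walk through the scenario and each choice's tradeoff.
    print(location_item.scenario)
    for choice in location_item.choices:
        print(f"- {choice.action}: {choice.outcome} ({choice.tradeoff})")
    print(location_item.discussion_prompt)

Representing outcomes as tradeoffs rather than right/wrong flags also leaves room for the parent-child discussion the second recommendation calls for.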

Read the IDC 2018 paper for more details!

Citation: Priya Kumar, Jessica Vitak, Marshini Chetty, Tamara L. Clegg, Jonathan Yang, Brenna McNally, and Elizabeth Bonsignore. 2018. Co-Designing Online Privacy-Related Games and Stories with Children. In Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC ’18). ACM, New York, NY, USA, 67-79. DOI: https://doi.org/10.1145/3202185.3202735

Parts of this entry were cross-posted on the Princeton HCI blog.


Co-designing Mobile Monitoring Applications with Children @ CHI 2018

What do children think about mobile apps that allow their parents to monitor children’s technology use? How would children re-design such apps? Below, I share findings and recommendations from a paper I co-wrote with colleagues from UMD’s Human-Computer Interaction Lab (HCIL). Today, lead author Brenna McNally presents this paper at the 2018 ACM Conference on Human Factors in Computing Systems (CHI).

What did we do? Children use mobile devices every day, and mobile monitoring applications enable parents to monitor or restrict their children’s mobile use in various ways. We explored the extent to which children consider different mobile monitoring activities appropriate and what other mobile monitoring solutions they envision.

How did we do it? We held two co-design sessions with Kidsteam, a team of children ages 7-11 and adults who meet regularly at the University of Maryland to design new technologies. At both sessions, children filled out a survey about their opinions on various features of a commercially available mobile monitoring app, noting whether they felt parents should or should not be able to control each one. They then drew mock-ups that redesigned the features they felt parents should not be able to control. The second session included a design activity where our child design partners created a mobile interface to help children handle two common mobile risk scenarios: a content threat in which a child accidentally sees inappropriate material and a contact threat in which a child experiences cyberbullying via instant messaging.

What did we find? Most children were comfortable with various monitoring features, including letting parents see a child’s location or contacts. Comfort with parents seeing a child’s search/browsing history, social media posts, and text messages varied, with some noting that this information could be taken out of context. With regard to restriction features, most felt comfortable with parents seeing what apps are downloaded on a device, but fewer wanted parents to be able to restrict a child’s internet access or camera use. Child partners re-designed these features to support more active mediation, for example, creating “Ask Child” or “Consult Kid” buttons to prompt conversations between parents and children before a parent unilaterally restricts or blocks something on a child’s device.

In the activity on helping children handle common mobile risk scenarios, children’s designs emphasized automatic technology interventions, such as filters that would block “bad” text messages or contacts who used bad language. Their designs also included features that provided children immediate assistance when they encountered a concerning situation. These focused on helping the child feel better, for example, by showing cat videos or suggesting that the child play with a sibling.

What are the implications of this work? By incorporating children’s perspectives into the design process, this work suggests how mobile monitoring applications meant for parents to oversee their children’s technology use can be designed in ways that children find acceptable and beneficial. The children we worked with understood and even welcomed certain types of parental oversight, especially related to their physical safety. However, they questioned other types of parental monitoring and restriction of their mobile activities, re-designing these features so that the tools either supported children through automated means or helped children develop their own strategies for handling negative situations. Most mobile monitoring technologies emphasize the role of parental control, but this study suggests that children are eager for tools that help them learn about online risks and develop skills to navigate and cope with risky situations as they experience them.

Read the CHI 2018 paper for more details!

Citation: Brenna McNally, Priya Kumar, Chelsea Hordatt, Matthew Louis Mauriello, Shalmali Naik, Leyla Norooz, Alazandra Shorter, Evan Golub, and Allison Druin. 2018. Co-designing Mobile Online Safety Applications with Children. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Paper 523, 9 pages. DOI: https://doi.org/10.1145/3173574.3174097

Kids and Privacy Online @ CSCW 2018

How do elementary school-aged children conceptualize privacy and security online? Below I share findings and recommendations from a paper I wrote with co-authors Shalmali Naik, Utkarsha Devkar, Marshini Chetty, Tammy Clegg, and Jessica Vitak. I’ll present this paper at the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW).

What did we do? Children under age 12 increasingly go online, but few studies examine how children perceive and address privacy and security concerns. Using a privacy framework known as contextual integrity to guide our analysis, we interviewed children and their parents to understand how children conceptualize privacy and security online, what strategies they use to address any risks they perceive, and how parents support their children in navigating these issues.

How did we do it? We interviewed 26 children ages 5-11 and 23 parents from 18 families in the Washington, DC metropolitan area. We also walked through a series of hypothetical scenarios with children, which we framed as a game. For example, we asked children how they imagined another child would respond when receiving a message from an unknown person online.

What did we find? Children recognized how some components of privacy and security play out online, but those ages 5-7 had gaps in their knowledge. For example, younger children did not seem to recognize that sharing information online makes it visible in ways that differ from sharing information face-to-face. Children largely relied on their parents for support, but parents generally did not feel their children were exposed to privacy and security concerns. They felt such concerns would arise when children were older, had their own smartphones, and spent more time on social media.

What are the implications of this work? As the lines between offline and online increasingly blur, it is important for everyone, including children, to recognize (and remember) that use of smartphones, tablets, laptops, and in-home digital assistants can raise privacy and security concerns. Children absorb some lessons through everyday use of these devices, but parents have an opportunity to scaffold their children’s learning. Younger children may also be more willing to accept advice from their parents compared to teenagers. Parents would benefit from educational resources or apps that teach these concepts to younger children. The paper explains how the contextual integrity framework can inform the development of such resources.
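
For readers unfamiliar with the framework: contextual integrity describes information flows in terms of the actors involved, the type of information, and the transmission principle governing the flow. As a rough sketch of my own (not code or scenarios from the paper), that vocabulary maps naturally onto a structure a resource designer could build teaching scenarios around:

    # A rough sketch of contextual integrity's vocabulary as a data
    # structure. Field names follow the framework's parameters; the
    # example flow is a hypothetical illustration, not a scenario
    # from the paper.

    from dataclasses import dataclass

    @dataclass
    class InformationFlow:
        sender: str                  # who shares the information
        recipient: str               # who receives it
        subject: str                 # whom the information is about
        information_type: str        # what kind of information flows
        transmission_principle: str  # the norm governing the flow

    # A flow a child might encounter: a game sending location data
    # to an advertising network without asking.
    flow = InformationFlow(
        sender="mobile game app",
        recipient="advertising network",
        subject="the child playing the game",
        information_type="device location",
        transmission_principle="collected silently, without consent",
    )

    # A teaching resource could ask: does this flow match what people
    # expect in this context? If not, contextual integrity is violated.
    print(f"{flow.sender} sends {flow.subject}'s {flow.information_type} "
          f"to {flow.recipient} ({flow.transmission_principle}).")

Asking whether a given flow matches the norms of its context is exactly the kind of question such a resource could teach children to pose.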

Read the CSCW 2018 paper for more details!

Citation: Priya Kumar, Shalmali Milind Naik, Utkarsha Ramesh Devkar, Marshini Chetty, Tamara L. Clegg, and Jessica Vitak. 2017. ‘No Telling Passcodes Out Because They’re Private’: Understanding Children’s Mental Models of Privacy and Security Online. Proc. ACM Hum.-Comput. Interact. 1, CSCW, Article 64 (December 2017), 21 pages. DOI: https://doi.org/10.1145/3134699

Parts of this entry were cross-posted on the blogs of UMD’s Privacy Education and Research Laboratory (PEARL) and Princeton HCI.

Privacy Policies, PRISM, and Surveillance Capitalism in MaC

I recently published my first journal article in a special issue of Media and Communication (MaC) on Post-Snowden Internet Policy. (Unfortunately, the editors misgendered me in the editorial.)

In my article, Corporate Privacy Policy Changes during PRISM and the Rise of Surveillance Capitalism, I analyzed the privacy policies of 10 internet companies to explore how company practices related to users’ privacy shifted over the past decade.

What did I do? The Snowden disclosures in 2013 re-ignited a public conversation about the extent to which governments should access data that people generate in the course of their daily lives. Disclosure of the PRISM program cast a spotlight on the role that major internet companies play in facilitating such surveillance. In this paper, I analyzed the privacy policies of the nine companies in PRISM, plus Twitter, to see how companies’ data management practices changed between their joining PRISM and the world learning about PRISM. I drew on my experience with the Ranking Digital Rights research initiative and specifically focused on changes related to the “life cycle” of user information — that is, the collection, use, sharing, and retention of user information.

How did I do it? I collected company privacy policies from four points in time: before and after the company joined PRISM and before and after the Snowden revelations. Google and Twitter provide archives of their policies on their websites; for the other companies, I used the Internet Archive’s Wayback Machine to locate the policies. I logged the changes in a spreadsheet and classified them into substantive or non-substantive changes. I then dug into the substantive changes and categorized them based on how they affected the life cycle of user information.
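
For a sense of the mechanics, here is a minimal sketch of the snapshot-lookup step using the Internet Archive’s public availability API. This is my reconstruction in Python, not code from the paper, and the example URL and date are illustrative placeholders:

    # Minimal sketch: look up the Wayback Machine snapshot of a page
    # closest to a given date via the Internet Archive's public
    # availability API. The example URL and date below are illustrative
    # placeholders, not the exact sources used in the paper.

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(page_url: str, timestamp: str) -> str | None:
        """Return the archive URL of the snapshot closest to `timestamp`
        (YYYYMMDD), or None if the page was never archived."""
        query = urllib.parse.urlencode({"url": page_url, "timestamp": timestamp})
        api = f"https://archive.org/wayback/available?{query}"
        with urllib.request.urlopen(api) as response:
            data = json.load(response)
        closest = data.get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None

    # Example: a policy snapshot from shortly before the June 2013 disclosures.
    print(closest_snapshot("https://www.google.com/policies/privacy/", "20130501"))

Running a lookup like this once per company and time point yields the set of archived policies to compare and code.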

What did I find? Seventy percent of the substantive changes fell into two categories: the management of user information, and data sharing and tracking. The changes related to management of user information provided additional detail about what companies collect and retain. The changes related to data sharing and tracking offered more information about companies’ targeted advertising practices. These often appeared to give companies wider latitude to track users and share user information with advertisers. While these policy changes disclosed more details about company practices, the practices themselves appeared to subject users to greater tracking for advertising purposes.

What are the implications of this work? Collectively, these privacy policy changes offer evidence that suggests several of the world’s largest internet companies operate according to what business scholar Shoshana Zuboff calls the logic of surveillance capitalism. Participating in PRISM did not cause surveillance capitalism, but this analysis suggests that the PRISM companies further enmeshed themselves in it over the past decade. The burgeoning flow of user information into corporate servers and government databases exemplifies what legal scholar Joel Reidenberg calls the transparent citizenry, where people become visible to institutions, but those institutions’ use of their data remains obscure. This analysis serves as a reminder that public debates about people’s privacy rights in the wake of the Snowden disclosures must not ignore the role that companies themselves play in legitimizing surveillance activities under the auspices of creating market value.

Read the journal article (PDF) for more details!

Citation: Kumar, P. (2017). Corporate Privacy Policy Changes during PRISM and the Rise of Surveillance Capitalism. Media and Communication, 5(1), 63-75. doi:10.17645/mac.v5i1.813