A Crack and a Relief

It happened. The crack, when “you can no longer stand what you put up with before, even yesterday” (Deleuze & Parnet, quoted in Jackson & Mazzei, 2013); when “one can no longer think things as one formerly thought them, [and] transformation becomes both very urgent, very difficult and quite possible” (Foucault, quoted in St. Pierre, 2014).

For the past several months, I’ve been trying to understand epistemology and ontology — what they mean, what they mean for me, and what they mean for my research. I read “The Foundations of Social Research” by Michael Crotty. I read “The Body Multiple” by Annemarie Mol. I read other articles, mostly from communication, science and technology studies, and cultural studies.

I continued to analyze quantitative and qualitative data and write papers, but I felt increasingly perturbed, as if this work wasn’t adequately capturing what the research team said we were studying. I kept spouting my one-line summary of my dissertation research: “I study how parents post pictures of their kids online and what that means for kids’ identity development, sense of self, and understanding of privacy,” even after realizing that I’m not actually studying parents or kids or online or identity.

Epistemologically, I sensed that I wasn’t an objectivist, but I couldn’t figure out whether I was a constructionist or a subjectivist. Theoretically, I didn’t think I was a positivist, and I sensed that I might be an interpretivist who could one day become a critical scholar. I could be doing phenomenology, or potentially hermeneutics, or maybe symbolic interactionist work. I remained unsure about aligning myself with any methodology besides “qualitative,” which I do primarily through the methods of interviews and textual analysis.

Today, all of that fell apart and also came into sharp relief, thanks to readings on “New Materialism” and a conversation at the weekly journal club of my university’s physical cultural studies program.

I realized that I’ve been using what St. Pierre (2016) calls “conventional humanist qualitative methodology” — interviewing people and coding the data as a way to capture some aspect of their lived experience. I thought I’d sidestepped positivism because I don’t offer “hypotheses,” don’t calculate “inter-rater reliability,” and don’t purport to “predict” behavior. But I do define “research questions,” collect “data,” and code it to fill a “gap in the knowledge” — all trappings of logical positivism.

And this would be fine, except that I’m also discussing Foucauldian analysis and Actor-Network Theory and assemblage. And I dig it. It resonates with me and it’s informing how I approach my dissertation. So no wonder the conventional research process I’m using feels stale — it does not align with the new (to me) theory/methods that are shaping the way I understand the world and the research that occurs in it.

But doing research from a Foucauldian or ANT or assemblage (or feminist or queer or post-structuralist or post-colonial or post-humanist or…) perspective requires more than rethinking methods. It means letting go of a belief that any research, no matter how rigorously or reflexively it is done, can capture what is “going on.” It means accepting that research, and the production of knowledge, is always partial, always incomplete. It means that no matter how precisely or evocatively we write about our research, it remains a semblance.

But nevertheless, I feel this work is urgent. I know this work is difficult. And yes, I believe that it is possible.

(And while I will continue doing research in the conventional humanist qualitative vein, because learning new things takes time, Jackson & Mazzei (2013) show how to use these theories to think with typical interview data “within and against interpretivism.”)


On Assuming Mental Paralysis

A fellow graduate student recently asked me how I approach literature reviews. This question of how to find, read, and synthesize a body (or more) of research is central to producing good academic work. Yet it brings to mind Bellatrix Lestrange’s vault in Gringotts, where every paper you read yields six more until you’re neck deep with no foreseeable way out.

When I first started studying parents and social media use, I was content with Irwin Altman’s definition of privacy as controlling access to the self. Digging deeper, I learned to think of privacy as contextual integrity (thanks to Helen Nissenbaum) and as boundary management (thanks to Sandra Petronio). As I continued studying privacy over the years, I learned that lawyers, psychologists, communication scholars, economists, and computer scientists all conceptualize privacy in different ways. During the first year of my PhD, I considered creating a disciplinary map of privacy for a class project but quickly realized that it was a much bigger undertaking than I had imagined.

I’ve grown familiar with the feeling. I took a seminar with Jason Farman on “Place, Space, and Identity in the Digital Age,” and saw that entire careers can be (and have been) built around each of these concepts. Place isn’t just a label on a physical space; it’s objects and bodies and relationships and memories and information flows and more coming together in a particular arrangement at a particular moment. Identity isn’t just a list of demographic characteristics; it’s the facets, fragments, memories, experiences, beliefs, roles, imaginaries, and more that constantly intersect and intertwine into you. And this morning, while reading John Law’s “Objects and Spaces,” I realized that we can’t even take physical, 3-D, Euclidean space as a given.

Sigh.

It’s easy to see moments like these as overwhelming, paralyzing even. Especially when you do interdisciplinary research and plan to borrow theories and methods from other disciplines. Or to see these moments as challenges, as piles of reading to conquer so that you can one day claim the prize of “knowing” something.

But these moments keep happening. So the options are to feel constantly overwhelmed or to see grand quests pile up, neither of which is healthy (or encouraging). I’ve come to an alternate response after starting a daily meditation practice: Let it go.

Let go of the overwhelm. Let go of the fear. Let go of the burden. Worried you don’t have time to read everything? Let it go. Concerned that you might overlook something? Let it go. Dreading the moment another scholar tells you, “Yeah, but what about [totally separate body of work that may or may not be relevant to your topic]?” Let it go.

It sounds simple, I know. But these three words, combined with the acknowledgement, acceptance, and even embrace of the vast, unimaginable, and ultimately unknowable amount of prior work out there, are freeing.

I spent all day brainstorming the verb for this post’s title. When I do literature reviews, and when I do research in general, I want to assume mental paralysis. Meaning, I want to assume that I will experience moments of mental paralysis, of viewing the work ahead as a sheer, insurmountable rock wall I somehow have to climb, as a tangled thicket in a dark jungle through which I have to chop my way out.

But I also want to take up the mental paralysis, to wear it as a badge, to make it part of me. Because even after I climb this wall or chop through those vines, there will be another wall, another tangle. And by accepting that, I hope to take greater joy in those moments when I DO learn something, when a concept finally DOES click in my head, even if it falls apart again a moment later. By acknowledging and expecting the complexity, I release the sense that I need to master it, to someday “figure it out.”

And that, I suppose, is how I approach literature reviews.

(Oh, and for anyone who wants actual advice on how to do a literature review, Raul Pacheco-Vega has a series of relevant blog posts.)


Designing Resources to Help Kids Learn about Privacy Online @ IDC 2018

What types of educational resources would help elementary school-age children learn about privacy online? Below I share findings and recommendations from a paper I co-wrote with Jessica Vitak, Marshini Chetty, Tammy Clegg, Jonathan Yang, Brenna McNally, and Elizabeth Bonsignore. I’ll present this paper at the 2018 ACM Conference on Interaction Design and Children (IDC).

What did we do? Children spend hours online at home and at school, but they receive little to no education about how this activity affects their privacy. We explored the power of games and storytelling as two mechanisms for teaching children about privacy online.

How did we do it? We held three co-design sessions with Kidsteam, a team of children ages 7-11 and adults who meet regularly at the University of Maryland to design new technologies. In session 1, we reviewed existing privacy resources with children and elicited design ideas for new resources. In session 2, we iterated on a conceptual prototype of a mobile app inspired by the popular game Doodle Jump. Our version, which we called Privacy Doodle Jump, incorporated quiz questions related to privacy and security online. In session 3, children developed their own interactive Choose Your Own Adventure stories related to privacy online.

What did we find? We found that materials designed to teach children about privacy online often instruct children on “do’s and don’ts” rather than helping them develop the skills to navigate privacy online. Such straightforward guidelines can be useful when introducing children to complex subjects like privacy, or when working with younger children. However, focusing on lists of rules does little to equip children with the skills they need to make complex, privacy-related decisions online. If a resource presents children with scenarios that resonate with their everyday life, children may be more likely to understand and absorb its message. For example, a child might more easily absorb a privacy lesson from a story about another child who uses Instagram than from a game that uses a fictional character in an imaginary world.

What are the implications of this work?

  • First, educational resources related to privacy should use scenarios that relate to children’s everyday lives. For instance, our Privacy Doodle Jump game included a question that asked a child what they would do if they were playing Xbox and saw an advertisement pop up that asked them to buy something.
  • Second, educational resources should go beyond listing do’s and don’ts for online behavior and help children develop strategies for dealing with new and unexpected scenarios they may encounter. Because context is such an important part of privacy-related decision making, resources should facilitate discussion between parents or teachers and children rather than simply tell children how to behave.
  • Third, educational resources should showcase a variety of outcomes of different online behaviors instead of framing privacy as a black-and-white issue. For instance, privacy guidelines may instruct children to never turn on location services, but this decision might differ based on the app that is requesting the data. Turning on location services in Snapchat may pinpoint one’s house to others — a potential negative — but turning on location services in Google Maps may yield real-time navigation — a potential positive. Exposing children to a variety of positive and negative consequences of privacy-related decision making can help them develop the skills they need to navigate uncharted situations online.

Read the IDC 2018 paper for more details!

Citation: Priya Kumar, Jessica Vitak, Marshini Chetty, Tamara L. Clegg, Jonathan Yang, Brenna McNally, and Elizabeth Bonsignore. 2018. Co-Designing Online Privacy-Related Games and Stories with Children. In Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC ’18). ACM, New York, NY, USA, 67-79. DOI: https://doi.org/10.1145/3202185.3202735

Parts of this entry were cross-posted on the Princeton HCI blog.

Co-designing Mobile Monitoring Applications with Children @ CHI 2018

What do children think about mobile apps that allow their parents to monitor children’s technology use? How would children re-design such apps? Below, I share findings and recommendations from a paper I co-wrote with colleagues from UMD’s Human-Computer Interaction Lab (HCIL). Today, lead author Brenna McNally presents this paper at the 2018 ACM Conference on Human Factors in Computing Systems (CHI).

What did we do? Children use mobile devices every day, and mobile monitoring applications enable parents to monitor or restrict their children’s mobile use in various ways. We explored the extent to which children consider different mobile monitoring activities appropriate and what other mobile monitoring solutions they envision.

How did we do it? We held two co-design sessions with Kidsteam, a team of children ages 7-11 and adults who meet regularly at the University of Maryland to design new technologies. At both sessions, children filled out a survey about their opinions on various features of a commercially available mobile monitoring app, noting whether they felt parents should or should not be able to control each one. They then drew mock-ups that redesigned the features they felt parents should not be able to control. The second session included a design activity where our child design partners created a mobile interface to help children handle two common mobile risk scenarios: a content threat in which a child accidentally sees inappropriate material and a contact threat in which a child experiences cyberbullying via instant messaging.

What did we find? Most children were comfortable with various monitoring features, including letting parents see a child’s location or contacts. Comfort with parents seeing a child’s search/browsing history, social media posts, and text messages varied, with some noting that this information could be taken out of context. With regard to restriction features, most felt comfortable with parents seeing what apps are downloaded on a device, but fewer wanted parents to be able to restrict a child’s internet access or camera use. Child partners re-designed these features to support more active mediation, for example, creating “Ask Child” or “Consult Kid” buttons to prompt conversations between parents and children before a parent unilaterally restricts or blocks something on a child’s device.

In the activity on helping children handle common mobile risk scenarios, children’s designs emphasized automatic technology interventions, such as filters that would block “bad” text messages or contacts who used bad language. Their designs also included features that provided children immediate assistance when they encountered a concerning situation. These focused on helping the child feel better, for example, by showing cat videos or suggesting that the child play with a sibling.

What are the implications of this work? By incorporating children’s perspectives into the design process, this work suggests how mobile monitoring applications meant for parents to oversee their children’s technology use can be designed in ways that children find acceptable and beneficial. The children we worked with understood and even welcomed certain types of parental oversight, especially related to their physical safety. However, they questioned other types of parental monitoring and restriction of their mobile activities, re-designing these features so that the tools supported children through automated means or by helping children develop their own strategies for handling negative situations. Most mobile monitoring technologies emphasize the role of parental control, but this study suggests that children are eager for tools that help them learn about online risks and develop skills to navigate and cope with risky situations as they experience them.

Read the CHI 2018 paper for more details!

Citation: Brenna McNally, Priya Kumar, Chelsea Hordatt, Matthew Louis Mauriello, Shalmali Naik, Leyla Norooz, Alazandra Shorter, Evan Golub, and Allison Druin. 2018. Co-designing Mobile Online Safety Applications with Children. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Paper 523, 9 pages. DOI: https://doi.org/10.1145/3173574.3174097

Kids and Privacy Online @ CSCW 2018

How do elementary school-aged children conceptualize privacy and security online? Below I share findings and recommendations from a paper I wrote with co-authors Shalmali Naik, Utkarsha Devkar, Marshini Chetty, Tammy Clegg, and Jessica Vitak. I’ll present this paper at the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW).

What did we do? Children under age 12 increasingly go online, but few studies examine how children perceive and address privacy and security concerns. Using a privacy framework known as contextual integrity to guide our analysis, we interviewed children and their parents to understand how children conceptualize privacy and security online, what strategies they use to address any risks they perceive, and how their parents support them when it comes to privacy and security online.

How did we do it? We interviewed 26 children ages 5-11 and 23 parents from 18 families in the Washington, DC metropolitan area. We also walked through a series of hypothetical scenarios with children, which we framed as a game. For example, we asked children how they imagined another child would respond when receiving a message from an unknown person online.

What did we find? Children recognized how some components of privacy and security play out online, but those ages 5-7 had gaps in their knowledge. For example, younger children did not seem to recognize that sharing information online makes it visible in ways that differ from sharing information face-to-face. Children largely relied on their parents for support, but parents generally did not feel their children were exposed to privacy and security concerns. They felt such concerns would arise when children were older, had their own smartphones, and spent more time on social media.

What are the implications of this work? As the lines between offline and online increasingly blur, it is important for everyone, including children, to recognize (and remember) that use of smartphones, tablets, laptops, and in-home digital assistants can raise privacy and security concerns. Children absorb some lessons through everyday use of these devices, but parents have an opportunity to scaffold their children’s learning. Younger children may also be more willing to accept advice from their parents compared to teenagers. Parents would benefit from the creation of educational resources or apps that focus on teaching these concepts to younger children. The paper explains how the contextual integrity framework can inform the development of such resources.

Read the CSCW 2018 paper for more details!

Citation: Priya Kumar, Shalmali Milind Naik, Utkarsha Ramesh Devkar, Marshini Chetty, Tamara L. Clegg, and Jessica Vitak. 2017. ‘No Telling Passcodes Out Because They’re Private’: Understanding Children’s Mental Models of Privacy and Security Online. Proc. ACM Hum.-Comput. Interact. 1, CSCW, Article 64 (December 2017), 21 pages. DOI: https://doi.org/10.1145/3134699

Parts of this entry were cross-posted on the blogs of UMD’s Privacy Education and Research Laboratory (PEARL) and Princeton HCI.

Privacy Policies, PRISM, and Surveillance Capitalism in MaC

I recently published my first journal article in a special issue of Media and Communication (MaC) on Post-Snowden Internet Policy. (Unfortunately, the editors misgendered me in the editorial.)

In my article, Corporate Privacy Policy Changes during PRISM and the Rise of Surveillance Capitalism, I analyzed the privacy policies of 10 internet companies to explore how company practices related to users’ privacy shifted over the past decade.

What did I do? The Snowden disclosures in 2013 re-ignited a public conversation about the extent to which governments should access data that people generate in the course of their daily lives. Disclosure of the PRISM program cast a spotlight on the role that major internet companies play in facilitating such surveillance. In this paper, I analyzed the privacy policies of the nine companies in PRISM, plus Twitter, to see how companies’ data management practices changed between their joining PRISM and the world learning about PRISM. I drew on my experience with the Ranking Digital Rights research initiative and specifically focused on changes related to the “life cycle” of user information — that is, the collection, use, sharing, and retention of user information.

How did I do it? I collected company privacy policies from four points in time: before and after the company joined PRISM and before and after the Snowden revelations. Google and Twitter provide archives of their policies on their websites; for the other companies, I used the Internet Archive’s Wayback Machine to locate the policies. I logged the changes in a spreadsheet and classified each as substantive or non-substantive. I then dug into the substantive changes and categorized them based on how they affected the life cycle of user information.
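
For the curious, the snapshot-retrieval step could be automated. The sketch below is a hypothetical illustration, not the tooling behind the paper: it queries the Wayback Machine’s public availability API for the archived copy of a policy page closest to a given date, then compares two snapshots with Python’s difflib. The policy URL and dates are placeholders, and deciding which differences are substantive still requires the manual reading described above.

```python
# A hedged sketch, not the paper's actual method: pull archived policy
# snapshots from the Wayback Machine and diff them line by line.
from typing import Optional
import difflib

import requests  # third-party: pip install requests

WAYBACK_API = "https://archive.org/wayback/available"


def closest_snapshot_url(page_url: str, timestamp: str) -> Optional[str]:
    """Return the URL of the archived copy of page_url closest to timestamp (YYYYMMDD)."""
    resp = requests.get(WAYBACK_API, params={"url": page_url, "timestamp": timestamp})
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None


def fetch_lines(snapshot_url: str) -> list:
    """Download a snapshot and split it into lines (a real analysis would strip HTML first)."""
    resp = requests.get(snapshot_url)
    resp.raise_for_status()
    return resp.text.splitlines()


if __name__ == "__main__":
    policy = "twitter.com/privacy"  # placeholder policy page
    before = closest_snapshot_url(policy, "20130501")  # shortly before the June 2013 disclosures
    after = closest_snapshot_url(policy, "20140101")   # several months after
    if before and after:
        diff = difflib.unified_diff(
            fetch_lines(before), fetch_lines(after),
            fromfile=before, tofile=after, lineterm="",
        )
        for line in diff:
            print(line)
```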

What did I find? Seventy percent of the substantive changes addressed two areas: the management of user information, and data sharing and tracking. The changes related to management of user information provided additional detail about what companies collect and retain. The changes related to data sharing and tracking offered more information about companies’ targeted advertising practices. These often appeared to give companies wider latitude to track users and share user information with advertisers. While these policy changes disclosed more details about company practices, the practices themselves appeared to subject users to greater tracking for advertising purposes.

What are the implications of this work? Collectively, these privacy policy changes offer evidence that suggests several of the world’s largest internet companies operate according to what business scholar Shoshana Zuboff calls the logic of surveillance capitalism. Participating in PRISM did not cause surveillance capitalism, but this analysis suggests that the PRISM companies further enmeshed themselves in it over the past decade. The burgeoning flow of user information into corporate servers and government databases exemplifies what legal scholar Joel Reidenberg calls the transparent citizenry, where people become visible to institutions, but those institutions’ use of their data remains obscure. This analysis serves as a reminder that public debates about people’s privacy rights in the wake of the Snowden disclosures must not ignore the role that companies themselves play in legitimizing surveillance activities under the auspices of creating market value.

Read the journal article (PDF) for more details!

Citation: Kumar, P. (2017). Corporate Privacy Policy Changes during PRISM and the Rise of Surveillance Capitalism. Media and Communication, 5(1), 63-75. doi:10.17645/mac.v5i1.813

My Mission

Image by Markus Spiske/Flickr

When I was 13 or 14, my parents gave me “The 7 Habits of Highly Effective Teens” by Sean Covey for Christmas. I devoured the book, re-reading it over the next several years. It was the first book whose pages I highlighted, dog-eared, and wrote notes on.

Habit 2 encouraged readers to write a personal mission statement. I loved the idea but never wrote anything of consequence. Now, having accumulated several more years of life experience, I feel more equipped to write that statement.

The sentiment of my mission coalesced largely over the past six years. The transition from college to work to graduate school to now was difficult and enlightening. I finally have a sense of what I want to accomplish, yet I feel secure enough with myself to accept that it may evolve.

So, what’s my mission?

I examine the forces that shape our lives and share that knowledge with the public.

This mission highlights what fascinates me and what I want to do with that knowledge. I am a writer, researcher, and storyteller at heart, and I aspire to write a book one day. In the interest of focusing on systems rather than goals, I aim to write pieces that people can point to and say, “I learned something from that.”

My professional and amateur interests span astronomy, psychology, Internet studies, and history — disparate disciplines bound by a common thread of humanity.

Like many people, I’m struck with awe every time I look up at the night sky. So much exists out there, and while science has enabled us to learn a tremendous amount about what’s up there, it’s impossible (for now) to travel across light years or stand on the event horizon of a black hole. So, why does astronomy matter?

Because every particle that makes up every human being on the planet comes from the stars in that sky. The universe began with hydrogen, a smattering of helium, and a smidgen of lithium. All other elements in the periodic table, including the carbon that forms the basis of life as we know it, emerged from nuclear fusion in the cores of stars and in the aftermath of star explosions. Everything that’s inside you comes from up there.

What goes on inside us, particularly our brains, also captivates me. While we don’t have to think about telling our body to breathe air, pump blood, or digest food, our thoughts drive so much of our behavior. And while thought processes may feel automatic, they’re malleable and well within our control. Figuring out how to change the way we think and implementing those changes isn’t easy. But I take comfort in the paradoxical notion that while I can’t control anything outside my own mind, taking control of my own mind grants me boundless potential to construct a fulfilling life.

Nowadays, that life is not just experienced; it is increasingly documented by digital technology that creeps deeper into our daily lives. Personal and sensitive communications, ranging from text messages to financial transactions to data points about our physical activities, flow through privately owned networks and sit on servers operated by companies that have wide latitude to use that data as they see fit. We as individuals must ensure that this emerging ecosystem of networked digital technology benefits, rather than restricts, us.

To do so, I think it’s important to put this moment in historical context. The human race has advanced tremendously over its existence on this planet. Look around you. So much of what you see and feel was designed or affected by humans. Buildings, roads, cars, books, families, music, math, elections, and the disease-resistant tomatoes in your fridge are the result of human activity.

Even if you’re sitting in the middle of an ocean, forest, desert, or glacier, the device (or perhaps piece of paper) on which you’re reading these words was invented by humans. The language you’re reading right now, the shapes of the letters, and the grammatical rules that render these words meaningful were developed by humans.

This point reverberated as I recently read Amsterdam: A History of the World’s Most Liberal City. As author Russell Shorto described how the philosopher Baruch Spinoza first posited that church and state could exist as separate entities, it hit me in my gut that values, principles, and norms change. That there was a time when people truly believed that dark-skinned humans were inferior. That 100 years ago, women in the United States had no right to vote. That the notion of “this is just how things are” is simply not true. History is not facts and timelines; history is about moments and people who seize those moments and make them matter. History is learning how people have harnessed their potential and applying those lessons to the present day.

As I move through life, I want to understand more about these forces, the physical, internal, societal, and historical forces that have brought me, you, and those around us to this particular moment in time. And if in that process, I say something that makes you go, “Hmm, I never thought of that,” well then, mission accomplished.

This post also appears on Medium.