Re-making Boundaries

I study privacy. I’ve done so since joining the field of internet studies eight years ago, and I plan to do so for the foreseeable future. But early in my PhD program, a creeping sense of disillusionment made me question whether this work mattered. I was part of a few research teams studying how people considered privacy in relation to various digital technologies. While analyzing our interview data, I encountered quote after quote of people saying they didn’t care if anyone saw their data or they didn’t think their data had any value. With a few exceptions, it seemed like privacy was not a primary concern for most of the people we interviewed.

Convenience stood out to me as one reason why. Digital technologies made it easy for people to go about their everyday tasks, and this ease outweighed any privacy concerns they may have had. This makes sense given that so many digital technologies are designed to integrate seamlessly into everyday life. Or rather, we (at least those of us in the U.S.) are surrounded by a cultural narrative that tells us technology is supposed to make our lives easier and better, even if that doesn’t necessarily happen in practice.

These technologies aren’t disappearing soon, in part because people want or need them. But they also raise a host of concerns, privacy being just one. Fighting for privacy always struck me as an uphill battle, but confronting this convenience narrative felt like having to scale the face of a cliff.

Until I realized it’s just that. A narrative. A story.

Might we tell a different story?

To do that, we have to see the story and to realize just how deeply it has burrowed into our social psyche. A few weeks ago I started seeing a TV commercial advertising a bank’s mobile app. A mother and her elementary school-age son walk out of a suburban house and toward the car parked in front of it. The mother pulls out her smartphone and the voice-over enthusiastically remarks that with the bank’s app, you can pay your bills from anywhere, anytime! And then get back to what really matters! The mother puts her phone away and her son runs toward her, the commercial ending with their exuberant embrace.

This is the convenience narrative at work. “You can pay your bills from anywhere, anytime!” works as a compelling marketing message because our society treats freedom and choice as the ultimate markers of the “good life.” But perhaps paying bills is not something to do anywhere, anytime. Perhaps paying bills is something to devote one’s full attention to, even if it only takes a few minutes.

Mary Gray summed it up perfectly at a panel yesterday when she remarked, half-jokingly, that she wanted to banish the idea that flexibility is a good thing. The panel, “Post-work Productivity,” was part of the annual conference of the Association of Internet Researchers (AoIR), happening virtually this week. That conversation examined how “the promise of ‘anytime, anywhere’ has turned into a risk of ‘all the time, everywhere’.” Although the panelists focused on the erosion of boundaries in the context of work, their critique applies to other dimensions of life.

At today’s plenary panel on “Living Data,” Seeta Peña Gangadharan discussed the Our Data Bodies project, which critiques and resists the incursion of data-driven technologies into marginalized communities. She talked of these systems as optimization technology, drawing attention to the logics of efficiency that both underpin them and that they advance. Lately, she’s examined refusal as a response to datafication. This isn’t refusal in the sense of rejecting technology outright, but of questioning the terms of the deal and of imagining new ones. In other words, it’s about resisting and remaking power relations.

We can resist the convenience narrative. We can reject the idea that constantly fluid boundaries are a good thing, or that seamless data flows are the price we have to pay for progress (another word that Mary wanted to banish). When we hear these words, we must ask, flexibility for whom? Progress on whose terms? Convenience at what cost?

During an AoIR discussion on internet and sociality yesterday, Nancy Baym commented that technology’s ability to manifest boundaries in new or different ways is part of what makes it enduringly exciting to study. In examining technologies, and the lives and societies and structures within which they are entangled, we can interrogate those boundaries and ask, “How do we make better ones?”

That’s the kind of privacy research I want to do, that I am doing as I write my dissertation.


Patience has never been one of my strong suits. When inspiration strikes, I want to act on it immediately. And since ideas emerge faster than the actions to execute them, I perpetually feel like I have so much to do.

Work-wise, two projects currently hold my attention: the dissertation and the paper I’m leading through my job as a research assistant (RA). Three other paper drafts await. Another two papers exist in outline form, and two ideas for large-scale projects live as sketches. And then there’s the dozen or so one-line “Hey, wouldn’t it be cool to do X” ideas.

I’ve accepted that my attention needs to remain on the dissertation and the RA project because I want to graduate and I want to keep my job. I thoroughly enjoy these two projects, so it’s not that I would prefer to work on other things. Instead, I wish I could work on everything at once. This enthusiasm is a function of excitement, but also uncertainty. Regarding work projects, my thinking went like this: I’m in a PhD program right now, so I definitely have the next few years in academia. The academic job market is a crapshoot [this was true long before covid], and I might not get any more time in academia. So, let me cram as much research as I can into this time.

Reality, in the form of mental and bodily limits (working all the time wears you out) and relationship pressures (my husband, family, and friends want to spend time with me), luckily forced me to bound this all-encompassing attitude toward work after my first year in the PhD program. Beginning a meditation practice after year two attuned me to being present in the current moment and to accepting impermanence as inherent to reality. Yes, I might feel certain that I have four or five years in academia through the PhD, but actually, nothing is certain beyond what exists right now, in this moment.

So, while I continued to juggle several projects and make plans for the future, that list of research ideas became less a reflection of inadequacy (you’re not getting enough done) and more just that: a list of research ideas. I acknowledged that each week contained ~40 hours of work time, a fairly easy transition once I paid attention to the pleasure of spending time with loved ones and cultivating hobbies outside of work. But my relationship with this list remained one of détente: projects took time, and I only had a certain amount of it in any given day or week. This natural limit prevented me from accomplishing everything on that list. I accepted it, but begrudgingly. Time was the adversary I knew I’d never beat.

I’ve now grown to see time as a feature rather than an unfortunate constraint within academic work. Some, especially those outside academia, wonder why a PhD takes so long. When I started the program, I vowed to finish in four years. And if I’d avoided taking on side projects and stuck to a conventional research project, I believe I could have. But around the start of year three, after I’d finished coursework and advanced to candidacy but before I wrote my dissertation proposal, I knew I needed more time. I’d learned that entirely different ways of doing research and understanding the world existed, and I wanted my dissertation to engage with those different ways of knowing. Just like ideas outpace action, translating these exciting things I’d started learning about into a coherent and meaningful scholarly contribution required time. Time for me to read (much) more, time for those ideas to marinate, and time for me to talk through them with other people. If this was to be my only time in academia, I wanted as much fulfillment as possible. Producing a gratifying dissertation mattered more to me than finishing it in an arbitrarily determined number of years.

I recognize the amount of privilege entangled in even being able to make such a statement: institutional and social support, a welcoming advisor and committee, and a certain level of personal financial security. Nor is five years some magic number. People produce quality work in fewer (and more) than five years, and people who want more time cannot always take it. I’ve also heard the refrains: “The only good dissertation is a done dissertation” and “The dissertation is not your magnum opus.” But if an academic career is not in the cards for me, then the dissertation in some ways will be a magnum opus.

That’s not to equate the dissertation with everything I want to accomplish in research. My intention in taking an extra year was not to cram all the projects from that list of research ideas into the dissertation, but to refine the one idea I was exploring through the dissertation. The dissertation topic—the privacy issues of parents posting pictures of their children on social media—hasn’t changed, but the way I’m studying it has. The project is better because of the extra time I am thankfully able to dedicate to it.

As I’ve understood how this premise extends beyond the dissertation to many aspects of academic work, my adversarial glare toward time has softened. Last fall, I was stunned to read a timeline that one of my academic role models, Elizabeth St. Pierre (2019), presented for one of her ideas. While doing her doctoral work in the early 1990s, she increasingly felt that postmodernism and poststructuralism were incommensurable with conventional research methodologies. By 2003, she had a name for the different way she felt was needed to engage with those theories through research—post-qualitative inquiry—and she devised courses to work through this mode of research with like-minded students. In 2010, she presented her first conference paper on the topic. Over the past decade, she has published more than 20 academic articles on the topic, and in 2019 she was writing a book on it.

In other words, it took nearly 30 years for St. Pierre to refine an inchoate idea from her graduate student days into a book-level concept. Sure, career-defining achievements like opening up a new research paradigm obviously take time. But so do the bread and butter of academic contributions: journal articles. I recently saw a Twitter thread from Omar Wasow describing how he cultivated the seed of an idea from a graduate seminar for 14 years before a top journal in his field accepted it for publication. Closer to me in career stage and discipline, Kate Miltner tweeted that one of her latest journal articles was an idea she wrote about in a graduate seminar five years ago.

I’ve been studying parents and social media for seven years, so the fact that I’m only now finding the vocabulary to articulate my research contribution suggests I’m right on track. And while I’m fortunate to have two journal-level publications on my dissertation topic, I’ve been itching to publish more since my thinking has evolved significantly. These stories from academics help me tell that impatient voice to cool it.

Of course, time isn’t the only ingredient that matters. Those three unfinished papers have sat on my computer untouched for nearly a year, and they haven’t gotten any better. Those two outlines aren’t going to write themselves into papers. If I want those ideas to get out into the world, at some point I’ll have to do the work.

I began this essay referring to patience, and my lack of it. But perhaps the more valuable point is one of prioritization. Time, among many factors, prevents all of us from accomplishing everything all at once. Time also makes our projects better. But what we accomplish—and when—is partially a function of what we prioritize (acknowledging that people’s circumstances influence what they can prioritize).* Early in my program, I learned to prioritize health and relationships just as much as, and in some cases, more than work. Intellectually, I’ve prioritized the research directions that resonate with me over more conventional means.

Academia is full of people telling you what to do. Research supervisors, thesis committees, funding bodies, journal reviewers. Time is one way to respond: what do you have time to do? But priorities are another, perhaps more important, way: what do you want, or need, to do? And how can you make time for it?

*All of us would be better off if the educational institutions that employ us, the publishing entities and academic associations that rely on our labor, and the publish-or-perish culture in which we operate recognized that each one of us has priorities other than work and that each one of us needs space to attend to those priorities. In other words, I’m *not* making the neoliberal argument that identifying our individual priorities would somehow magically overcome the exploitative dimensions of academia.

St. Pierre, E. A. (2019). Post qualitative inquiry, the refusal of method, and the risk of the new. Qualitative Inquiry. Advance online publication.

Exploring Digital Privacy and Security in Elementary Schools @ CHI 2019

How do elementary school educators think about privacy and security when it comes to technology use in the classroom? What privacy and security lessons do students receive? Below, I describe findings and recommendations from a paper I co-wrote on this topic with Marshini Chetty, Tammy Clegg, and Jessica Vitak. I’ll present this paper at the 2019 ACM Conference on Human Factors in Computing Systems (CHI).

What did we do? Schools across the United States have integrated various digital technologies into K-12 classrooms, even though using them poses privacy and security concerns. As part of a broader project on how children ages 5-11 conceptualize privacy online, we wanted to understand how elementary-school educators decided what technologies to use, how privacy and security factored into these decisions, and what educators taught their students about digital privacy and security.

How did we do it? We held nine focus groups with a total of 25 educators from seven school districts in three metropolitan regions in the U.S. Our participants included teachers, teaching assistants, and student teachers.

What did we find? Educators used a range of digital devices, platforms, applications, resources, and games, some that their districts provided and others that school media specialists recommended. To them, privacy and security meant responsibly handling student data (e.g. login credentials) and minimizing students’ inappropriate use of technology. They largely did not give students lessons on privacy and security. Some educators felt such lessons were not necessary; others found it difficult to make such lessons resonate with their students.

What are the implications of this work? We see an opportunity for the HCI community and those who create educational technologies to help students develop privacy and security skills. This can include designing “teachable moments” into technologies, such as prompts that ask students to think about where their data goes when they submit something online. These are not meant to replace privacy lessons, but to spark conversations between students, teachers, and parents as well as to help students think about privacy during their everyday interactions with digital technology. School districts and teacher training programs should educate teachers about digital privacy and security issues. Finally, the HCI and other communities must grapple with broader tensions about the datafication of education and its concomitant privacy and security concerns.

Read the CHI 2019 paper for more details!

Citation: Priya C. Kumar, Marshini Chetty, Tamara L. Clegg, and Jessica Vitak. 2019. Privacy and Security Considerations for Digital Technology Use in Elementary Schools. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.

This entry was cross-posted on the Princeton HCI blog.

Why Deepak Chopra is Wrong About Technology

One reason I value my newspaper subscription is that it reminds me not to take things for granted. Especially when it comes to technology.

In a recent column, the Washington Post’s Geoffrey Fowler recounted a conversation with alternative medicine advocate Deepak Chopra. Chopra has been criticized for promoting medical treatments based on pseudoscience, and his views on technology seem to be just as misguided.

“Technology is neutral, number one. Number two, it’s unstoppable,” Chopra told Fowler as they walked through the tech industry’s trade show, CES, earlier this month.

No, and no.

Technology does not just fall from the sky. People create it. Which means that our human frailties get baked right in. Type a question into a search engine, and the results can be just as racist as the responses you might get from people. Use a mathematical model to make lending decisions, and the output can be just as discriminatory as if you ask a loan officer.

People create technology, which means people can also address problems that result from (or are exacerbated by) technology. The question is who is responsible for doing so. Chopra lays that burden squarely on everyday users, blaming them for succumbing to technology’s power.

“I think technology has created a lot of stress for a lot of people, but that’s not the fault of technology,” Chopra told Fowler. “It’s the fault of the people who use technology.”

Sure, we could all be more intentional in our technology use. But absolving technology of any role in stress is disingenuous. It ignores the fact that the people who create the digital technologies we use every day design them to be persuasive. To Chopra, this isn’t the problem, but the solution; the app Chopra developed and the company he advises also employ persuasive design principles to hook people into using their products.

Chopra thinks technology will support well-being by collecting data and using it to optimize our environmental conditions, by, for example, changing the lighting in your house.

“So we just have to accept more surveillance as the price of this way of living?” Fowler asks. “For the advancement of your well-being, what’s wrong with that?” Chopra responds.

A lot.

First, this kind of blind faith in technology prevents us from talking about the limits of what technology can and should do. Second, it ignores the fact that using data-driven systems to address social problems too often ends up penalizing poor and working-class people. And third, it perpetuates a belief that human experience is simply raw material for companies to datify and exploit for financial gain.

Digital technology is, as Chopra says, “here to stay.” But this flawed understanding of technology needs to go.

For further reading, see the books I cited in the links above.

LaTeX: A Window onto Another Way of Thinking

Last week, I worked with LaTeX, a formatting system that uses markup language to create documents, for the first time. The experience:

  1. Was not as complicated as I imagined, and
  2. Offered a glimpse into how the more technically oriented people in my research field think.

The decision to use LaTeX was not mine. The organizers of a conference where I had a paper accepted not-so-subtly told authors to switch from Microsoft Word to LaTeX (via the Overleaf interface) because the Word template they provided was so dysfunctional.

This understandably upset a lot of people. Many (myself included) had never used LaTeX and the revision period overlapped with the winter holidays. The research community raised valid concerns about the template issues that dogged the entire submission process, and I hope the conference organizers consider them. But that’s not my focus here.

By the time I sat down to re-format my paper, several researchers had voluntarily compiled a Google Doc with detailed, step-by-step instructions on how to transfer papers from Word to Overleaf. The process took me about a day-and-a-half and proceeded more smoothly than I expected. (Seriously, those researchers saved the day with that document.)

Now that I’ve used LaTeX/Overleaf, holy moly, no one should ever typeset a complex document in Microsoft Word again. I can’t believe how many hours of my life I’ve lost tinkering with tables, figures, and columns in that program, trying to divine what random collection of keystrokes and clicks would make everything snap into place on the page.

It’s not that things don’t break in LaTeX; they do. But when they do, I can more easily see why. I can check for a missing parenthesis or parse the error message and fix the problem. With Word, I have no idea why something breaks, and, more important, little sense of why my actions fixed it.

Part of this facility comes from the fact that I have basic HTML and programming experience, so I generally understand what the tags are trying to do. And the Overleaf interface, which shows the compiled document next to the code, makes it easy to see the results of my typing.

LaTeX sees documents as collections of different types of text. Formatting in LaTeX thus means defining the different categories, either within the template or through software packages others have created for LaTeX, and then tagging each piece of text to identify its category. So instead of manually changing the font and size of a heading, or selecting a heading style in Word, you wrap the heading’s title in a \section command and the text automatically takes on the pre-defined font and size.
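Here’s a minimal sketch of what that tagging looks like in practice (a generic article skeleton, not the conference’s actual template):

    \documentclass{article}  % declares the document's overall type
    \begin{document}

    \section{Introduction}
    % The heading above is numbered and styled automatically;
    % I never touch fonts or sizes myself.
    Body text goes here, with emphasis tagged semantically as \emph{meaningful}.

    \subsection{Background}
    Nested categories follow the same pattern.

    \end{document}

The author labels what each piece of text is; LaTeX decides how it looks.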

I now get why so many people in computer science and math use LaTeX; it’s a programmer’s approach to formatting. Things (in this case text) belong to certain categories, and the author’s job is to label them. And with this realization, it also became a little clearer to me why some of my CS colleagues might struggle to understand the interpretive and mostly qualitative research I do, or why engineers might overlook social implications when they design technology. I study how people shape technology (and how, in turn, technology shapes them). And people cannot be slotted into categories. (As Mark Zuckerberg said at a tech event in 2016, “The code always does what you want—and people don’t.”)

Another small clue about the position of qualitative research in computing appeared in the sample template PDF. The document provided tips on how to format things like figures, tables, and equations. But nowhere in the document could I find a block quote, something that appears often in the papers I write. I’m not suggesting this was an intentional omission, but it made me wonder: did whoever made the template not realize or expect that block quotes would appear in the papers submitted to this conference? Qualitative research contributions are an established part of human-computer interaction research, but that doesn’t mean everyone understands or accepts them.
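(To be fair to LaTeX itself, the system can format block quotes; standard LaTeX includes a quote environment for exactly this purpose:

    \begin{quote}
    A participant's words, set off from the surrounding text.
    \end{quote}

The gap was in the template’s guidance, not in LaTeX, which is precisely what made the omission feel telling.)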

I knew that before, but after using LaTeX, I have a slightly better sense of why that might be. I work in an interdisciplinary field whose members come from a variety of backgrounds and apply assorted research methods to study computing. Misunderstandings and disagreements are inevitable. Seeing how people whose research approaches differ from mine think helps me understand how to engage with them rather than wonder why they don’t get it.

So thanks, CHI 2019, for pushing me to try LaTeX. My schedule and my brain appreciate it.

Becoming a Scholar

Since starting this PhD program, I’ve wanted to write an academic version of my personal mission statement. I assumed that if I dug deeply enough, or pondered long enough, the contours of Priya-as-scholar would sharpen into focus and reveal where in the realm of knowledge my research fits.

My fixation on “figuring it out” belied an instrumental view of knowledge as a product or an outcome. Yet the philosophical view of knowledge, which is what I’m pursuing as a candidate for the degree of doctor of philosophy, sees knowledge as “necessarily ephemeral and incomplete,…never acquired…only reached proximally” (Barnacle, 2005, pp. 185-186). Being a philosopher, or lover of wisdom, is about pursuing knowledge, not capturing it.

In that spirit, I write this post not to mark an achievement (“I’ve figured it out!”), but to document a process (“This is where I am now”) and to leave breadcrumbs for future reflection (“Here’s the path I’ve taken”).

I first heard the words “epistemology” and “ontology” while sitting in the opening lecture of an introductory government and politics class during the first week of my freshman year of college. The professor might as well have spoken gibberish, for as much as she tried to explain them, nothing stuck. I heard these terms much more after I entered the PhD program, and I’m just beginning to understand what they mean.

I found Michael Crotty’s (1998) book “The Foundations of Social Research” a godsend for navigating the thicket of epistemology. Crotty sets aside ontology and focuses on the research process. He lays out a hierarchy to help readers understand how the abstract informs the granular (and vice versa): Epistemology → Theoretical Perspective → Methodology → Methods. The easiest way for me to relate these concepts to my own scholarship is to move through them in reverse, starting with Methods.

Methods
In my research, I’ve primarily talked to people (through interviews and focus groups) and analyzed texts (including news articles, websites, company policies, blog posts, and social media posts). I’ve occasionally used design methods to work on the development of new technologies or educational resources. I also work with colleagues who use survey methods, though I have not used them in my personal research.

My dissertation focuses on pictures posted on social media, so I’m learning methods to more systematically analyze visual materials. I’m also interested in exploring methods like participant observation and diary studies that focus more on people’s practices.

Methodology
My research explores how information about people flows through digital systems and what that means for privacy. I’m curious about how this plays out in the context of family, primarily pregnancy, parenting, and early childhood. My goal is not to measure variables, to prove hypotheses, or to predict outcomes; my goal is to consider what it means to be a parent, child, or person in a datafied world. Ethnography and discourse analysis resonate with me as ways to do this work because they speak to people’s lived experience as well as broader societal framings.

Theoretical Perspective
In my personal mission statement, I said “I want to understand more about…the physical, internal, societal, and historical forces that have brought me, you, and those around us to this particular moment in time.” I entered the PhD program wanting to do research that put things into context, that traced paths and made connections between different disciplines, topics, or time periods, something I still want. While analyzing data, I’ve focused on creating categories and distilling them into themes, which I then mold into findings that are situated in existing theory or other scholarship. This puts my work squarely in the realm of interpretivism, which Crotty defines as a research approach that “looks for culturally derived and historically situated interpretations of the social life-world” (p. 67).

But after spending time with human rights activists and cultural studies scholars, I’m drawn to more critical orientations to research, particularly post-structuralism, feminism, and post-humanism. This includes Actor-Network Theory (Bruno Latour, John Law), assemblage theory (Gilles Deleuze & Félix Guattari), Foucauldian perspectives, and agential realism (Karen Barad). This is in part because I’m less interested in people and what they think or do, and more interested in how people, technologies, platforms, affordances, and networks come together to produce certain effects.

Epistemology
In my research, I’ve strived to respect the “voice” of my participants while remaining cognizant that I as the researcher am the one interpreting what they say. When I interview someone, I’m not plucking a piece of knowledge that already existed in their brain. I’m having a conversation in which both of us are producing meaning together. This interpretation and shared construction of meaning form the basis of constructionist epistemology, at least the way Crotty defines it.

Consciousness and intentionality lie at the core of constructionism: “When the mind becomes conscious of something, when it ‘knows’ something, it reaches out to, and into, that object” (Crotty, 1998, p. 44). But post-structuralist and post-humanist perspectives reject these framings of consciousness and intentionality, de-centering language (a social structure) or humans as the source of knowledge construction. Crotty describes the epistemology of subjectivism as a subject imposing meaning on an object (p. 9). But my nascent understanding of post-structural and post-human perspectives is that they reject a clear separation between subject/object in the first place. Meaning is not “imposed” on anything, but constituted by the intra-action (Barad, 2003) between various human and non-human actors.

Right now, my work and research approach fall within the constructionist epistemology. I am interested in taking my work in a more post-structural and post-human direction. But writing this has made me realize that doing so requires rethinking my understanding of agency.

Concluding Thoughts
This is the first in an occasional series of posts in which I work through the type of scholar I am and the type of research I do. I initially envisioned these posts as quite future-focused (what do I want to be/do?), but I now write them with the recognition that I’m “always already” there (Barad, 2003).

I thank Kari Kraus and my classmates in INST800, Jason Farman and my classmates in AMST628N, Shannon Jette and my classmates in KNES789N, Annette Markham + Kat Tiidenberg + Dèbora Lanzeni and my classmates in the Digital Media Ethnography workshop, Karen Boyd, Andrew Schrock, Cynthia Wang, Shaun Edmonds, and Eric Stone for indulging me in conversations about theory/method over the past two years.

In addition, I thank the UMD Libraries, the Interlibrary Loan service, this laptop on which I read articles and wrote notes, the printer, paper, and ink that came together to give me physical copies of texts, the pens for enabling me to take notes on those texts, Twitter, Evernote, WordPress, Scrivener, Wifi connections, and finally the desks and chairs on campus, at home, at conferences, and on the Metro and Amtrak that supported my body while I read and wrote.


Barad, K. (2003). Posthumanist performativity: Toward an understanding of how matter comes to matter. Signs: Journal of Women in Culture and Society, 28(3), 801–831.

Barnacle, R. (2005). Research education ontologies: Exploring doctoral becoming. Higher Education Research & Development, 24(2), 179–188.

Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process. London: Sage Publications.

Creating a Productivity System that Works for Me

Like many people, I enjoy having a routine. This summer, after moving from a cubicle into a shared office space, I began going to campus more routinely and working a similar schedule each day. The regular schedule plus the commute activated more natural boundaries around “work” and “home” time. On campus, I focused a bit more and got distracted a bit less. Most important, I felt anchored. I cherish the self-directed and flexible nature of PhD life, but it sometimes left me feeling like a dandelion blowing in the wind.

This new routine has done wonders for my sense of well-being. But it hasn’t done much for my time management skills. I used to think I was great at time management because I always met my deadlines and my expectations. After an exhausting first semester in the PhD program nearly two years ago, I realized I was terrible at time management. The only reason why I met my deadlines (and satisfied my perfectionist tendencies) was that I let work take priority over everything else. If I didn’t feel like I had accomplished enough by 5:30, I’d keep working until 9, 10, or 11 pm. If I didn’t feel like I had gotten enough done by Friday evening, I’d let work consume Saturday and/or Sunday. This didn’t leave my body, my mind, or my husband very happy.

Since that realization, I’ve re-framed my attitude toward work (it is an important part of my life, but not the most important) and changed my practices (like regularly going to campus). The fall semester started this week, which means goodbye languid summer days, hello bustling campus and fuller schedule. I don’t like feeling overwhelmed by this, and I don’t want to spend the next four months waiting for winter break.

Various productivity systems, designed for academic life and beyond, suggest keeping a detailed schedule or assigning specific tasks to each day. I tried these approaches and found them rigid and stifling. So I’m going to adapt their principles into a system that works for me.

First, I commit to a consistent weekday wake-up and go-to-bed time. My alarm goes off at the same time every weekday, but I snooze it for 5 to 75 minutes. I’d like to limit the snoozing to about 10 minutes. To help with that, I intend to go to bed at a consistent time, and to begin my bedtime routine 30 minutes prior to that bedtime.

Second, I will go to campus on weekdays unless I have a scheduling reason to work from home. My experience this summer reminded me that it’s much easier to treat the PhD as a job when it involves a distinct workplace and a commute.

Third, I’ll restart a practice I followed when I worked full-time — tracking my hours. I was fortunate to have supervisors who let me take comp time if I ever worked more than 40 hours per week, so you bet I tracked my hours. I can get obsessive with practices like this, which is why I refrained from tracking my hours as a PhD student. But since I work on various projects, eagerly say yes to other projects, tend to fall into rabbit holes while working on any project, and am a recovering perfectionist, I think time tracking is essential to improving my time management skills. I keep things simple and do this in a spreadsheet.

Fourth, I’ve created a task management workflow to help me figure out what to work on when. I’ve written a month-by-month list of my commitments, deadlines, and events. At the end of each week, I’ll spend half an hour previewing the next week. I’ll create a to-do list with the tasks that need to be completed that week. I’ll then look at the calendar and schedule time blocks to work on those tasks. As I go through the week, I can move things around if needed. After a few weeks of this, I hope to have a better sense of how much I can accomplish in a typical 40-ish hour week and how much time to budget for certain tasks. This will (hopefully) help me let go of the perfectionist tendencies, resist the temptation of distractions (Twitter, I’m looking at you) and understand the “price” of saying yes to a given task.

Finally, I commit to keep my campus desk tidy. Stalagmites of papers and books make my home desk an uncomfortable place to work, and looking at them unsettles my mind. Yes, I’d like to clean them off, but this is about baby steps. My campus desk is big enough that the two piles that have already sprouted aren’t in the way. I’d like to keep it that way.

So that’s my plan for this semester. Check with me in four months to see how it goes.

Time to Focus

A few weeks ago I played with a cousin’s four-year-old daughter.

“Look at the map!” she cried, laying out a Lego theme park map on the ground. “First we have to go through the ninjas, then we have to go get Hercules.” We ran around the park, her one-year-old brother wandering as we zipped by. She kept yelling out commands, and I’d ask her what we had to do next.

Suddenly, this morphed into a game of whales and humans. Namely, we kept being turned into whales and needing to turn back into humans. We did so by “eating” ice cubes that we had dumped on the ground. Every time she pointed at me, I bellowed like a whale.

Eventually, another child joined in and this turned into a game of house. We each claimed a tree and rushed back and forth, bringing each other cakes and omelettes and turning into superheroes and stealing each other’s cars.

Preschool-age children are full of energy and love make-believe. Engaging with them can be exhausting, but I found it freeing. I could say or do anything and she’d run with it. I declared we’d found a safe zone, I taped silly glasses to my face, I made strange noises, and she took it all in stride.

A few years from now, she’ll outgrow this tendency toward fantastical exuberance. She’ll go to school and focus on learning how the world works and managing a social life. I won’t be able to run up to her and say, “Bloooooooop” with pink flowery glasses taped to my face. In Deleuzian terms, her attention will be territorialized — marshaled into familiar, conventional, and normalized ways of thinking.

I feel myself making a similar transition from doctoral student to ABD. Since advancing to candidacy in May, I’ve been excited to write my dissertation proposal. Finally, I get to focus on the whole reason why I entered the PhD program in the first place. But writing the proposal means I have to focus. The floodlight of my academic gaze must sharpen into a spotlight. And that makes me a little sad.

I now look back on the past year and see it as a period of academic fantastical exuberance.

What’s epistemology? What’s ontology? What did Foucault and Latour and Haraway and Barad say? What’s my theoretical perspective? I’m an interpretivist. No, maybe I’m a critical researcher. Ooh, what is post-humanism? What happens if I drop terms like assemblage and actant and materiality into conversation? Do I want to do ethnography? Is my work content analysis or textual analysis? What discipline am I in? I’m in social computing. No, computer-mediated communication. Ooh, what if I call myself an internet studies researcher? Maybe I’m in media studies.

Hey, look at all these citations! What’s that? And that one? And that one? Let me download the article right now. And these books, let me get them from the library. Oooh look, these other books seem relevant too; let me check them all out. Here’s some sociology, some anthropology, some communication, some feminist scholarship, science and technology studies work, some cultural studies, ooh, political theory, whoa, I didn’t expect to go there, but OK, sure.

What has agency? Hmm, I didn’t think that was relevant to my research, but alright, let’s do that. Oh, you want to deconstruct binaries. Fine. Subject/Object. Nature/Culture. Human/Thing.

And I’ve loved it. Loved it, loved it, loved it. But I can’t possibly read everything I’ve downloaded and checked out before writing the dissertation proposal. I can’t trace the history of all the theories I’ve started learning about.

Yes, the reading and the note-taking and the conversations will continue. But if I want to stay on track, more of that work needs a clearer purpose than simply, “Oooh, that sounds interesting.”

I’m hitting a new academic stage, and it’s time to focus.

Escaping the Trepidation Trap

In the first sentence of his chapter “On Recalling ANT,” Bruno Latour lists four things that “do not work” with actor-network theory: “the word actor, the word network, the word theory and the hyphen! Four nails in the coffin” (p. 15). Apart from its bluntness, this sentence stands out because Latour is one of the creators of actor-network theory.

Any theory has its proponents and critics. One person finds a particular theory useful to inform a worldview or structure a research project, while another finds that theory bunk and proclaims so loudly. Such is the world of academia. We read what others have written, we think about it, sometimes supporting that thinking by studying something empirically, and we write. Often, our writing engages directly with what others have written — affirming, refuting, re-interpreting, critiquing the work of others, taking their work in a new direction, arguing that their work is flawed.

It is a world I happily inhabit, largely because of this focus on writing. I wrote poems and novels as a child, studied journalism as an undergraduate, and consider writing for the public a core part of my personal mission. I’m a published writer, and I don’t fear being published. I entered a PhD program in part to connect my writing with theory, yet I also approached this work with some trepidation.

Actor-network theory is well known and widespread. It is also recent enough that the people who created it are still alive, meaning they can see and respond to the ways that others use or critique their theory. While reading “On Recalling ANT,” I envisioned myself in Latour’s shoes. How would I feel if I invested countless hours into developing a theory, only to see others misunderstand or misinterpret it, or take it in directions that go against its original intention? How would I respond if people challenged my work or pointed out its flaws? Publishing theoretically informed research felt more vulnerable to me than other types of writing because it meant opening myself to critiques that my thinking was wrong. Which to me felt tantamount to critiquing my existence.

Last year, I wrote a paper in which I took a theory in a different direction. I wasn’t sure whether I correctly applied the theory, but the paper survived the peer review process and was published. Shortly after publication, I was invited to present the paper at a workshop that the theory’s founder helped organize. If I had applied the theory incorrectly, I’d find out now. The theorist gently critiqued a few other presentations, but mine proceeded unscathed. Though relieved, I wondered what would happen the next time I used a theory in my work. And the next time. Approaching my research with trepidation seemed exhausting.

Earlier this year, I found my way out of the trepidation trap. I was at another workshop, this time sitting in the audience listening to a theorist whose work has resonated with me for years. While summarizing her research, she remarked, offhandedly, “I’m still figuring out” the theory.

This stunned me. This theorist has published books and articles on this theory; her name is almost synonymous with it. If she’s still figuring it out, that means that any of us who use the theory are also figuring it out. And that means there’s no one “correct” way to interpret or apply the theory.

Someone may dedicate their entire professional life to developing a particular idea; indeed, their name may become synonymous with the idea. But ideas don’t belong to one person. The acknowledgements page in any book reveals the multitude of people involved in developing an idea. And, in the spirit of ANT, we must not forget that non-human actors also play a role. The library, my computer, and my glasses deserve as much credit as the people around me for bringing ideas to fruition.

So if ideas exist separately from authors, then critiques of ideas exist separately from critiques of authors. There’s no reason for me to equate a critique of my thinking to a critique of my existence. I escape the trepidation trap by letting go. I let go of the assumption that thoughts define me. I let go of the sense that there’s a “right” answer. And most important, I let go of the fear.

After re-reading the theory I used in the paper last year, and reading other work that engaged with the theory, I recently wrote another paper on this theory. In it, I critiqued my prior use of the theory and offered a more nuanced analysis. I felt comfortable doing so because theoretically informed research is not some pedestal I’m trying to climb onto. It’s a messy, iterative practice, just like everything else in life. We read, we write, we engage, we reflect, we read more, we write more, we revise, we clarify. We change. Bruno Latour himself moved away from the social constructionist views that pervaded his earlier work.

But separating the idea from the author is not a license to disengage. On the contrary — Latour ended his chapter “On Recalling ANT” with this:

“[Y]ou cannot do to ideas what auto manufacturers do with badly conceived cars: you cannot recall them all by sending advertisements to the owners, retrofitting them with improved engines or parts, and sending them back again, all for free. Once launched in this unplanned and uncharted experiment in collective philosophy there is no way to retract and once again be modest. The only solution is to do what Victor Frankenstein did not do, that is, not to abandon the creature to its fate but continue all the way in developing its strange potential” (p. 24).

And that strange potential includes possibilities for illumination, not just openings for critique. One person recently told me they found the theory paper I published last year helpful because they had never thought of using the theory in that way. And that, more than anything else, is why I do this work in the first place.

A Crack and a Relief

It happened. The crack, when “you can no longer stand what you put up with before, even yesterday” (Deleuze & Parnet quoted in Jackson & Mazzei, 2013); when “one can no longer think things as one formerly thought them, [and] transformation becomes both very urgent, very difficult and quite possible” (Foucault, quoted in St. Pierre, 2014).

For the past several months, I’ve been trying to understand epistemology and ontology — what they mean, what they mean for me, and what they mean for my research. I read “The Foundations of Social Research” by Michael Crotty. I read “The Body Multiple” by Annemarie Mol. I read other articles, mostly from communication, science and technology studies, and cultural studies.

I continued to analyze quantitative and qualitative data and write papers, but I felt increasingly perturbed, as if this work wasn’t adequately capturing what the research team said we were studying. I kept spouting my one-line summary of my dissertation research: “I study how parents post pictures of their kids online and what that means for kids’ identity development, sense of self, and understanding of privacy,” even after realizing that I’m not actually studying parents or kids or online or identity.

Epistemologically, I sensed that I wasn’t a positivist, but I couldn’t figure out whether I was a constructionist or a subjectivist. Theoretically, I didn’t think I was an objectivist, and I sensed that I might be an interpretivist who could one day become a critical scholar. I could be doing phenomenology, or potentially hermeneutics, or maybe symbolic interactionist work. I remained unsure of aligning myself with any methodology besides “qualitative,” which I do primarily through the methods of interviews and textual analysis.

Today, all of that fell apart and also came into sharp relief, thanks to readings on “New Materialism” and a conversation at the weekly journal club of my university’s physical cultural studies program.

I realized that I’ve been using what St. Pierre (2016) calls “conventional humanist qualitative methodology” — interviewing people and coding the data as a way to capture some aspect of their lived experience. I thought I’d sidestepped positivism because I don’t offer “hypotheses,” don’t calculate “inter-rater reliability,” and don’t purport to “predict” behavior. But I do define “research questions,” collect “data,” and code it to fill a “gap in the knowledge” — all trappings of logical positivism.

And this would be fine, except that I’m also discussing Foucauldian analysis and Actor-Network Theory and assemblage. And I dig it. It resonates with me and it’s informing how I approach my dissertation. So no wonder the conventional research process I’m using feels stale — it does not align with the new (to me) theory/methods that are shaping the way I understand the world and the research that occurs in it.

But doing research from a Foucauldian or ANT or assemblage (or feminist or queer or post-structuralist or post-colonial or post-humanist or…) perspective requires more than rethinking methods. It means letting go of a belief that any research, no matter how rigorously or reflexively it is done, can capture what is “going on.” It means accepting that research, and the production of knowledge, is always partial, always incomplete. It means that no matter how precisely or evocatively we write about our research, it remains a semblance.

But nevertheless, I feel this work is urgent. I know this work is difficult. And yes, I believe that it is possible.

(And while I will continue doing research in the conventional humanist qualitative vein, because learning new things takes time, Jackson & Mazzei show how to use these theories to think with typical interview data “within and against interpretivism.”)