Making Indian Food

This summer, I want to learn how to make my favorite Indian foods, the ones that remind me of home and childhood: Mom’s cooking, in short. I’ve attempted this before but found it frustrating. Other second-gen Indian kids will sympathize when I say that my mom’s instructions have been, well, less than precise. Take dal, the lentil-based staple in our house.

(As an aside, I was about to describe dal as a kind of soup or stew, but those words do not resonate with me at all. Such comparisons, while often meant to help those unfamiliar with the food, can nevertheless reinforce the idea that non-Western food needs to be made legible through Western food to acquire legitimacy, rather than simply stand on its own. So, no, dal is not like a soup or stew. Dal is dal, and if you’re unfamiliar with it, I encourage you to try it and acquaint yourself!)

Dal is not a specific dish, like eggplant parm, but a type of dish, like pizza. Just as many different ingredients can top a pizza, many lentils can get cooked into dal. I grew up hearing their names—masoor, moong, urad, chana—and seeing their distinct dried bodies—flat orange disks, oblong green cylinders, split yellow spheres—but I didn’t know which names belonged to which lentil. During a weekend visit six years ago, I opened my parents’ pantry, poured each variety into a catori (small glass bowl), photographed it, and with their help noted each name.

So, how to cook dal? Mom said to put the lentils in a pot and add enough water so they are more than submerged. Add namak (salt), haldi (turmeric powder), and lal mirch (red chili powder), bring to a boil, then reduce to a simmer until cooked. Make the tadka by heating oil and adding spices until they’re cooked, then add the tadka to the dal. Mix until everything is cooked together, taste, add more salt or spices if needed, and that’s it.

I peppered her with follow-up questions: Do I soak the lentils first? How much of each spice do I put in the water? How long until the dal is cooked? You can soak them but you don’t have to. A few pinches of namak, only a little haldi and lal mirch, but of course it depends on how much dal you’re making. The dal is cooked when the lentils and water don’t look like separate things.


For as long as I’ve known him, my husband has had a binder containing sheet-protected printouts of his paternal grandmother’s recipes. Each page offers what one expects from a cookbook: name of dish; list of ingredients, with amounts specified; step-by-step instructions with specific times and temperatures.

My queries to Mom were not eliciting that kind of detail. If I wanted to create a similar cookbook of her recipes, I was going to have to extract the information from her line by line. I walked through each step of the dal instructions, asking for amounts and timings. Mom produced some numbers, but it was clear she was guessing. Hearing my sighs, she told me she didn’t cook with exact numbers. Having made this food for decades, she just knew. Intellectually, I understood that, but without directions, how was I supposed to gain that experience? She encouraged me to just try, assuring me that it wasn’t that hard.

I decided I’d watch her make my favorite dishes and quantify everything myself. First up was chana masala, chickpeas in an earthy, aromatic gravy. I dutifully observed each ingredient and its amount, noted the different levels of stove heat applied at the various steps, and recorded how Mom determined what to do (“Once the tomatoes are pulpy and smooth and the oil has separated and appeared on the sides, add the garbanzos”).

Since then, I’ve only written this kind of detailed recipe for one other dish—Indian rice—and that barely counts because it’s far simpler than chana masala. Instead, I’ve called Mom for general instructions when I want to make something and then searched online or in my cookbooks for specific recipes. I’ve made a few good dals here and there, especially last summer when many took up cooking as a pandemic escape, but most have been middling.

Now that I have the summer off, I’m re-committing to honing my skills. After a dismaying pot of dal last week made from a recipe in an American cookbook (I know…), I resolved that I was finished with recipes, at least for this goal. If I want to make food that tastes like Mom’s, I need to take her approach. It’s time to stop trying to divine the recipe and start developing the intuition.

Last week, I asked her about one of my absolute favorite dishes, whole urad lentils made into kali dal, aka dal makhani (though we don’t add cream). I cooked it on Tuesday. And you know, it was this close to hers. The aroma was spot on. The taste was a little light on the salt and tomato, but nevertheless closer than my past attempts.

Earlier today I opened a document on my computer and started typing notes—the steps I took, the things I’ll do differently next time. After all, this is how you improve, right? Record what I did this time so I don’t repeat the same mistakes next time.

But the words felt flat and unnecessary as they appeared on the screen. This is unusual for me, as someone who lives very much in language. After my demographics (woman, Indian-American), writer is one of my strongest identity affiliations. Writing is how I make sense of myself and the world. That’s a big reason why I’m drawn to post-structural philosophies and discursive methods in my research. Words are the avenue through which I express myself best.

Nevertheless, while writing those words about my kali dal this morning, I felt not the thrill of articulation, but the melancholy of loss. The words couldn’t capture how I knew after one taste that the tadka needed more tomato. How I knew while looking at the tadka that the tomatoes could have cooked longer or that the masala wasn’t quite incorporated enough. Even now, I can’t describe how I knew it. Or rather, I don’t want to. Because once I write it down, the words become the authority.

I’m in the middle of the book 1491: New Revelations of the Americas Before Columbus. Its author mentions that one of the reasons why Europeans considered Indigenous American cultures less sophisticated was that some (though not all) lacked writing systems or didn’t use them extensively. While I’m fascinated by the concept of society without writing, I cannot personally fathom living in one, given how much writing has enriched my life. But this morning’s experience reminded me that while writing is valuable, it doesn’t capture everything. There are other ways of knowing. And when it comes to something as material, as sensual, as making food, perhaps those other ways are worth attending to.

My fixation on documenting Mom’s food in recipe format prioritized process over practice. I wanted her to tell me what to do rather than learn how to do it. That lesson I grasped years ago, after meticulously observing her chana masala process but not feeling any motivation to make the same detailed documentation for my other favorite dishes. But I didn’t fully embrace it until today, when I realized that I don’t have to write down my mistakes to learn from them. I don’t need the words when I have the experience.

I know what this food is supposed to taste like. I grew up eating this food. This food made me. And now, I want to learn to make it.


Gathering in Gratitude: My Commencement Speech

The College of Information Studies selected me as the PhD student speaker for yesterday’s commencement ceremony. The recording of the ceremony is below; my speech begins a few seconds after minute 7. The text of the speech is below the video.

Hello, fellow graduates of the Class of 2021. What a year. But we’ve made it through and we’ve earned our degrees. Congratulations.

We’re at commencement, which means we’re at the beginning of a new chapter in our lives. But for the next few minutes, I’d like to reflect on what brought us here. To the iSchool. And to this moment.

Can you remember when you first realized that you wanted to pursue a degree in information? Maybe a course caught your eye when you were scrolling through the catalog. Maybe a friend or mentor said, “Hey, you should check out the iSchool.” That’s what happened to me.

It was back in the Fall of 2010, a year after I’d finished undergrad. I was working as an internship coordinator here at the university but itching to pursue graduate study. The problem was, I didn’t know what I wanted to study. I looked at programs in journalism, law, public policy, education, cultural studies, creative writing, and while they all looked interesting, I kept feeling like I was a square peg trying to fit into a round hole.

One evening after work, I went to see a beloved advisor and mentor from my undergraduate days, Olive Reid. We spent hours catching up and I told her about my frustration trying to figure out my future. I loved writing, I loved research, and I was fascinated by how social media was changing entire industries, like journalism, and raising questions about important societal values, like privacy. But where could I study those things? Olive told me not to worry so much. She was right about that. But she also asked if I had looked at programs in information.

“What?” I asked, “You can study information?”

“Yeah,” Olive said, “UMD has an entire college for it.”

The next day, I searched online and came across the iSchools, a consortium of information schools from around the world, including ours. “The iSchools are interested in the relationship between information, people and technology,” its website said. “The iSchools take it as given that expertise in all forms of information is required for progress in science, business, education, and culture.”

Finally – a square hole for my square peg self. I had found where I belonged. Since then, I’ve earned a master’s, and as of today, a PhD, in information.

Someone once told me that iSchools are like the Island of Misfit Toys, the place where we go when we realize we don’t quite fit in any other discipline. I understand that metaphor. But I don’t see us as misfits. Instead, I think of us as shape shifters, moving fluidly within the web of people, information, and technology.

We help people navigate a datafied world. We preserve information and cultural heritage for future generations. We design new technologies. We advocate for accessibility, security, equity, and justice. We are librarians, archivists, data analysts, developers, designers, policymakers, programmers, artists, educators, and yes, writers, and researchers.

That conversation with Olive 11 years ago changed my life and set me on the path to this very point. Who did that for you? Who sparked a new idea? Suggested a different direction? Made you feel welcome? Encouraged you to persevere? You might not even know the person directly. Maybe someone wrote a book that fascinated you or produced a podcast that changed your perspective. What big or small thing did somebody do to inspire you to study information?

As we celebrate our achievements over the next few days, I invite you to reach out to that person, or those people, to let them know they made a difference to you, to thank them for helping get you to this momentous occasion of earning your degree.

I’ll start. Thank you of course to Olive, for that conversation, and also to my incredible PhD advisor, Dr. Jessica Vitak. Jessica, you nurtured my ideas, patiently listened to all my rambling, cheered me on when I felt stuck, and reminded me not to take any of this too seriously. You are the kind of mentor that I strive to emulate.

Alright, Class of 2021. Now it’s your turn.

Congratulations again to all my fellow graduates. We made it.


Today I graduate from the University of Maryland with a doctor of philosophy in information studies. My college is hosting a 30-minute virtual ceremony followed by a reading of graduates’ names. This graduation is different for obvious reasons—no donning of regalia, no gathering in a campus auditorium, no handshakes, hugs, or hooding, the tradition of an advisor awarding their student an academic hood to signify the student’s attainment of the highest academic degree and their transition from student to colleague. That’s fine; 15 months of pandemic life has accustomed me to virtual events, and I’ll attend the College’s in-person ceremony this December.

This graduation is different because of its un-intensity. My previous two graduations were finish lines for grueling marathons. The last few weeks of the semester were a frenzy of final papers, projects, and exams, including thesis research. I just had to make it to graduation, I told myself, and after that I could collapse and take a break. In both cases I had taken on so much schoolwork that I had no time or mental energy to search for a job, which meant I had no idea what was coming next.

Both times were also emotionally chaotic. As undergraduate graduation loomed, the pressure cooker in which I’d caged myself began to crack. The thing I excelled at—school—was ending, and there was no syllabus to tell me what to do next. Five years later, my master’s graduation was one of a confluence of life changes: ten days after graduation, I moved 500 miles, and ten days after that, I got married. In both cases, graduation felt like stepping out of a busy landscape onto a blank canvas, and the void agitated my professional self.

This time, graduation is a formality. I defended my dissertation in March and submitted the final draft in April, so my degree requirements have been complete for a month. The last course I took was in Spring 2018, and I haven’t felt like a student for years. I’ve felt much more like an academic. With the exception of teaching, I’ve been doing the same work as academics: conducting research, leading projects, applying for funding, working with students, peer-reviewing studies, and serving on college and university committees.

This time, I do have a job lined up. It’s a faculty position, which means I’ll get to continue what I’ve been doing, albeit in a new institutional environment. That’s the biggest reason why this graduation feels like a mark rather than a break. Undergraduate me was confronting the reality of adulthood, and master’s me was adapting to a partnered adulthood. Unlike both of those times, I finished this degree with a clear sense of what I wanted next professionally, and I feel extremely fortunate to have found a position that lets me do that work.

This graduation is also different because it’s the end of my formal education. When I finished undergrad, I knew I wanted to attend graduate school. And when I finished the master’s, I sensed doctoral study was in my future. Now, there’s no higher degree left to earn. I won’t say I’m done earning degrees; I half-jokingly told my spouse I might one day use my tuition remission benefits to get an MFA in creative writing. But in terms of professional advancement, yes, I’m done studying.

Of course, this year is not how I envisioned wrapping up doctoral study. I relished my routine of commuting to campus, writing in my shared office, and chatting in the common areas. I presented my dissertation proposal in a classroom full of colleagues, and I presumed I would defend the dissertation in the same room. Instead, I wrote the dissertation at a dining table in a small apartment while a deadly virus circulated outside, and I defended it over Zoom, at the same table.

Me celebrating after passing my dissertation defense and officially becoming Dr. Priya Kumar

But if I learned anything from my undergraduate experience, it was that life does not work according to plan. Things happen, circumstances change, and you make your way through the world accordingly. The gift of rituals like graduation is that they prompt us to pause, to reflect, and to give thanks for what we have and what has brought us to the current moment. I recorded a speech for today’s graduation ceremony, and that was its theme. (Here’s a link to the recording and text of my speech.)

Doctoral study has been the most fulfilling experience of my life. To everyone who had a hand in making that so—thank you.

Re-making Boundaries

I study privacy. I’ve done so since joining the field of internet studies eight years ago, and I plan to do so for the foreseeable future. But early in my PhD program, a creeping sense of disillusionment made me question whether this work mattered. I was part of a few research teams studying how people considered privacy in relation to various digital technologies. While analyzing our interview data, I encountered quote after quote of people saying they didn’t care if anyone saw their data or they didn’t think their data had any value. With a few exceptions, it seemed like privacy was not a primary concern for most of the people we interviewed.

Convenience stood out to me as one reason why. Digital technologies made it easy for people to go about their everyday tasks, and this ease outweighed any privacy concerns they may have had. This makes sense given that so many digital technologies are designed to integrate seamlessly into everyday life. Or rather, we (at least those of us in the U.S.) are surrounded by a cultural narrative that tells us technology is supposed to make our lives easier and better, even if that doesn’t necessarily happen in practice.

These technologies aren’t disappearing soon, in part because people want or need them. But they also raise a host of concerns, privacy being just one. Fighting for privacy always struck me as an uphill battle, but confronting this convenience narrative felt like having to scale the face of a cliff.

Until I realized it’s just that. A narrative. A story.

Might we tell a different story?

To do that, we have to see the story and to realize just how deeply it has burrowed into our social psyche. A few weeks ago I started seeing a TV commercial advertising a bank’s mobile app. A mother and her elementary school-age son walk out of a suburban house and toward the car parked in front of it. The mother pulls out her smartphone and the voice-over enthusiastically remarks that with the bank’s app, you can pay your bills from anywhere, anytime! And then get back to what really matters! The mother puts her phone away and her son runs toward her, the commercial ending with their exuberant embrace.

This is the convenience narrative at work. “You can pay your bills from anywhere, anytime!” works as a compelling marketing message because our society treats freedom and choice as the ultimate markers of the “good life.” But perhaps paying bills is not something to do anywhere, anytime. Perhaps paying bills is something to devote one’s full attention to, even if it only takes a few minutes.

Mary Gray summed it up perfectly at a panel yesterday when she remarked, half-jokingly, that she wanted to banish the idea that flexibility is a good thing. The panel, “Post-work Productivity,” was part of the annual conference of the Association of Internet Researchers (AoIR), happening virtually this week. That conversation examined how “the promise of ‘anytime, anywhere’ has turned into a risk of ‘all the time, everywhere’.” Although the panelists focused on the erosion of boundaries in the context of work, their critique applies to other dimensions of life.

At today’s plenary panel on “Living Data,” Seeta Peña Gangadharan discussed the Our Data Bodies project, which critiques and resists the incursion of data-driven technologies into marginalized communities. She talked of these systems as optimization technology, drawing attention to the logics of efficiency that underpin them and that they, in turn, advance. Lately, she’s examined refusal as a response to datafication. This isn’t refusal in the sense of rejecting technology outright, but of questioning the terms of the deal and of imagining new ones. In other words, it’s about resisting and remaking power relations.

We can resist the convenience narrative. We can reject the idea that constantly fluid boundaries are a good thing, or that seamless data flows are the price we have to pay for progress (another word that Mary wanted to banish). When we hear these words, we must ask, flexibility for whom? Progress on whose terms? Convenience at what cost?

During an AoIR discussion on internet and sociality yesterday, Nancy Baym commented that technology’s ability to manifest boundaries in new or different ways is part of what makes it enduringly exciting to study. In examining technologies, and the lives and societies and structures within which they are entangled, we can interrogate those boundaries and ask, “How do we make better ones?”

That’s the kind of privacy research I want to do, that I am doing as I write my dissertation.


Patience has never been one of my strong suits. When inspiration strikes, I want to act on it immediately. And since ideas emerge faster than the actions to execute them, I perpetually feel like I have so much to do.

Work-wise, two projects currently hold my attention: the dissertation and the paper I’m leading through my job as a research assistant (RA). Three other paper drafts await. Another two papers exist in outline form, and two ideas for large-scale projects live as sketches. And then there’s the dozen or so one-line “Hey, wouldn’t it be cool to do X” ideas.

I’ve accepted that my attention needs to remain on the dissertation and the RA project because I want to graduate and I want to keep my job. I thoroughly enjoy these two projects, so it’s not that I would prefer to work on other things. Instead, I wish I could work on everything at once. This enthusiasm is a function of excitement, but also uncertainty. Regarding work projects, my thinking went like this: I’m in a PhD program right now, so I definitely have the next few years in academia. The academic job market is a crapshoot [this was true long before covid], and I might not get any more time in academia. So, let me cram as much research as I can into this time.

Reality, in the form of mental and bodily limits (working all the time wears you out) and relationship pressures (my husband, family, and friends want to spend time with me), luckily forced me to bound this all-encompassing attitude toward work after my first year in the PhD program. Beginning a meditation practice after year two attuned me to being present in the current moment and to accepting impermanence as inherent to reality. Yes, I might feel certain that I have four or five years in academia through the PhD, but actually, nothing is certain beyond what exists right now, in this moment.

So, while I continued to juggle several projects and make plans for the future, that list of research ideas became less a reflection of inadequacy (you’re not getting enough done) and more just that—a list of research ideas. I acknowledged that each week contained ~40 hours of work time, a fairly easy transition once I paid attention to the pleasure of spending time with loved ones and cultivating hobbies outside of work. But my relationship with this list remained one of détente: projects took time, and I only had a certain amount of it in each given day or week. This natural limit prevented me from accomplishing everything on that list. I accepted it, but begrudgingly. Time was the adversary I knew I’d never beat.

I’ve now grown to see time as a feature rather than an unfortunate constraint within academic work. Some, especially those outside academia, wonder why a PhD takes so long. When I started the program, I vowed to finish in four years. And if I’d avoided taking on side projects and stuck to a conventional research project, I believe I could have. But around the start of year three, after I’d finished coursework and advanced to candidacy but before I wrote my dissertation proposal, I knew I needed more time. I’d learned that entirely different ways of doing research and understanding the world existed, and I wanted my dissertation to engage with those different ways of knowing. Just like ideas outpace action, translating these exciting things I’d started learning about into a coherent and meaningful scholarly contribution required time. Time for me to read (much) more, time for those ideas to marinate, and time for me to talk through them with other people. If this was to be my only time in academia, I wanted as much fulfillment as possible. Producing a gratifying dissertation mattered more to me than finishing it in an arbitrarily determined number of years.

I recognize the amount of privilege entangled in even being able to make such a statement—institutional and social support, a welcoming advisor and committee, and a certain level of personal financial security. Nor is five years some magic number. People produce quality work in fewer (and more) than five years, and people who want more time cannot always take it. I’ve also heard the refrains: “The only good dissertation is a done dissertation” and “The dissertation is not your magnum opus.” But if an academic career is not in the cards for me, then the dissertation in some ways will be a magnum opus.

That’s not to equate the dissertation with everything I want to accomplish in research. My intention in taking an extra year was not to cram all the projects from that list of research ideas into the dissertation, but to refine the one idea I was exploring through the dissertation. The dissertation topic—the privacy issues of parents posting pictures of their children on social media—hasn’t changed, but the way I’m studying it has. The project is better because of the extra time I am thankfully able to dedicate to it.

As I’ve understood how this premise extends beyond the dissertation to many aspects of academic work, my adversarial glare toward time has softened. Last fall, I was stunned to read a timeline that one of my academic role models, Elizabeth St. Pierre (2019), presented for one of her ideas. While doing her doctoral work in the early 1990s, she increasingly felt that postmodernism and poststructuralism were incommensurable with conventional research methodologies. By 2003, she had a name for the different mode of research she felt was needed to engage with those theories—post-qualitative inquiry—and she devised courses to work through this mode of research with like-minded students. In 2010, she presented her first conference paper on the topic. Over the past decade, she has published more than 20 academic articles on the topic, and in 2019 she was writing a book on it.

In other words, it took nearly 30 years for St. Pierre to refine an inchoate idea from her graduate student days into a book-level concept. Sure, career-defining achievements like opening up a new research paradigm obviously take time. But so do the bread and butter of academic contributions: journal articles. I recently saw a Twitter thread from Omar Wasow describing how he cultivated the seed of an idea from a graduate seminar for 14 years before a top journal in his field accepted it for publication. Closer to me in career stage and discipline, Kate Miltner tweeted that one of her latest journal articles was an idea she wrote about in a graduate seminar five years ago.

I’ve been studying parents and social media for seven years, so the fact that I’m only now finding the vocabulary to articulate my research contribution suggests I’m right on track. And while I’m fortunate to have two journal-level publications on my dissertation topic, I’ve been itching to publish more since my thinking has evolved significantly. These stories from academics help me tell that impatient voice to cool it.

Of course, time isn’t the only ingredient that matters. Those three unfinished papers have sat on my computer untouched for nearly a year, and they haven’t gotten any better. Those two outlines aren’t going to write themselves into papers. If I want those ideas to get out into the world, at some point I’ll have to do the work.

I began this essay referring to patience, and my lack of it. But perhaps the more valuable point is one of prioritization. Time, among many factors, prevents all of us from accomplishing everything all at once. Time also makes our projects better. But what we accomplish—and when—is partially a function of what we prioritize (acknowledging that people’s circumstances influence what they can prioritize).* Early in my program, I learned to prioritize health and relationships just as much as, and in some cases, more than work. Intellectually, I’ve prioritized the research directions that resonate with me over more conventional means.

Academia is full of people telling you what to do. Research supervisors, thesis committees, funding bodies, journal reviewers. Time is one way to respond: what do you have time to do? But priorities are another, perhaps more important, way: what do you want, or need, to do? And how can you make time for it?

*All of us would be better off if the educational institutions that employ us, the publishing entities and academic associations that rely on our labor, and the publish-or-perish culture in which we operate recognized that each one of us has priorities other than work and that each one of us needs space to attend to those priorities. In other words, I’m *not* making the neoliberal argument that identifying our individual priorities would somehow magically overcome the exploitative dimensions of academia.

St. Pierre, E. A. (2019). Post Qualitative Inquiry, the Refusal of Method, and the Risk of the New. Qualitative Inquiry, 107780041986300.

Exploring Digital Privacy and Security in Elementary Schools @ CHI 2019

How do elementary school educators think about privacy and security when it comes to technology use in the classroom? What privacy and security lessons do students receive? Below, I describe findings and recommendations from a paper I co-wrote on this topic with Marshini Chetty, Tammy Clegg, and Jessica Vitak. I’ll present this paper at the 2019 ACM Conference on Human Factors in Computing Systems (CHI).

What did we do? Schools across the United States have integrated various digital technologies into K-12 classrooms, even though using them poses privacy and security concerns. As part of a broader project on how children ages 5-11 conceptualize privacy online, we wanted to understand how elementary-school educators decided what technologies to use, how privacy and security factored into these decisions, and what educators taught their students about digital privacy and security.

How did we do it? We held nine focus groups with a total of 25 educators from seven school districts in three metropolitan regions in the U.S. Our participants included teachers, teaching assistants, and student teachers.

What did we find? Educators used a range of digital devices, platforms, applications, resources, and games, some that their districts provided and others that school media specialists recommended. To them, privacy and security meant responsibly handling student data (e.g., login credentials) and minimizing students’ inappropriate use of technology. They largely did not give students lessons on privacy and security. Some educators felt such lessons were not necessary; others found it difficult to make such lessons resonate with their students.

What are the implications of this work? We see an opportunity for the HCI community and those who create educational technologies to help students develop privacy and security skills. This can include designing “teachable moments” into technologies, such as prompts that ask students to think about where their data goes when they submit something online. These are not meant to replace privacy lessons, but to spark conversations between students, teachers, and parents as well as to help students think about privacy during their everyday interactions with digital technology. School districts and teacher training programs should educate teachers about digital privacy and security issues. Finally, the HCI and other communities must grapple with broader tensions about the datafication of education and its concomitant privacy and security concerns.

Read the CHI 2019 paper for more details!

Citation: Priya C. Kumar, Marshini Chetty, Tamara L. Clegg, and Jessica Vitak. 2019. Privacy and Security Considerations For Digital Technology Use in Elementary Schools. In Proceedings of the 37th Annual ACM Conference on Human Factors in Computing Systems.

This entry was cross-posted on the Princeton HCI blog.

Why Deepak Chopra is Wrong About Technology

One reason I value my newspaper subscription is that it reminds me not to take things for granted. Especially when it comes to technology.

In a recent column, the Washington Post’s Geoffrey Fowler recounted a conversation with alternative medicine advocate Deepak Chopra. Chopra has been criticized for promoting medical treatments based on pseudoscience, and his views on technology seem to be just as misguided.

“Technology is neutral, number one. Number two, it’s unstoppable,” Chopra told Fowler as they walked through the tech industry’s trade show, CES, earlier this month.

No, and no.

Technology does not just fall from the sky. People create it. Which means that our human frailties get baked right in. Type a question into a search engine, and the results can be just as racist as the responses you might get from people. Use a mathematical model to make lending decisions, and the output can be just as discriminatory as if you ask a loan officer.

People create technology, which means people can also address problems that result from (or are exacerbated by) technology. The question is who is responsible for doing so. Chopra lays that burden squarely on everyday users, blaming them for succumbing to technology’s power.

“I think technology has created a lot of stress for a lot of people, but that’s not the fault of technology,” Chopra told Fowler. “It’s the fault of the people who use technology.”

Sure, we could all be more intentional in our technology use. But absolving technology of any role in stress is disingenuous. It ignores the fact that the people who create the digital technologies we use every day design them to be persuasive. To Chopra, this isn’t the problem, but the solution; the app Chopra developed and the company he advises also employ persuasive design principles to hook people into using their products.

Chopra thinks technology will support well-being by collecting data and using it to optimize our environmental conditions, by, for example, changing the lighting in your house.

“So we just have to accept more surveillance as the price of this way of living?” Fowler asks. “For the advancement of your well-being, what’s wrong with that?” Chopra responds.

A lot.

First, this kind of blind faith in technology prevents us from talking about the limits of what technology can and should do. Second, it ignores the fact that using data-driven systems to address social problems too often ends up penalizing poor and working-class people. And third, it perpetuates a belief that human experience is simply raw material for companies to datify and exploit for financial gain.

Digital technology is, as Chopra says, “here to stay.” But this flawed understanding of technology needs to go.

For further reading, see the books I cited in the links above.

LaTeX: A Window onto Another Way of Thinking

Last week, I worked with LaTeX, a formatting system that uses markup language to create documents, for the first time. The experience was:

  1. Not as complicated as I imagined, and
  2. A glimpse into how the more technically oriented people in my research field think.

The decision to use LaTeX was not mine. The organizers of a conference where I had a paper accepted not-so-subtly told authors to switch from Microsoft Word to LaTeX (via the Overleaf interface) because the Word template they provided was so dysfunctional.

This understandably upset a lot of people. Many (myself included) had never used LaTeX and the revision period overlapped with the winter holidays. The research community raised valid concerns about the template issues that dogged the entire submission process, and I hope the conference organizers consider them. But that’s not my focus here.

By the time I sat down to re-format my paper, several researchers had voluntarily compiled a Google Doc with detailed, step-by-step instructions on how to transfer papers from Word to Overleaf. The process took me about a day-and-a-half and proceeded more smoothly than I expected. (Seriously, those researchers saved the day with that document.)

Now that I’ve used LaTeX/Overleaf, holy moly, no one should ever typeset a complex document in Microsoft Word again. I can’t believe how many hours of my life I’ve lost tinkering with tables, figures, and columns in that program, trying to divine what random collection of keystrokes and clicks would make everything snap into place on the page.

It’s not that things don’t break in LaTeX; they do. But when they do, I can more easily see why. I can check for a missing parenthesis or parse the error message and fix the problem. With Word, I have no idea why something breaks and, more important, little sense of why my actions fixed it.

Part of this facility comes from the fact that I have basic HTML and programming experience, so I generally understand what the tags are trying to do. And the Overleaf interface, which shows the compiled document next to the code, makes it easy to see the results of my typing.

LaTeX sees documents as collections of different types of text. So formatting in LaTeX means defining the different categories, either within the template or by using packages others have created for LaTeX, and then tagging the text to identify its category. So instead of manually changing the font and size of a heading, or selecting a heading style in Word, you just wrap the title in a \section{} command and the text automatically takes on the pre-defined font and size.
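As a minimal sketch of what this tagging looks like in practice (the commands here are standard LaTeX; the exact fonts and sizes that result depend on the template’s document class, not on anything the author specifies):

```latex
\documentclass{article}
\begin{document}

% Tag each piece of text with its category; LaTeX applies the
% class's pre-defined formatting for each one at compile time.
\section{Introduction}
Body text goes here, formatted as ordinary paragraph text.

\subsection{Background}
A lower-level heading gets its own pre-defined style, too.

\end{document}
```

Compiling this renders the headings in the class’s fonts, sizes, and spacing with no manual styling at all, which is exactly the shift in mindset from Word.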

I now get why so many people in computer science and math use LaTeX; it’s a programmer’s approach to formatting. Things (in this case text) belong to certain categories, and the author’s job is to label them. And with this realization, it also became a little clearer to me why some of my CS colleagues might struggle to understand the interpretive and mostly qualitative research I do, or why engineers might overlook social implications when they design technology. I study how people shape technology (and how, in turn, technology shapes them). And people cannot be slotted into categories. (As Mark Zuckerberg said at a tech event in 2016, “The code always does what you want—and people don’t.”)

Another small clue about the position of qualitative research in computing appeared in the sample template PDF. The document provided tips on how to format things like figures, tables, and equations. But nowhere in the document could I find a block quote, something that appears often in the papers I write. I’m not suggesting this was an intentional omission, but it made me wonder: did whoever made the template not realize or expect that block quotes would appear in the papers submitted to this conference? Qualitative research contributions are an established part of human-computer interaction research, but that doesn’t mean everyone understands or accepts them.
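For anyone in the same boat: standard LaTeX does support block quotes even when a template doesn’t mention them. A minimal sketch using the built-in quote environment (the quoted excerpt and participant label here are made-up examples, not from my papers):

```latex
% A block quote for an interview excerpt; the quote environment
% indents the text on both sides using the document class's defaults.
\begin{quote}
  I never really thought about where my photos go after
  I post them. (P7, hypothetical participant)
\end{quote}
```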

I knew that before, but after using LaTeX, I have a slightly better sense of why that might be. I work in an interdisciplinary field whose members come from a variety of backgrounds and apply assorted research methods to study computing. Misunderstandings and disagreements are inevitable. Seeing how people whose research approaches differ from mine think helps me understand how to engage with them rather than wonder why they don’t get it.

So thanks, CHI 2019, for pushing me to try LaTeX. My schedule and my brain appreciate it.

Becoming a Scholar

Since starting this PhD program, I’ve wanted to write an academic version of my personal mission statement. I assumed that if I dug deeply enough, or pondered long enough, the contours of Priya-as-scholar would sharpen into focus and reveal where in the realm of knowledge my research fits.

My fixation on “figuring it out” belied an instrumental view of knowledge as a product or an outcome. Yet the philosophical view of knowledge, which is what I’m pursuing as a candidate for the degree of doctor of philosophy, sees knowledge as “necessarily ephemeral and incomplete,…never acquired…only reached proximally” (Barnacle, 2005, pp. 185–186). Being a philosopher, or lover of wisdom, is about pursuing knowledge, not capturing it.

In that spirit, I write this post not to mark an achievement (“I’ve figured it out!”), but to document a process (“This is where I am now”) and to leave breadcrumbs for future reflection (“Here’s the path I’ve taken”).

I first heard the words “epistemology” and “ontology” while sitting in the opening lecture of an introductory government and politics class during the first week of my freshman year of college. The professor might as well have spoken gibberish, for as much as she tried to explain them, nothing stuck. I heard these terms much more after I entered the PhD program, and I’m just beginning to understand what they mean.

I found Michael Crotty’s (1998) book “The Foundations of Social Research” a godsend for navigating the thicket of epistemology. Crotty sets aside ontology and focuses on the research process. He lays out a hierarchy to help readers understand how the abstract informs the granular (and vice versa): Epistemology → Theoretical Perspective → Methodology → Methods. The easiest way for me to relate these concepts to my own scholarship is to move through them in reverse, starting with Methods.

Methods
In my research, I’ve primarily talked to people (through interviews and focus groups) and analyzed texts (including news articles, websites, company policies, blog posts, and social media posts). I’ve occasionally used design methods to work on the development of new technologies or educational resources. I also work with colleagues who use survey methods, though I have not used them in my personal research.

My dissertation focuses on pictures posted on social media, so I’m learning methods to more systematically analyze visual materials. I’m also interested in exploring methods like participant observation and diary studies that focus more on people’s practices.

Methodology
My research explores how information about people flows through digital systems and what that means for privacy. I’m curious about how this plays out in the context of family, primarily pregnancy, parenting, and early childhood. My goal is not to measure variables, to prove hypotheses, or to predict outcomes; my goal is to consider what it means to be a parent, child, or person in a datafied world. Ethnography and discourse analysis resonate with me as ways to do this work because they speak to people’s lived experience as well as broader societal framings.

Theoretical Perspective
In my personal mission statement, I said “I want to understand more about…the physical, internal, societal, and historical forces that have brought me, you, and those around us to this particular moment in time.” I entered the PhD program wanting to do research that put things into context, that traced paths and made connections between different disciplines, topics, or time periods, something I still want. While analyzing data, I’ve focused on creating categories and distilling them into themes, which I then mold into findings that are situated in existing theory or other scholarship. This puts my work squarely in the realm of interpretivism, which Crotty defines as a research approach that “looks for culturally derived and historically situated interpretations of the social life-world” (p. 67).

But after spending time with human rights activists and cultural studies scholars, I’m drawn to more critical orientations to research, particularly post-structuralism, feminism, and post-humanism. This includes Actor-Network Theory (Bruno Latour, John Law), assemblage theory (Gilles Deleuze & Félix Guattari), Foucauldian perspectives, and agential realism (Karen Barad). This is in part because I’m less interested in people and what they think or do, and more interested in how people, technologies, platforms, affordances, and networks come together to produce certain effects.

Epistemology
In my research, I’ve strived to respect the “voice” of my participants while remaining cognizant that I as the researcher am the one interpreting what they say. When I interview someone, I’m not plucking a piece of knowledge that already existed in their brain. I’m having a conversation in which both of us are producing meaning together. This interpretation and shared construction of meaning form the basis of constructionist epistemology, at least the way Crotty defines it.

Consciousness and intentionality lie at the core of constructionism: “When the mind becomes conscious of something, when it ‘knows’ something, it reaches out to, and into, that object” (Crotty, 1998, p. 44). But post-structuralist and post-humanist perspectives reject these framings of consciousness and intentionality, de-centering language (a social structure) or humans as the source of knowledge construction. Crotty describes the epistemology of subjectivism as a subject imposing meaning on an object (p. 9). But my nascent understanding of post-structural and post-human perspectives is that they reject a clear separation between subject/object in the first place. Meaning is not “imposed” on anything, but constituted by the intra-action (Barad, 2003) between various human and non-human actors.

Right now, my work and research approach falls within the constructionist epistemology. I am interested in taking my work in a more post-structural and post-human direction. But writing this has made me realize that doing so requires rethinking my understanding of agency.

Concluding Thoughts
This is the first in an occasional series of posts in which I work through the type of scholar I am and the type of research I do. I initially envisioned these posts as quite future-focused (what do I want to be/do?), but I now write them with the recognition that I’m “always already” there (Barad, 2003).

I thank Kari Kraus and my classmates in INST800, Jason Farman and my classmates in AMST628N, Shannon Jette and my classmates in KNES789N, Annette Markham + Kat Tiidenberg + Dèbora Lanzeni and my classmates in the Digital Media Ethnography workshop, Karen Boyd, Andrew Schrock, Cynthia Wang, Shaun Edmonds, and Eric Stone for indulging me in conversations about theory/method over the past two years.

In addition, I thank the UMD Libraries, the Interlibrary Loan service, this laptop on which I read articles and wrote notes, the printer, paper, and ink that came together to give me physical copies of texts, the pens for enabling me to take notes on those texts, Twitter, Evernote, WordPress, Scrivener, Wi-Fi connections, and finally the desks and chairs on campus, at home, at conferences, and on the Metro and Amtrak that supported my body while I read and wrote.


Barad, K. (2003). Posthumanist performativity: Toward an understanding of how matter comes to matter. Signs: Journal of Women in Culture and Society, 28(3), 801–831.

Barnacle, R. (2005). Research education ontologies: Exploring doctoral becoming. Higher Education Research & Development, 24(2), 179–188.

Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process. London: Sage Publications.

Creating a Productivity System that Works for Me

Like many people, I enjoy having a routine. This summer, after moving from a cubicle into a shared office space, I began going to campus more routinely and working a similar schedule each day. The regular schedule plus the commute activated more natural boundaries around “work” and “home” time. On campus, I focused a bit more and got distracted a bit less. Most important, I felt anchored. I cherish the self-directed and flexible nature of PhD life, but it sometimes left me feeling like a dandelion blowing in the wind.

This new routine has done wonders for my sense of well-being. But it hasn’t done much for my time management skills. I used to think I was great at time management because I always met my deadlines and my expectations. After an exhausting first semester in the PhD program nearly two years ago, I realized I was terrible at time management. The only reason I met my deadlines (and satisfied my perfectionist tendencies) was that I let work take priority over everything else. If I didn’t feel like I had accomplished enough by 5:30, I’d keep working until 9, 10, or 11 pm. If I didn’t feel like I had gotten enough done by Friday evening, I’d let work consume Saturday and/or Sunday. This didn’t leave my body, my mind, or my husband very happy.

Since that realization, I’ve re-framed my attitude toward work (it is an important part of my life, but not the most important) and changed my practices (regularly going to campus). The fall semester started this week, which means goodbye languid summer days, hello bustling campus and fuller schedule. I don’t like feeling overwhelmed by this, and I don’t want to spend the next four months waiting for winter break.

Various productivity systems, designed for academic life and beyond, suggest keeping a detailed schedule or assigning specific tasks to each day. I tried these approaches and found them rigid and stifling. So I’m going to adapt their principles into a system that works for me.

First, I commit to a consistent weekday wake-up and go-to-bed time. My alarm goes off at the same time every weekday, but I snooze it for 5 to 75 minutes. I’d like to limit the snoozing to about 10 minutes. To help with that, I intend to go to bed at a consistent time, and to begin my bedtime routine 30 minutes prior to that bedtime.

Second, I will go to campus on weekdays unless I have a scheduling reason to work from home. My experience this summer reminded me that it’s much easier to treat the PhD as a job when it involves a distinct workplace and a commute.

Third, I’ll restart a practice I followed when I worked full-time — tracking my hours. I was fortunate to have supervisors who let me take comp time if I ever worked more than 40 hours per week, so you bet I tracked my hours. I can get obsessive with practices like this, which is why I refrained from tracking my hours as a PhD student. But since I work on various projects, eagerly say yes to other projects, tend to fall into rabbit holes while working on any project, and am a recovering perfectionist, I think time tracking is essential to improving my time management skills. I keep things simple and do this in a spreadsheet.

Fourth, I’ve created a task management workflow to help me figure out what to work on when. I’ve written a month-by-month list of my commitments, deadlines, and events. At the end of each week, I’ll spend half an hour previewing the next week. I’ll create a to-do list with the tasks that need to be completed that week. I’ll then look at the calendar and schedule time blocks to work on those tasks. As I go through the week, I can move things around if needed. After a few weeks of this, I hope to have a better sense of how much I can accomplish in a typical 40-ish hour week and how much time to budget for certain tasks. This will (hopefully) help me let go of the perfectionist tendencies, resist the temptation of distractions (Twitter, I’m looking at you) and understand the “price” of saying yes to a given task.

Finally, I commit to keep my campus desk tidy. Stalagmites of papers and books make my home desk an uncomfortable place to work, and looking at them unsettles my mind. Yes, I’d like to clean them off, but this is about baby steps. My campus desk is big enough that the two piles that have already sprouted aren’t in the way. I’d like to keep it that way.

So that’s my plan for this semester. Check with me in four months to see how it goes.