Last week, I worked with LaTeX, a formatting system that uses markup language to create documents, for the first time. The experience:
- Was not as complicated as I imagined, and
- Offered a glimpse into how the more technically oriented people in my research field think.
The decision to use LaTeX was not mine. The organizers of a conference where I had a paper accepted not-so-subtly told authors to switch from Microsoft Word to LaTeX (via the Overleaf interface) because the Word template they provided was so dysfunctional.
This understandably upset a lot of people. Many (myself included) had never used LaTeX and the revision period overlapped with the winter holidays. The research community raised valid concerns about the template issues that dogged the entire submission process, and I hope the conference organizers consider them. But that’s not my focus here.
By the time I sat down to re-format my paper, several researchers had voluntarily compiled a Google Doc with detailed, step-by-step instructions on how to transfer papers from Word to Overleaf. The process took me about a day-and-a-half and proceeded more smoothly than I expected. (Seriously, those researchers saved the day with that document.)
Now that I’ve used LaTeX/Overleaf, holy moly, no one should ever typeset a complex document in Microsoft Word again. I can’t believe how many hours of my life I’ve lost tinkering with tables, figures, and columns in that program, trying to divine what random collection of keystrokes and clicks would make everything snap into place on the page.
It’s not that things don’t break in LaTeX; they do. But when they do, I can more easily see why. I can check for missing parentheses or parse the error message and fix the problem. With Word, I have no idea why something breaks, and, more important, little sense of why my actions fixed it.
Part of this facility comes from the fact that I have basic HTML and programming experience, so I generally understand what the tags are trying to do. And the Overleaf interface, which shows the compiled document next to the code, makes it easy to see the results of my typing.
LaTeX sees documents as collections of different types of text. So formatting in LaTeX means defining the different categories, either within the template or by using software packages others have created for LaTeX, and then tagging the text to identify its category. So instead of manually changing the font and size of a heading title, or selecting a heading style in Word, you just type \section before the title and the text automatically formats to the pre-defined font and size.
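As a minimal sketch of what this tagging looks like (the section title here is just a placeholder, not from my actual paper):

```latex
\documentclass{article}
\begin{document}

% Typing \section before the title formats it with the
% pre-defined heading font and size -- no manual styling.
\section{Introduction}
This paragraph is tagged as body text simply by appearing
after the section command; no clicking or font menus needed.

\end{document}
```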
I now get why so many people in computer science and math use LaTeX; it’s a programmer’s approach to formatting. Things (in this case text) belong to certain categories, and the author’s job is to label them. And with this realization, it also became a little clearer to me why some of my CS colleagues might struggle to understand the interpretive and mostly qualitative research I do, or why engineers might overlook social implications when they design technology. I study how people shape technology (and how, in turn, technology shapes them). And people cannot be slotted into categories. (As Mark Zuckerberg said at a tech event in 2016, “The code always does what you want—and people don’t.”)
Another small clue about the position of qualitative research in computing appeared in the sample template PDF. The document provided tips on how to format things like figures, tables, and equations. But nowhere in the document could I find a block quote, something that appears often in the papers I write. I’m not suggesting this was an intentional omission, but it made me wonder: did whoever made the template not realize or expect that block quotes would appear in the papers submitted to this conference? Qualitative research contributions are an established part of human-computer interaction research, but that doesn’t mean everyone understands or accepts this work.
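To be fair to LaTeX itself (as opposed to this particular template), block quotes are built in via the standard quote environment. A minimal example, with an invented participant quote for illustration:

```latex
As one participant put it:
\begin{quote}
I never really thought about who else could see those photos.
\end{quote}
```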
I knew that before, but after using LaTeX, I have a slightly better sense of why that might be. I work in an interdisciplinary field whose members come from a variety of backgrounds and apply assorted research methods to study computing. Misunderstandings and disagreements are inevitable. Seeing how people whose research approaches differ from mine think helps me understand how to engage with them rather than wonder why they don’t get it.
So thanks, CHI2019, for pushing me to try LaTeX. My schedule and my brain appreciate it.
Since starting this PhD program, I’ve wanted to write an academic version of my personal mission statement. I assumed that if I dug deeply enough, or pondered long enough, the contours of Priya-as-scholar would sharpen into focus and reveal where in the realm of knowledge my research fits.
My fixation on “figuring it out” belied an instrumental view of knowledge as a product or an outcome. Yet the philosophical view of knowledge, which is what I’m pursuing as a candidate for the degree of doctor of philosophy, sees knowledge as “necessarily ephemeral and incomplete,…never acquired…only reached proximally” (Barnacle, 2005, pp. 185–186). Being a philosopher, or lover of wisdom, is about pursuing knowledge, not capturing it.
In that spirit, I write this post not to mark an achievement (“I’ve figured it out!”), but to document a process (“This is where I am now”) and to leave breadcrumbs for future reflection (“Here’s the path I’ve taken”).
I first heard the words “epistemology” and “ontology” while sitting in the opening lecture of an introductory government and politics class during the first week of my freshman year of college. The professor might as well have spoken gibberish, for as much as she tried to explain them, nothing stuck. I heard these terms much more after I entered the PhD program, and I’m just beginning to understand what they mean.
I found Michael Crotty’s (1998) book “The Foundations of Social Research” a godsend for navigating the thicket of epistemology. Crotty sets aside ontology and focuses on the research process. He lays out a hierarchy to help readers understand how the abstract informs the granular (and vice versa): Epistemology → Theoretical Perspective → Methodology → Methods. The easiest way for me to relate these concepts to my own scholarship is to move through them in reverse, starting with Methods.
In my research, I’ve primarily talked to people (through interviews and focus groups) and analyzed texts (including news articles, websites, company policies, blog posts, and social media posts). I’ve occasionally used design methods to work on the development of new technologies or educational resources. I also work with colleagues who use survey methods, though I have not used them in my personal research.
My dissertation focuses on pictures posted on social media, so I’m learning methods to more systematically analyze visual materials. I’m also interested in exploring methods like participant observation and diary studies that focus more on people’s practices.
My research explores how information about people flows through digital systems and what that means for privacy. I’m curious about how this plays out in the context of family, primarily pregnancy, parenting, and early childhood. My goal is not to measure variables, to prove hypotheses, or to predict outcomes; my goal is to consider what it means to be a parent, child, or person in a datafied world. Ethnography and discourse analysis resonate with me as ways to do this work because they speak to people’s lived experience as well as broader societal framings.
In my personal mission statement, I said “I want to understand more about…the physical, internal, societal, and historical forces that have brought me, you, and those around us to this particular moment in time.” I entered the PhD program wanting to do research that put things into context, that traced paths and made connections between different disciplines, topics, or time periods, something I still want. While analyzing data, I’ve focused on creating categories and distilling them into themes, which I then mold into findings that are situated in existing theory or other scholarship. This puts my work squarely in the realm of interpretivism, which Crotty defines as a research approach that “looks for culturally derived and historically situated interpretations of the social life-world” (p. 67).
But after spending time with human rights activists and cultural studies scholars, I’m drawn to more critical orientations to research, particularly post-structuralism, feminism, and post-humanism. This includes Actor-Network Theory (Bruno Latour, John Law), assemblage theory (Gilles Deleuze & Félix Guattari), Foucauldian perspectives, and agential realism (Karen Barad). This is in part because I’m less interested in people and what they think or do, and more interested in how people, technologies, platforms, affordances, and networks come together to produce certain effects.
In my research, I’ve strived to respect the “voice” of my participants while remaining cognizant that I as the researcher am the one interpreting what they say. When I interview someone, I’m not plucking a piece of knowledge that already existed in their brain. I’m having a conversation in which both of us are producing meaning together. This interpretation and shared construction of meaning form the basis of constructionist epistemology, at least the way Crotty defines it.
Consciousness and intentionality lie at the core of constructionism: “When the mind becomes conscious of something, when it ‘knows’ something, it reaches out to, and into, that object” (Crotty, 1998, p. 44). But post-structuralist and post-humanist perspectives reject these framings of consciousness and intentionality, de-centering language (a social structure) or humans as the source of knowledge construction. Crotty describes the epistemology of subjectivism as a subject imposing meaning on an object (p. 9). But my nascent understanding of post-structural and post-human perspectives is that they reject a clear separation between subject/object in the first place. Meaning is not “imposed” on anything, but constituted by the intra-action (Barad, 2003) between various human and non-human actors.
Right now, my work and research approach fall within the constructionist epistemology. I am interested in taking my work in a more post-structural and post-human direction. But writing this has made me realize that doing so requires rethinking my understanding of agency.
This is the first in an occasional series of posts in which I work through the type of scholar I am and the type of research I do. I initially envisioned these posts as quite future-focused (what do I want to be/do), but I now write them with the recognition that I’m “always already” there (Barad, 2003).
I thank Kari Kraus and my classmates in INST800, Jason Farman and my classmates in AMST628N, Shannon Jette and my classmates in KNES789N, Annette Markham + Kat Tiidenberg + Dèbora Lanzeni and my classmates in the Digital Media Ethnography workshop, Karen Boyd, Andrew Schrock, Cynthia Wang, Shaun Edmonds, and Eric Stone for indulging me in conversations about theory/method over the past two years.
In addition, I thank the UMD Libraries, the Interlibrary Loan service, this laptop on which I read articles and wrote notes, the printer, paper, and ink that came together to give me physical copies of texts, the pens for enabling me to take notes on those texts, Twitter, Evernote, WordPress, Scrivener, Wi-Fi connections, and finally the desks and chairs on campus, at home, at conferences, and on the Metro and Amtrak that supported my body while I read and wrote.
Barad, K. (2003). Posthumanist performativity: Toward an understanding of how matter comes to matter. Signs: Journal of Women in Culture and Society, 28(3), 801–831.
Barnacle, R. (2005). Research education ontologies: Exploring doctoral becoming. Higher Education Research & Development, 24(2), 179–188.
Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process. London: Sage Publications.
A few weeks ago I played with a cousin’s four-year-old daughter.
“Look at the map!” she cried, laying out a Lego theme park map on the ground. “First we have to go through the ninjas, then we have to go get Hercules.” We ran around the park, her one-year-old brother wandering as we zipped by. She kept yelling out commands, and I’d ask her what we had to do next.
Suddenly, this morphed into a game of whales and humans. Namely, we kept being turned into whales and needing to turn back into humans. We did so by “eating” ice cubes that we had dumped on the ground. Every time she pointed at me, I bellowed like a whale.
Eventually, another child joined in and this turned into a game of house. We each claimed a tree and rushed back and forth, bringing each other cakes and omelettes and turning into superheroes and stealing each other’s cars.
Preschool-age children are full of energy and love make-believe. Engaging with them can be exhausting, but I found it freeing. I could say or do anything and she’d run with it. I declared we’d found a safe zone, I taped silly glasses to my face, I made strange noises, and she took it all in stride.
A few years from now, she’ll outgrow this tendency toward fantastical exuberance. She’ll go to school and focus on learning how the world works and managing a social life. I won’t be able to run up to her and say, “Bloooooooop” with pink flowery glasses taped to my face. In Deleuzian terms, her attention will be territorialized — marshaled into familiar, conventional, and normalized ways of thinking.
I feel myself making a similar transition from doctoral student to ABD. Since advancing to candidacy in May, I’ve been excited to write my dissertation proposal. Finally, I get to focus on the whole reason why I entered the PhD program in the first place. But writing the proposal means I have to focus. The floodlight of my academic gaze must sharpen into a spotlight. And that makes me a little sad.
I now look back on the past year and see it as a period of academic fantastical exuberance.
What’s epistemology? What’s ontology? What did Foucault and Latour and Haraway and Barad say? What’s my theoretical perspective? I’m an interpretivist. No, maybe I’m a critical researcher. Ooh, what is post-humanism? What happens if I drop terms like assemblage and actant and materiality into conversation? Do I want to do ethnography? Is my work content analysis or textual analysis? What discipline am I in? I’m in social computing. No, computer-mediated communication. Ooh, what if I call myself an internet studies researcher. Maybe I’m in media studies.
Hey, look at all these citations! What’s that? And that one? And that one? Let me download the article right now. And these books, let me get them from the library. Oooh look, these other books seem relevant too; let me check them all out. Here’s some sociology, some anthropology, some communication, some feminist scholarship, science and technology studies work, some cultural studies, ooh, political theory, whoa, I didn’t expect to go there, but OK, sure.
What has agency? Hmm, I didn’t think that was relevant to my research, but alright, let’s do that. Oh, you want to deconstruct binaries. Fine. Subject/Object. Nature/Culture. Human/Thing.
And I’ve loved it. Loved it, loved it, loved it. But I can’t possibly read everything I’ve downloaded and checked out before writing the dissertation proposal. I can’t trace the history of all the theories I’ve started learning about.
Yes, the reading and the note-taking and the conversations will continue. But if I want to stay on track, more of that work needs a clearer purpose than simply, “Oooh, that sounds interesting.”
I’m hitting a new academic stage, and it’s time to focus.
In the first sentence of his chapter “On Recalling ANT,” Bruno Latour lists four things that “do not work” with actor-network theory: “the word actor, the word network, the word theory and the hyphen! Four nails in the coffin” (p. 15). Apart from its bluntness, this sentence stands out because Latour is one of the creators of actor-network theory.
Any theory has its proponents and critics. One person finds a particular theory useful to inform a worldview or structure a research project, while another finds that theory bunk and proclaims so loudly. Such is the world of academia. We read what others have written, we think about it, sometimes supporting that thinking by studying something empirically, and we write. Often, our writing engages directly with what others have written — affirming, refuting, re-interpreting, critiquing the work of others, taking their work in a new direction, arguing that their work is flawed.
It is a world I happily inhabit, largely because of this focus on writing. I wrote poems and novels as a child, studied journalism as an undergraduate, and consider writing for the public a core part of my personal mission. I’m a published writer, and I don’t fear being published. I entered a PhD program in part to connect my writing with theory, yet I also approached this work with some trepidation.
Actor-network theory is well known and widespread. It is also recent enough that the people who created it are still alive, meaning they can see and respond to the ways that others use or critique their theory. While reading “On Recalling ANT,” I envisioned myself in Latour’s shoes. How would I feel if I invested countless hours into developing a theory, only to see others misunderstand or misinterpret it, or take it in directions that go against its original intention? How would I respond if people challenged my work or pointed out its flaws? Publishing theoretically informed research felt more vulnerable to me than other types of writing because it meant opening myself to critiques that my thinking was wrong. Which to me felt tantamount to critiquing my existence.
Last year, I wrote a paper in which I took a theory in a different direction. I wasn’t sure whether I correctly applied the theory, but the paper survived the peer review process and was published. Shortly after publication, I was invited to present the paper at a workshop that the theory’s founder helped organize. If I had applied the theory incorrectly, I’d find out now. The theorist gently critiqued a few other presentations, but mine proceeded unscathed. Though relieved, I wondered what would happen the next time I used a theory in my work. And the next time. Approaching my research with trepidation seemed exhausting.
Earlier this year, I found my way out of the trepidation trap. I was at another workshop, this time sitting in the audience listening to a theorist whose work has resonated with me for years. While summarizing her research, she remarked, offhandedly, “I’m still figuring out” the theory.
This stunned me. This theorist has published books and articles on this theory; her name is almost synonymous with it. If she’s still figuring it out, that means that any of us who use the theory are also figuring it out. And that means there’s no one “correct” way to interpret or apply the theory.
Someone may dedicate their entire professional life to developing a particular idea; indeed, their name may become synonymous with the idea. But ideas don’t belong to one person. The acknowledgements page in any book reveals the multitude of people involved in developing an idea. And, in the spirit of ANT, we must not forget that non-human actors also play a role. The library, my computer, and my glasses deserve as much credit as the people around me for bringing ideas to fruition.
So if ideas exist separately from authors, then critiques of ideas exist separately from critiques of authors. There’s no reason for me to equate a critique of my thinking to a critique of my existence. I escape the trepidation trap by letting go. I let go of the assumption that thoughts define me. I let go of the sense that there’s a “right” answer. And most important, I let go of the fear.
After re-reading the theory I used in the paper last year, and reading other work that engaged with the theory, I recently wrote another paper on this theory. In it, I critiqued my prior use of the theory and offered a more nuanced analysis. I felt comfortable doing so because theoretically informed research is not some pedestal I’m trying to climb onto. It’s a messy, iterative practice, just like everything else in life. We read, we write, we engage, we reflect, we read more, we write more, we revise, we clarify. We change. Bruno Latour himself moved away from the social constructionist views that pervaded his earlier work.
But separating the idea from the author is not a license to disengage. On the contrary — Latour ended his chapter, “On Recalling ANT” with this:
“[Y]ou cannot do to ideas what auto manufacturers do with badly conceived cars: you cannot recall them all by sending advertisements to the owners, retrofitting them with improved engines or parts, and sending them back again, all for free. Once launched in this unplanned and uncharted experiment in collective philosophy there is no way to retract and once again be modest. The only solution is to do what Victor Frankenstein did not do, that is, not to abandon the creature to its fate but continue all the way in developing its strange potential” (p. 24).
And that strange potential includes possibilities for illumination, not just openings for critique. One person recently told me they found the theory paper I published last year helpful because they had never thought of using the theory in that way. And that, more than anything else, is why I do this work in the first place.
It happened. The crack, when “you can no longer stand what you put up with before, even yesterday” (Deleuze & Parnet quoted in Jackson & Mazzei, 2013); when “one can no longer think things as one formerly thought them, [and] transformation becomes both very urgent, very difficult and quite possible” (Foucault, quoted in St. Pierre, 2014).
For the past several months, I’ve been trying to understand epistemology and ontology — what they mean, what they mean for me, and what they mean for my research. I read “The Foundations of Social Research” by Michael Crotty. I read “The Body Multiple” by Annemarie Mol. I read other articles, mostly from communication, science and technology studies, and cultural studies.
I continued to analyze quantitative and qualitative data and write papers, but I felt increasingly perturbed, as if this work wasn’t adequately capturing what the research team said we were studying. I kept spouting my one-line summary of my dissertation research: “I study how parents post pictures of their kids online and what that means for kids’ identity development, sense of self, and understanding of privacy,” even after realizing that I’m not actually studying parents or kids or online or identity.
Epistemologically, I sensed that I wasn’t a positivist, but I couldn’t figure out whether I was a constructionist or a subjectivist. Theoretically, I didn’t think I was an objectivist, and I sensed that I might be an interpretivist who could one day become a critical scholar. I could be doing phenomenology, or potentially hermeneutics, or maybe symbolic interactionist work. I remained unsure of aligning myself with any methodology besides “qualitative,” which I do primarily through the methods of interviews and textual analysis.
Today, all of that fell apart and also came into sharp relief, thanks to readings on “New Materialism” and a conversation at the weekly journal club of my university’s physical cultural studies program.
I realized that I’ve been using what St. Pierre (2016) calls “conventional humanist qualitative methodology” — interviewing people and coding the data as a way to capture some aspect of their lived experience. I thought I’d sidestepped positivism because I don’t offer “hypotheses,” don’t calculate “inter-rater reliability,” and don’t purport to “predict” behavior. But I do define “research questions,” collect “data,” and code it to fill a “gap in the knowledge” — all trappings of logical positivism.
And this would be fine, except that I’m also discussing Foucauldian analysis and Actor-Network Theory and assemblage. And I dig it. It resonates with me and it’s informing how I approach my dissertation. So no wonder the conventional research process I’m using feels stale — it does not align with the new (to me) theory/methods that are shaping the way I understand the world and the research that occurs in it.
But doing research from a Foucauldian or ANT or assemblage (or feminist or queer or post-structuralist or post-colonial or post-humanist or…) perspective requires more than rethinking methods. It means letting go of a belief that any research, no matter how rigorously or reflexively it is done, can capture what is “going on.” It means accepting that research, and the production of knowledge, is always partial, always incomplete. It means that no matter how precisely or evocatively we write about our research, it remains a semblance.
But nevertheless, I feel this work is urgent. I know this work is difficult. And yes, I believe that it is possible.
(And while I will continue doing research in the conventional humanist qualitative vein, because learning new things takes time, Jackson & Mazzei show how to use these theories to think with typical interview data “within and against interpretivism.”)
A fellow graduate student recently asked me how I approach literature reviews. This question of how to find, read, and synthesize a body (or more) of research is central to producing good academic work. Yet it brings to mind Bellatrix Lestrange’s vault in Gringotts, where every paper you read yields six more until you’re neck deep with no foreseeable way out.
When I first started studying parents and social media use, I was content with Irwin Altman’s definition of privacy as controlling access to the self. Digging deeper, I learned to think of privacy as contextual integrity (thanks to Helen Nissenbaum) and as boundary management (thanks to Sandra Petronio). As I continued studying privacy over the years, I learned that lawyers, psychologists, communication scholars, economists, and computer scientists all conceptualize privacy in different ways. During my first year in the PhD, I considered creating a disciplinary map of privacy for a class project but quickly realized that was a much bigger undertaking than I imagined.
I’ve grown familiar with the feeling. I took a seminar with Jason Farman on “Place, Space, and Identity in the Digital Age,” and saw that entire careers can be (and have been) built around each of these concepts. Place isn’t just a label on a physical space; it’s objects and bodies and relationships and memories and information flows and more coming together in a particular arrangement at a particular moment. Identity isn’t just a list of demographic characteristics; it’s the facets, fragments, memories, experiences, beliefs, roles, imaginaries and more that constantly intersect and intertwine into you. And this morning, while reading John Law’s “Objects and Spaces,” I realized that we can’t even take physical, 3-D, Euclidean space as a given.
It’s easy to see moments like these as overwhelming, paralyzing even. Especially when you do interdisciplinary research and plan to borrow theories and methods from other disciplines. Or to see these moments as challenges, as piles of reading to conquer so that you can one day claim the prize of “knowing” something.
But these moments keep happening. So the options are to feel constantly overwhelmed or to see grand quests pile up, neither of which is healthy (or encouraging). I’ve come to an alternate response after starting a daily meditation practice: Let it go.
Let go of the overwhelm. Let go of the fear. Let go of the burden. Worried you don’t have time to read everything? Let it go. Concerned that you might overlook something? Let it go. Dreading the moment another scholar tells you, “Yeah, but what about [totally separate body of work that may or may not be relevant to your topic]?” Let it go.
It sounds simple, I know. But these three words, combined with the acknowledgement, acceptance, and even embrace of the vast, unimaginable, and ultimately unknowable amount of prior work out there is freeing.
I spent all day brainstorming the verb for this post’s title. When I do literature reviews, and when I do research in general, I want to assume mental paralysis. Meaning, I want to assume that I will experience moments of mental paralysis, of viewing the work ahead as a sheer, insurmountable rock wall I somehow have to climb, as a tangled thicket in dark jungle through which I have to chop my way out.
But I also want to take up the mental paralysis, to wear it as a badge, to make it part of me. Because even after I climb this wall or chop through those vines, there will be another wall, another tangle. And by accepting that, I hope to take greater joy in those moments when I DO learn something, when a concept finally DOES click in my head, even if it falls apart again a moment later. By acknowledging and expecting the complexity, I release the sense that I need to master it, to someday “figure it out.”
And that, I suppose, is how I approach literature reviews.
(Oh, and for anyone who wants actual advice on how to do a literature review, Raul Pacheco-Vega has a series of relevant blog posts.)
When I was 13 or 14, my parents gave me “The 7 Habits of Highly Effective Teens” by Sean Covey for Christmas. I devoured the book, re-reading it for the next several years. It was the first book in which I highlighted, dog-eared, and wrote notes directly on the pages.
Habit 2 encouraged readers to write a personal mission statement. I loved the idea but never wrote anything of consequence. Now, having accumulated several more years of life experience, I feel more equipped to write that statement.
The sentiment of my mission coalesced largely over the past six years. The transition from college to work to graduate school to now was difficult and enlightening. I finally have a sense of what I want to accomplish, yet I feel secure enough with myself to accept that may evolve.
So, what’s my mission?
I examine the forces that shape our lives and share that knowledge with the public.
This mission highlights what fascinates me and what I want to do with that knowledge. I am a writer, researcher, and storyteller at heart, and I aspire to write a book one day. In the interest of focusing on systems rather than goals, I aim to write pieces that people can point to and say, “I learned something from that.”
My professional and amateur interests span astronomy, psychology, Internet studies, and history — disparate disciplines bound by a common thread of humanity.
Like many people, I’m struck with awe every time I look up at the night sky. So much exists out there, and while science has enabled us to learn a tremendous amount about what’s up there, it’s impossible (for now) to travel across light years or stand on the event horizon of a black hole. So, why does astronomy matter?
Because every particle that makes up every human being on the planet comes from the stars in that sky. The universe began with hydrogen, a smattering of helium and a smidgen of lithium. All other elements in the periodic table, including the carbon that forms the basis of life as we know it, emerged from nuclear fusion occurring in the cores of stars and in the aftermath of star explosions. Everything that’s inside you comes from up there.
What goes on inside us, particularly our brains, also captivates me. While we don’t have to think about telling our body to breathe air, pump blood, or digest food, our thoughts drive so much of our behavior. And while thought processes may feel automatic, they’re malleable and well within our control. Figuring out how to change the way we think and implementing those changes isn’t easy. But I take comfort in the paradoxical notion that while I can’t control anything outside my own mind, taking control of my own mind grants me boundless potential to construct a fulfilling life.
Nowadays, that life is not just experienced; it is increasingly documented by digital technology that creeps deeper into our daily lives. Personal and sensitive communications, ranging from text messages to financial transactions to data points about our physical activities, flow through privately owned networks and sit on servers operated by companies that have wide latitude to use that data as they see fit. We as individuals must ensure that this emerging ecosystem of networked digital technology benefits, rather than restricts, us.
To do so, I think it’s important to put this moment in historical context. The human race has advanced tremendously over its existence on this planet. Look around you. So much of what you see and feel was designed or affected by humans. Buildings, roads, cars, books, families, music, math, elections, and the disease-resistant tomatoes in your fridge are the result of human activity.
Even if you’re sitting in the middle of an ocean, forest, desert, or glacier, the device (or perhaps piece of paper) on which you’re reading these words was invented by humans. The language you’re reading right now, the shapes of the letters, and the grammatical rules that render these words meaningful were developed by humans.
This point reverberated while I recently read Amsterdam: A History of the World’s Most Liberal City. As author Russell Shorto described how the philosopher Baruch Spinoza first posited that church and state could exist as separate entities, it hit me in my gut that values, principles, and norms change. That there was a time when people truly believed that dark-skinned humans were inferior. That 100 years ago, women in the United States had no right to vote. That the notion of “this is just how things are” is simply not true. History is not facts and timelines; history is about moments and people who seize those moments and make them matter. History is learning how people have harnessed their potential and applying those lessons to the present day.
As I move through life, I want to understand more about these forces, the physical, internal, societal, and historical forces that have brought me, you, and those around us to this particular moment in time. And if in that process, I say something that makes you go, “Hmm, I never thought of that,” well then, mission accomplished.
This post also appears on Medium.
Last week, check marks sprouted next to two items on my bucket list: earn a graduate degree and complete an individual thesis.
Before embarking on both journeys, I knew I loved to research and write. I felt like my mind, fascinated by such topics as journalism, astronomy, neuroscience, and colonial-era U.S. history, embodied the aphorism that a journalist’s expertise is a mile wide and an inch deep. Two years after becoming a student at the University of Michigan School of Information, I have discovered where I want to go deep.
I want to understand how digital technology affects our relationships with ourselves, our significant others, our kids, our parents, our friends (and Friends), our governments, our devices, and the companies that manufacture those devices and harvest the data they so dutifully collect.
I’m a Millennial. I hand-wrote book reports in elementary school and made science projects out of cardboard and foam. My family bought a computer when I was nine years old, and I began typing my school assignments because tapping the keys was more fun than scrawling the pencil across the page. As a high schooler I conversed with friends over AIM; as a college student I was among the first generation to latch my social life to Facebook. I studied journalism as an undergraduate and watched digital technology pull the rug out from under that industry right as I graduated and faced “the real world.”
I cannot imagine my life without digital technology. But I also wonder whether and how it is changing the way we live. Excited by our ability to capture, store, and disseminate large amounts of data, I designed my own curriculum in data storytelling to learn the basics of programming and design and apply those skills to the art of storytelling. The idea that people could use data to discover personal information (e.g., someone’s pregnancy) captivated me.
This became the basis for my thesis research in which I interviewed new mothers about their decisions to post baby pictures on Facebook. I had begun seeing baby pictures on my own Facebook News Feed, and I was curious whether the question of what to post and not post online entered new mothers’ minds.
As I was wrapping up one research interview a few months ago, the participant asked what I was studying.
“Data storytelling,” I replied, launching into my well-rehearsed, 30-second definition of this field of study.
“I feel like Facebook is the definition of data storytelling,” she said. “I am telling my life story in the way that I want to. … And it’s all data. … That’s, like, the perfect thesis for what you’re studying.”
Her statement comforted me because I, for some reason, had equated data storytelling with working with numbers. But data is data, whether words or numbers. My thesis distilled more than 400 pages of interview transcripts into a story about what types of pictures new mothers do and don’t post online, as well as what factors influence their decisions.
The most rewarding aspect of completing this degree and this thesis has been hearing people’s enthusiasm and encouragement when I tell them what I’m doing. It is so exciting to believe you’re helping to make sense of what feels like a rapidly changing world, but also to realize that while the circumstances in which you’re asking the questions may be changing, the questions themselves are timeless. In the case of my thesis, taking baby pictures is nothing new, but broadcasting them to an audience of hundreds is.
One of my professors quoted a colleague of hers as saying, “Graduate school was when they stopped asking me the questions they already know the answers to.” In my time at UMSI, I’ve helped to answer some of those unanswered questions. I’m leaving campus with a better sense of what questions I want to ask of the world moving forward.
Learn to code? The question populated headlines this year. The Atlantic’s Olga Khazan set journalists a-Twitter after pronouncing that journalism schools should not require students to “learn code.” She insisted her opposition extended to HTML and CSS, not data journalism, data analysis, or data visualization, making her post’s headline feel misleading given that those can require learning code.
Sean Mussenden of the American Journalism Review concisely expressed what I thought when reading Khazan’s piece. I fact-checked AJR articles in college, and tricking my brain into thinking I was fact-checking is the only thing that saved me from hurling a rock at my laptop while coding.
Four months ago I was a coding newbie. My crowning achievement was a Python script that determined whether a given string of text was of Tweet-able length. By December, I had cleaned and manipulated datasets in Python, created heat maps and scree plots in R, designed map visualizations in D3, and analyzed my Facebook and Twitter data. I needed the structure and graded homework assignments that graduate school courses in data manipulation, exploratory data analysis, and information visualization offered, but I wouldn’t have survived those classes without the wealth of resources on the Interwebz. The lessons I absorbed may help you meet your code-learning resolutions.
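(For the curious: a tweet-length checker really is a beginner-sized program. The sketch below is a hypothetical reconstruction, not my original script; the 140-character limit reflects Twitter’s rule at the time.)

```python
# Hypothetical sketch of a tweet-length checker (not the original script).
TWEET_LIMIT = 140  # Twitter's character limit at the time

def is_tweetable(text):
    """Return True if the text fits within a single tweet."""
    return len(text) <= TWEET_LIMIT

print(is_tweetable("Hello, world!"))  # a short string fits
print(is_tweetable("x" * 200))        # 200 characters does not
```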
1. Find a tutorial that works for you
Free online tutorials abound. Shop around, take what works, and leave what doesn’t. I’m not suggesting giving up at the first sign of difficulty. Coding is hard, frustrating, tedious, and time-consuming. But it won’t always be. Rewards, even just the personal satisfaction of overcoming challenges, await those patient enough to try. Sink your time into a tutorial that fits your learning style and avoid wasting time on one that doesn’t. Last January I enrolled in a Coursera class on data analysis in R. The description said a programming background was helpful but not required. A week into the course, it was clear: a programming background was definitely required. I couldn’t afford to spend 10 hours on assignments I didn’t understand, so I stopped.
2. Google is your friend
Tutorials won’t give you all the information you need, but Google can help. Paste your error message into the search bar to get a sense of what went wrong. Or (and I found this more effective) type what you’re trying to accomplish. Even the craziest phrase (“after splitting elements in lines in python, keep elements together in for loop”) will get you somewhere. People often share snippets of code on forums like Stack Overflow. Test their code on your machine and see what happens. Debugging is a random walk, requiring you to chase links and try several strategies before that glorious moment when the code finally listens to you. Don’t worry. You’re learning even when you’re doing it wrong.
3. But people are your best friend
I tweeted my frustration with the Coursera class last January. To my surprise, digital storyteller Amanda Hickman responded to my tweets and set up a Tumblr to walk me through the basics of R Studio. People want to help, and their help will get you through the frustration of learning to code. This semester I saw the graduate student instructor nearly every week during office hours, bringing him the specific or conceptual questions that tutorials and Google couldn’t explain to me. When you get stuck, reach out. Ask that cousin who works in IT to help you debug something. Post on social media that you’re looking for help. Use Meetup to find fellow coders with whom you can meet face-to-face. Find groups like PyLadies (for Python) and go to their meetings. Don’t let impostor syndrome, or the feeling that you’re not really a “coder,” stop you. You are a coder.
4. Take breaks
My first coding professor said, “Don’t spend hours on a coding problem. Take a break and return when your mind is fresh.” LISTEN TO HIM. More than once, I sank six or seven hours into trying to debug code, only to collapse into bed and then solve the problem within an hour the next morning. When coding threatens to consume your life (or unleash dormant violent tendencies), say, “Eff this for now” and take a well-deserved break.