Why Deepak Chopra is Wrong About Technology

One reason I value my newspaper subscription is that it reminds me not to take things for granted. Especially when it comes to technology.

In a recent column, the Washington Post’s Geoffrey Fowler recounted a conversation with alternative medicine advocate Deepak Chopra. Chopra has been criticized for promoting medical treatments based on pseudoscience, and his views on technology seem to be just as misguided.

“Technology is neutral, number one. Number two, it’s unstoppable,” Chopra told Fowler as they walked through the tech industry’s trade show, CES, earlier this month.

No, and no.

Technology does not just fall from the sky. People create it. Which means that our human frailties get baked right in. Type a question into a search engine, and the results can be just as racist as the responses you might get from people. Use a mathematical model to make lending decisions, and the output can be just as discriminatory as if you ask a loan officer.

People create technology, which means people can also address problems that result from (or are exacerbated by) technology. The question is who is responsible for doing so. Chopra lays that burden squarely on everyday users, blaming them for succumbing to technology’s power.

“I think technology has created a lot of stress for a lot of people, but that’s not the fault of technology,” Chopra told Fowler. “It’s the fault of the people who use technology.”

Sure, we could all be more intentional in our technology use. But absolving technology of any role in stress is disingenuous. It ignores the fact that the people who create the digital technologies we use every day design them to be persuasive. To Chopra, this isn’t the problem, but the solution; the app Chopra developed and the company he advises also employ persuasive design principles to hook people into using their products.

Chopra thinks technology will support well-being by collecting data and using it to optimize our environmental conditions, by, for example, changing the lighting in your house.

“So we just have to accept more surveillance as the price of this way of living?” Fowler asks. “For the advancement of your well-being, what’s wrong with that?” Chopra responds.

A lot.

First, this kind of blind faith in technology prevents us from talking about the limits of what technology can and should do. Second, it ignores the fact that using data-driven systems to address social problems too often ends up penalizing poor and working-class people. And third, it perpetuates a belief that human experience is simply raw material for companies to datify and exploit for financial gain.

Digital technology is, as Chopra says, “here to stay.” But this flawed understanding of technology needs to go.



LaTeX: A Window onto Another Way of Thinking

Last week, for the first time, I worked with LaTeX, a typesetting system that uses a markup language to create documents. The experience was:

  1. Not as complicated as I imagined, and
  2. A glimpse into how the more technically oriented people in my research field think.

The decision to use LaTeX was not mine. The organizers of a conference where I had a paper accepted not-so-subtly told authors to switch from Microsoft Word to LaTeX (via the Overleaf interface) because the Word template they provided was so dysfunctional.

This understandably upset a lot of people. Many (myself included) had never used LaTeX, and the revision period overlapped with the winter holidays. The research community raised valid concerns about the template issues that dogged the entire submission process, and I hope the conference organizers consider them. But that’s not my focus here.

By the time I sat down to re-format my paper, several researchers had voluntarily compiled a Google Doc with detailed, step-by-step instructions on how to transfer papers from Word to Overleaf. The process took me about a day-and-a-half and proceeded more smoothly than I expected. (Seriously, those researchers saved the day with that document.)

Now that I’ve used LaTeX/Overleaf, holy moly, no one should ever typeset a complex document in Microsoft Word again. I can’t believe how many hours of my life I’ve lost tinkering with tables, figures, and columns in that program, trying to divine what random collection of keystrokes and clicks would make everything snap into place on the page.

It’s not that things don’t break in LaTeX; they do. But when they do, I can more easily see why. I can check for a missing brace or parenthesis, or parse the error message, and fix the problem. With Word, I have no idea why something breaks and, more important, little sense of why my actions fixed it.
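To give a sense of what an intelligible failure looks like, here’s a hypothetical example (the command, line number, and output are reconstructed from memory, so treat them as approximate). Misspell \section as \secton, and LaTeX names the problem and points at the offending line:

    ! Undefined control sequence.
    l.12 \secton
                {Introduction}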

Part of this facility comes from the fact that I have basic HTML and programming experience, so I generally understand what the tags are trying to do. And the Overleaf interface, which shows the compiled document next to the code, makes it easy to see the results of my typing.

LaTeX sees documents as collections of different types of text. So formatting in LaTeX means defining the different categories, either within the template or by using software packages others have created for LaTeX, and then tagging the text to identify its category. So instead of manually changing the font and size of a heading title, or selecting a heading style in Word, you just wrap the title in a \section{} command and the text automatically formats to the pre-defined font and size.
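A minimal sketch of what this looks like (the titles and sentences are placeholders I made up):

    \documentclass{article}
    \begin{document}

    % Tag the text with its category; the document class decides
    % the font, size, spacing, and numbering.
    \section{Introduction}
    This sentence is automatically set as body text, and the
    heading above it is automatically numbered and styled.

    \subsection{Background}
    Sub-headings work the same way: label the category, and the
    formatting follows.

    \end{document}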

I now get why so many people in computer science and math use LaTeX; it’s a programmer’s approach to formatting. Things (in this case text) belong to certain categories, and the author’s job is to label them. And with this realization, it also became a little clearer to me why some of my CS colleagues might struggle to understand the interpretive and mostly qualitative research I do, or why engineers might overlook social implications when they design technology. I study how people shape technology (and how, in turn, technology shapes them). And people cannot be slotted into categories. (As Mark Zuckerberg said at a tech event in 2016, “The code always does what you want—and people don’t.”)

Another small clue about the position of qualitative research in computing appeared in the sample template PDF. The document provided tips on how to format things like figures, tables, and equations. But nowhere in the document could I find a block quote, something that appears often in the papers I write. I’m not suggesting this was an intentional omission, but it made me wonder: did whoever made the template not realize or expect that block quotes would appear in the papers submitted to this conference? Qualitative research contributions are an established part of human-computer interaction research, but that doesn’t mean everyone understands or accepts them.
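For what it’s worth, standard LaTeX can typeset block quotes even when a template never mentions them; the built-in quote environment does the job. A minimal sketch, with an invented participant quote as placeholder text:

    As one participant put it:

    \begin{quote}
    I never really thought about who else could see the photos.
    I just wanted to share them with family.
    \end{quote}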

I knew that before, but after using LaTeX, I have a slightly better sense of why that might be. I work in an interdisciplinary field whose members come from a variety of backgrounds and apply assorted research methods to study computing. Misunderstandings and disagreements are inevitable. Seeing how people whose research approaches differ from mine think helps me understand how to engage with them rather than wonder why they don’t get it.

So thanks, CHI2019, for pushing me to try LaTeX. My schedule and my brain appreciate it.

Becoming a Scholar

Since starting this PhD program, I’ve wanted to write an academic version of my personal mission statement. I assumed that if I dug deeply enough, or pondered long enough, the contours of Priya-as-scholar would sharpen into focus and reveal where in the realm of knowledge my research fits.

My fixation on “figuring it out” belied an instrumental view of knowledge as a product or an outcome. Yet the philosophical view of knowledge, which is what I’m pursuing as a candidate for the degree of doctor of philosophy, sees knowledge as “necessarily ephemeral and incomplete,…never acquired…only reached proximally” (Barnacle, 2005, pp. 185–186). Being a philosopher, or lover of wisdom, is about pursuing knowledge, not capturing it.

In that spirit, I write this post not to mark an achievement (“I’ve figured it out!”), but to document a process (“This is where I am now”) and to leave breadcrumbs for future reflection (“Here’s the path I’ve taken”).

I first heard the words “epistemology” and “ontology” while sitting in the opening lecture of an introductory government and politics class during the first week of my freshman year of college. The professor might as well have spoken gibberish, for as much as she tried to explain them, nothing stuck. I heard these terms much more after I entered the PhD program, and I’m just beginning to understand what they mean.

I found Michael Crotty’s (1998) book “The Foundations of Social Research” a godsend for navigating the thicket of epistemology. Crotty sets aside ontology and focuses on the research process. He lays out a hierarchy to help readers understand how the abstract informs the granular (and vice versa): Epistemology → Theoretical Perspective → Methodology → Methods. The easiest way for me to relate these concepts to my own scholarship is to move through them in reverse, starting with Methods.

Methods
In my research, I’ve primarily talked to people (through interviews and focus groups) and analyzed texts (including news articles, websites, company policies, blog posts, and social media posts). I’ve occasionally used design methods to work on the development of new technologies or educational resources. I also work with colleagues who use survey methods, though I have not used them in my personal research.

My dissertation focuses on pictures posted on social media, so I’m learning methods to more systematically analyze visual materials. I’m also interested in exploring methods like participant observation and diary studies that focus more on people’s practices.

Methodology
My research explores how information about people flows through digital systems and what that means for privacy. I’m curious about how this plays out in the context of family, primarily pregnancy, parenting, and early childhood. My goal is not to measure variables, to prove hypotheses, or to predict outcomes; my goal is to consider what it means to be a parent, child, or person in a datafied world. Ethnography and discourse analysis resonate with me as ways to do this work because they speak to people’s lived experience as well as broader societal framings.

Theoretical Perspective
In my personal mission statement, I said “I want to understand more about…the physical, internal, societal, and historical forces that have brought me, you, and those around us to this particular moment in time.” I entered the PhD program wanting to do research that put things into context, that traced paths and made connections between different disciplines, topics, or time periods, something I still want. While analyzing data, I’ve focused on creating categories and distilling them into themes, which I then mold into findings that are situated in existing theory or other scholarship. This puts my work squarely in the realm of interpretivism, which Crotty defines as a research approach that “looks for culturally derived and historically situated interpretations of the social life-world” (p. 67).

But after spending time with human rights activists and cultural studies scholars, I’m drawn to more critical orientations to research, particularly post-structuralism, feminism, and post-humanism. This includes Actor-Network Theory (Bruno Latour, John Law), assemblage theory (Gilles Deleuze & Félix Guattari), Foucauldian perspectives, and agential realism (Karen Barad). This is in part because I’m less interested in people and what they think or do, and more interested in how people, technologies, platforms, affordances, and networks come together to produce certain effects.

Epistemology
In my research, I’ve strived to respect the “voice” of my participants while remaining cognizant that I as the researcher am the one interpreting what they say. When I interview someone, I’m not plucking a piece of knowledge that already existed in their brain. I’m having a conversation in which both of us are producing meaning together. This interpretation and shared construction of meaning form the basis of constructionist epistemology, at least the way Crotty defines it.

Consciousness and intentionality lie at the core of constructionism: “When the mind becomes conscious of something, when it ‘knows’ something, it reaches out to, and into, that object” (Crotty, 1998, p. 44). But post-structuralist and post-humanist perspectives reject these framings of consciousness and intentionality, de-centering language (a social structure) or humans as the source of knowledge construction. Crotty describes the epistemology of subjectivism as a subject imposing meaning on an object (p. 9). But my nascent understanding of post-structural and post-human perspectives is that they reject a clear separation between subject/object in the first place. Meaning is not “imposed” on anything, but constituted by the intra-action (Barad, 2003) between various human and non-human actors.

Right now, my work and research approach fall within the constructionist epistemology. I am interested in taking my work in a more post-structural and post-human direction. But writing this has made me realize that doing so requires rethinking my understanding of agency.

Concluding Thoughts
This is the first in an occasional series of posts in which I work through the type of scholar I am and the type of research I do. I initially envisioned these posts as quite future-focused (what do I want to be/do), but I now write them with the recognition that I’m “always already” there (Barad, 2003).

 

Acknowledgements
I thank Kari Kraus and my classmates in INST888, Jason Farman and my classmates in AMST628N, Shannon Jette and my classmates in KNES789N, Annette Markham + Kat Tiidenberg + Dèbora Lanzeni and my classmates in the Digital Media Ethnography workshop, Karen Boyd, Andrew Schrock, Cynthia Wang, Shaun Edmonds, and Eric Stone for indulging me in conversations about theory/method over the past two years.

In addition, I thank the UMD Libraries, the Interlibrary Loan service, this laptop on which I read articles and wrote notes, the printer, paper, and ink that came together to give me physical copies of texts, the pens for enabling me to take notes on those texts, Twitter, Evernote, WordPress, Scrivener, Wi-Fi connections, and finally the desks and chairs on campus, at home, at conferences, and on the Metro and Amtrak that supported my body while I read and wrote.

References

Barad, K. (2003). Posthumanist performativity: Toward an understanding of how matter comes to matter. Signs: Journal of Women in Culture and Society, 28(3), 801–831.

Barnacle, R. (2005). Research education ontologies: Exploring doctoral becoming. Higher Education Research & Development, 24(2), 179–188.

Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process. London: Sage Publications.

Creating a Productivity System that Works for Me

Like many people, I enjoy having a routine. This summer, after moving from a cubicle into a shared office space, I began going to campus more routinely and working a similar schedule each day. The regular schedule plus the commute activated more natural boundaries around “work” and “home” time. On campus, I focused a bit more and got distracted a bit less. Most important, I felt anchored. I cherish the self-directed and flexible nature of PhD life, but it sometimes left me feeling like a dandelion blowing in the wind.

This new routine has done wonders for my sense of well-being. But it hasn’t done much for my time management skills. I used to think I was great at time management because I always met my deadlines and my expectations. After an exhausting first semester in the PhD program nearly two years ago, I realized I was terrible at time management. The only reason I met my deadlines (and satisfied my perfectionist tendencies) was that I let work take priority over everything else. If I didn’t feel like I had accomplished enough by 5:30, I’d keep working until 9, 10, or 11 pm. If I didn’t feel like I had gotten enough done by Friday evening, I’d let work consume Saturday and/or Sunday. This didn’t leave my body, my mind, or my husband very happy.

Since that realization, I’ve re-framed my attitude toward work (it is an important part of my life, but not the most important) and changed my practices (going to campus regularly). The fall semester started this week, which means goodbye languid summer days, hello bustling campus and fuller schedule. I don’t like feeling overwhelmed by this, and I don’t want to spend the next four months waiting for winter break.

Various productivity systems, designed for academic life and beyond, suggest keeping a detailed schedule or assigning specific tasks to each day. I tried these approaches and found them rigid and stifling. So I’m going to adapt their principles into a system that works for me.

First, I commit to a consistent weekday wake-up and go-to-bed time. My alarm goes off at the same time every weekday, but I snooze it for 5 to 75 minutes. I’d like to limit the snoozing to about 10 minutes. To help with that, I intend to go to bed at a consistent time, and to begin my bedtime routine 30 minutes prior to that bedtime.

Second, I will go to campus on weekdays unless I have a scheduling reason to work from home. My experience this summer reminded me that it’s much easier to treat the PhD as a job when it involves a distinct workplace and a commute.

Third, I’ll restart a practice I followed when I worked full-time — tracking my hours. I was fortunate to have supervisors who let me take comp time if I ever worked more than 40 hours per week, so you bet I tracked my hours. I can get obsessive with practices like this, which is why I refrained from tracking my hours as a PhD student. But since I work on various projects, eagerly say yes to other projects, tend to fall into rabbit holes while working on any project, and am a recovering perfectionist, I think time tracking is essential to improving my time management skills. I keep things simple and do this in a spreadsheet.
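To give a concrete sense of what “simple” means here, the layout is something like this (the columns and entries are illustrative, not a prescription):

    Date        Project         Task                Hours
    Mon 8/27    Dissertation    Literature notes    2.5
    Mon 8/27    RA project      Interview coding    3.0

A quick weekly sum then tells me whether I’m drifting past the 40-ish hours I’m aiming for.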

Fourth, I’ve created a task management workflow to help me figure out what to work on when. I’ve written a month-by-month list of my commitments, deadlines, and events. At the end of each week, I’ll spend half an hour previewing the next week. I’ll create a to-do list with the tasks that need to be completed that week. I’ll then look at the calendar and schedule time blocks to work on those tasks. As I go through the week, I can move things around if needed. After a few weeks of this, I hope to have a better sense of how much I can accomplish in a typical 40-ish hour week and how much time to budget for certain tasks. This will (hopefully) help me let go of the perfectionist tendencies, resist the temptation of distractions (Twitter, I’m looking at you), and understand the “price” of saying yes to a given task.

Finally, I commit to keeping my campus desk tidy. Stalagmites of papers and books make my home desk an uncomfortable place to work, and looking at them unsettles my mind. Yes, I’d like to clean them off, but this is about baby steps. My campus desk is big enough that the two piles that have already sprouted aren’t in the way. I’d like to keep it that way.

So that’s my plan for this semester. Check with me in four months to see how it goes.

Time to Focus

A few weeks ago I played with a cousin’s four-year-old daughter.

“Look at the map!” she cried, laying out a Lego theme park map on the ground. “First we have to go through the ninjas, then we have to go get Hercules.” We ran around the park, her one-year-old brother wandering as we zipped by. She kept yelling out commands, and I’d ask her what we had to do next.

Suddenly, this morphed into a game of whales and humans. Namely, we kept being turned into whales and needing to turn back into humans. We did so by “eating” ice cubes that we had dumped on the ground. Every time she pointed at me, I bellowed like a whale.

Eventually, another child joined in and this turned into a game of house. We each claimed a tree and rushed back and forth, bringing each other cakes and omelettes and turning into superheroes and stealing each other’s cars.

Preschool-age children are full of energy and love make-believe. Engaging with them can be exhausting, but I found it freeing. I could say or do anything and she’d run with it. I declared we’d found a safe zone, I taped silly glasses to my face, I made strange noises, and she took it all in stride.

A few years from now, she’ll outgrow this tendency toward fantastical exuberance. She’ll go to school and focus on learning how the world works and managing a social life. I won’t be able to run up to her and say, “Bloooooooop” with pink flowery glasses taped to my face. In Deleuzian terms, her attention will be territorialized — marshaled into familiar, conventional, and normalized ways of thinking.

I feel myself making a similar transition from doctoral student to ABD. Since advancing to candidacy in May, I’ve been excited to write my dissertation proposal. Finally, I get to focus on the whole reason why I entered the PhD program in the first place. But writing the proposal means I have to focus. The floodlight of my academic gaze must sharpen into a spotlight. And that makes me a little sad.

I now look back on the past year and see it as a period of academic fantastical exuberance.

What’s epistemology? What’s ontology? What did Foucault and Latour and Haraway and Barad say? What’s my theoretical perspective? I’m an interpretivist. No, maybe I’m a critical researcher. Ooh, what is post-humanism? What happens if I drop terms like assemblage and actant and materiality into conversation? Do I want to do ethnography? Is my work content analysis or textual analysis? What discipline am I in? I’m in social computing. No, computer-mediated communication. Ooh, what if I call myself an internet studies researcher? Maybe I’m in media studies.

Hey, look at all these citations! What’s that? And that one? And that one? Let me download the article right now. And these books, let me get them from the library. Oooh look, these other books seem relevant too; let me check them all out. Here’s some sociology, some anthropology, some communication, some feminist scholarship, science and technology studies work, some cultural studies, ooh, political theory, whoa, I didn’t expect to go there, but OK, sure.

What has agency? Hmm, I didn’t think that was relevant to my research, but alright, let’s do that. Oh, you want to deconstruct binaries. Fine. Subject/Object. Nature/Culture. Human/Thing.

And I’ve loved it. Loved it, loved it, loved it. But I can’t possibly read everything I’ve downloaded and checked out before writing the dissertation proposal. I can’t trace the history of all the theories I’ve started learning about.

Yes, the reading and the note-taking and the conversations will continue. But if I want to stay on track, more of that work needs a clearer purpose than simply, “Oooh, that sounds interesting.”

I’m hitting a new academic stage, and it’s time to focus.

Escaping the Trepidation Trap

In the first sentence of his chapter “On Recalling ANT,” Bruno Latour lists four things that “do not work” with actor-network theory: “the word actor, the word network, the word theory and the hyphen! Four nails in the coffin” (p. 15). Apart from its bluntness, this sentence stands out because Latour is one of the creators of actor-network theory.

Any theory has its proponents and critics. One person finds a particular theory useful to inform a worldview or structure a research project, while another finds that theory bunk and proclaims so loudly. Such is the world of academia. We read what others have written, we think about it, sometimes supporting that thinking by studying something empirically, and we write. Often, our writing engages directly with what others have written — affirming, refuting, re-interpreting, critiquing the work of others, taking their work in a new direction, arguing that their work is flawed.

It is a world I happily inhabit, largely because of this focus on writing. I wrote poems and novels as a child, studied journalism as an undergraduate, and consider writing for the public a core part of my personal mission. I’m a published writer, and I don’t fear being published. I entered a PhD program in part to connect my writing with theory, yet I also approached this work with some trepidation.

Actor-network theory is well known and widespread. It is also recent enough that the people who created it are still alive, meaning they can see and respond to the ways that others use or critique their theory. While reading “On Recalling ANT,” I envisioned myself in Latour’s shoes. How would I feel if I invested countless hours into developing a theory, only to see others misunderstand or misinterpret it, or take it in directions that go against its original intention? How would I respond if people challenged my work or pointed out its flaws? Publishing theoretically informed research felt more vulnerable to me than other types of writing because it meant opening myself to critiques that my thinking was wrong. Which to me felt tantamount to critiquing my existence.

Last year, I wrote a paper in which I took a theory in a different direction. I wasn’t sure whether I correctly applied the theory, but the paper survived the peer review process and was published. Shortly after publication, I was invited to present the paper at a workshop that the theory’s founder helped organize. If I had applied the theory incorrectly, I’d find out now. The theorist gently critiqued a few other presentations, but mine proceeded unscathed. Though relieved, I wondered what would happen the next time I used a theory in my work. And the next time. Approaching my research with trepidation seemed exhausting.

Earlier this year, I found my way out of the trepidation trap. I was at another workshop, this time sitting in the audience listening to a theorist whose work has resonated with me for years. While summarizing her research, she remarked, offhandedly, “I’m still figuring out” the theory.

This stunned me. This theorist has published books and articles on this theory; her name is almost synonymous with it. If she’s still figuring it out, that means that any of us who use the theory are also figuring it out. And that means there’s no one “correct” way to interpret or apply the theory.

Someone may dedicate their entire professional life to developing a particular idea; indeed, their name may become synonymous with the idea. But ideas don’t belong to one person. The acknowledgements page in any book reveals the multitude of people involved in developing an idea. And, in the spirit of ANT, we must not forget that non-human actors also play a role. The library, my computer, and my glasses deserve as much credit as the people around me for bringing ideas to fruition.

So if ideas exist separately from authors, then critiques of ideas exist separately from critiques of authors. There’s no reason for me to equate a critique of my thinking to a critique of my existence. I escape the trepidation trap by letting go. I let go of the assumption that thoughts define me. I let go of the sense that there’s a “right” answer. And most important, I let go of the fear.

After re-reading the theory I used in the paper last year, and reading other work that engaged with the theory, I recently wrote another paper on this theory. In it, I critiqued my prior use of the theory and offered a more nuanced analysis. I felt comfortable doing so because theoretically informed research is not some pedestal I’m trying to climb onto. It’s a messy, iterative practice, just like everything else in life. We read, we write, we engage, we reflect, we read more, we write more, we revise, we clarify. We change. Bruno Latour himself moved away from the social constructionist views that pervaded his earlier work.

But separating the idea from the author is not a license to disengage. On the contrary — Latour ended his chapter “On Recalling ANT” with this:

“[Y]ou cannot do to ideas what auto manufacturers do with badly conceived cars: you cannot recall them all by sending advertisements to the owners, retrofitting them with improved engines or parts, and sending them back again, all for free. Once launched in this unplanned and uncharted experiment in collective philosophy there is no way to retract and once again be modest. The only solution is to do what Victor Frankenstein did not do, that is, not to abandon the creature to its fate but continue all the way in developing its strange potential” (p. 24).

And that strange potential includes possibilities for illumination, not just openings for critique. One person recently told me they found the theory paper I published last year helpful because they had never thought of using the theory in that way. And that, more than anything else, is why I do this work in the first place.

A Crack and a Relief

It happened. The crack, when “you can no longer stand what you put up with before, even yesterday” (Deleuze & Parnet quoted in Jackson & Mazzei, 2013); when “one can no longer think things as one formerly thought them, [and] transformation becomes both very urgent, very difficult and quite possible” (Foucault, quoted in St. Pierre, 2014).

For the past several months, I’ve been trying to understand epistemology and ontology — what they mean, what they mean for me, and what they mean for my research. I read “The Foundations of Social Research” by Michael Crotty. I read “The Body Multiple” by Annemarie Mol. I read other articles, mostly from communication, science and technology studies, and cultural studies.

I continued to analyze quantitative and qualitative data and write papers, but I felt increasingly perturbed, as if this work wasn’t adequately capturing what the research team said we were studying. I kept spouting my one-line summary of my dissertation research: “I study how parents post pictures of their kids online and what that means for kids’ identity development, sense of self, and understanding of privacy,” even after realizing that I’m not actually studying parents or kids or online or identity.

Epistemologically, I sensed that I wasn’t a positivist, but I couldn’t figure out whether I was a constructionist or a subjectivist. Theoretically, I didn’t think I was an objectivist, and I sensed that I might be an interpretivist who could one day become a critical scholar. I could be doing phenomenology, or potentially hermeneutics, or maybe symbolic interactionist work. I remained unsure of aligning myself with any methodology besides “qualitative,” which I do primarily through the methods of interviews and textual analysis.

Today, all of that fell apart and also came into sharp relief, thanks to readings on “New Materialism” and a conversation at the weekly journal club of my university’s physical cultural studies program.

I realized that I’ve been using what St. Pierre (2016) calls “conventional humanist qualitative methodology” — interviewing people and coding the data as a way to capture some aspect of their lived experience. I thought I’d sidestepped positivism because I don’t offer “hypotheses,” don’t calculate “inter-rater reliability,” and don’t purport to “predict” behavior. But I do define “research questions,” collect “data,” and code it to fill a “gap in the knowledge” — all trappings of logical positivism.

And this would be fine, except that I’m also discussing Foucauldian analysis and Actor-Network Theory and assemblage. And I dig it. It resonates with me and it’s informing how I approach my dissertation. So no wonder the conventional research process I’m using feels stale — it does not align with the new (to me) theory/methods that are shaping the way I understand the world and the research that occurs in it.

But doing research from a Foucauldian or ANT or assemblage (or feminist or queer or post-structuralist or post-colonial or post-humanist or…) perspective requires more than rethinking methods. It means letting go of a belief that any research, no matter how rigorously or reflexively it is done, can capture what is “going on.” It means accepting that research, and the production of knowledge, is always partial, always incomplete. It means that no matter how precisely or evocatively we write about our research, it remains a semblance.

But nevertheless, I feel this work is urgent. I know this work is difficult. And yes, I believe that it is possible.

(And while I will continue doing research in the conventional humanist qualitative vein, because learning new things takes time, Jackson & Mazzei show how to use these theories to think with typical interview data “within and against interpretivism.”)