Exploring Digital Privacy and Security in Elementary Schools @ CHI 2019

How do elementary school educators think about privacy and security when it comes to technology use in the classroom? What privacy and security lessons do students receive? Below, I describe findings and recommendations from a paper I co-wrote on this topic with Marshini Chetty, Tammy Clegg, and Jessica Vitak. I’ll present this paper at the 2019 ACM Conference on Human Factors in Computing Systems (CHI).

What did we do? Schools across the United States have integrated various digital technologies into K-12 classrooms, even though using them raises privacy and security concerns. As part of a broader project on how children ages 5-11 conceptualize privacy online, we wanted to understand how elementary-school educators decided which technologies to use, how privacy and security factored into these decisions, and what educators taught their students about digital privacy and security.

How did we do it? We held nine focus groups with a total of 25 educators from seven school districts in three metropolitan regions in the U.S. Our participants included teachers, teaching assistants, and student teachers.

What did we find? Educators used a range of digital devices, platforms, applications, resources, and games, some that their districts provided and others that school media specialists recommended. To them, privacy and security meant responsibly handling student data (e.g., login credentials) and minimizing students’ inappropriate use of technology. They largely did not give students lessons on privacy and security. Some educators felt such lessons were not necessary; others found it difficult to make such lessons resonate with their students.

What are the implications of this work? We see an opportunity for the HCI community and those who create educational technologies to help students develop privacy and security skills. This can include designing “teachable moments” into technologies, such as prompts that ask students to think about where their data goes when they submit something online. These are not meant to replace privacy lessons, but to spark conversations among students, teachers, and parents, as well as to help students think about privacy during their everyday interactions with digital technology. School districts and teacher training programs should educate teachers about digital privacy and security issues. Finally, the HCI community and others must grapple with broader tensions about the datafication of education and its concomitant privacy and security concerns.

Read the CHI 2019 paper for more details!

Citation: Priya C. Kumar, Marshini Chetty, Tamara L. Clegg, and Jessica Vitak. 2019. Privacy and Security Considerations for Digital Technology Use in Elementary Schools. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). https://doi.org/10.1145/3290605.3300537

This entry was cross-posted on the Princeton HCI blog.

Why Deepak Chopra is Wrong About Technology

One reason I value my newspaper subscription is that it reminds me not to take things for granted. Especially when it comes to technology.

In a recent column, the Washington Post’s Geoffrey Fowler recounted a conversation with alternative medicine advocate Deepak Chopra. Chopra has been criticized for promoting medical treatments based on pseudoscience, and his views on technology seem to be just as misguided.

“Technology is neutral, number one. Number two, it’s unstoppable,” Chopra told Fowler as they walked through the tech industry’s trade show, CES, earlier this month.

No, and no.

Technology does not just fall from the sky. People create it. Which means that our human frailties get baked right in. Type a question into a search engine, and the results can be just as racist as the responses you might get from people. Use a mathematical model to make lending decisions, and the output can be just as discriminatory as if you ask a loan officer.

People create technology, which means people can also address problems that result from (or are exacerbated by) technology. The question is who is responsible for doing so. Chopra lays that burden squarely on everyday users, blaming them for succumbing to technology’s power.

“I think technology has created a lot of stress for a lot of people, but that’s not the fault of technology,” Chopra told Fowler. “It’s the fault of the people who use technology.”

Sure, we could all be more intentional in our technology use. But absolving technology of any role in stress is disingenuous. It ignores the fact that the people who create the digital technologies we use every day design them to be persuasive. To Chopra, this isn’t the problem, but the solution; the app Chopra developed and the company he advises also employ persuasive design principles to hook people into using their products.

Chopra thinks technology will support well-being by collecting data and using it to optimize our environmental conditions, for example, by changing the lighting in our homes.

“So we just have to accept more surveillance as the price of this way of living?” Fowler asks.

“For the advancement of your well-being, what’s wrong with that?” Chopra responds.

A lot.

First, this kind of blind faith in technology prevents us from talking about the limits of what technology can and should do. Second, it ignores the fact that using data-driven systems to address social problems too often ends up penalizing poor and working-class people. And third, it perpetuates a belief that human experience is simply raw material for companies to datify and exploit for financial gain.

Digital technology is, as Chopra says, “here to stay.” But this flawed understanding of technology needs to go.

For further reading, see the books I cited in the links above.