Newswise — When ChatGPT surged into public life in late 2022, it brought new urgency to long-running debates: Does technology help or hinder kids’ learning? How can we make sure tech’s influence on kids is positive?

Such questions live close to the work of Jason Yip, a University of Washington associate professor in the Information School. Yip’s research focuses on how technology can support collaboration and learning within families.

As another school year approaches, Yip spoke with UW News about his research.

What sorts of family technology issues do you study?

I look at how technologies mediate interactions between kids and their families. That could be parents or guardians, grandparents or siblings. My doctoral degree is in science education, but I study families as opposed to schools because I think families have the biggest impact on learning.

I have three main pillars of that research. The first is building new technologies that give us creative ways to study different kinds of collaboration. The second is going into people’s homes and doing field studies on things like how families search the internet, or how they interact with voice assistants or digital games. We look at how new consumer technologies influence family collaborations. The third is co-design: How do adults work with children to co-create new technologies? I’m the director of KidsTeam UW. We have kids come to the university to work with us as design researchers to make technologies that work for other children.

Can you explain some ways you’ve explored the pros and cons of learning with technology?

I study “joint media engagement,” which is a fancy way of saying that kids can work and play with others when using technology. For example, digital games are a great way parents and kids can actually learn together. I’m often of the opinion that what matters is not the amount of time people spend looking at their screens, but the quality of that screen time.

I did my postdoc at Sesame Workshop, and we’ve known for a long time that if a child and parent watch Sesame Street together and they’re talking, the kid will learn more than by watching Sesame Street alone. We found the same thing in our studies of “Pokémon Go” and “Animal Crossing.” With these games, families were learning together and, in the case of “Animal Crossing,” processing pandemic isolation together.

Whether I’m looking at artificial intelligence or families using internet search, I’m asking: Where does the talking and sharing happen? I think that’s what people don’t consider enough in this debate. And that dialogue with kids matters much more than these questions of whether technology is frying kids’ brains. I grew up in the ’90s, when there was this widespread worry about video games ruining children’s lives. But we all survived, I think.

When ChatGPT came out, it was presented as this huge disruption in how we’ve dealt with technology. But do you think it’s that unprecedented in how kids and families are going to interact and learn with it?

I see the buzz around AI as a hype curve — with a surge of excitement, then a dip, then a plateau. For a long time, we’ve had artificial intelligence models. Then someone figured out how to make money off AI models and everything’s exploding. Goodbye, jobs! Goodbye, school! Eventually we’re going to hit this apex — I think we’re getting close — and then this hype will fade.

The question I have for big tech companies is: Why are we releasing products like ChatGPT with these very simple interfaces? Why isn’t there a tutorial, like in a video game, that teaches the mechanics and rules, what’s allowed, what’s not allowed?

Part of this AI anxiety comes from the fact that we don’t yet know what to do with these powerful tools. So I think it’s really important to try to help kids understand that these models are trained on data with human error embedded in it. That’s something that I hope generative AI makers will show kids: This is how this model works, and here are its limitations.

Have you begun studying how ChatGPT and generative AI will affect kids and families?

We’ve been doing co-design work with children, and when these AI models started coming out, we started playing around with them and asked the kids what they thought. Some of them were like, “I don’t know if I trust it,” because it couldn’t answer simple questions that kids have.

A big fear is that kids and others are going to just accept the information that ChatGPT spits out. That’s a very realistic perspective. But there’s the other side: People, even kids, have expertise, and they can test these models. We had a kid start asking ChatGPT questions about Pokémon. And the kid is like, “This is not good!” Because the model was contradicting what they knew about Pokémon.

We’ve also been studying how public libraries can use ChatGPT to teach kids about misinformation. So we asked kids, “If ChatGPT makes a birthday card greeting for you to give to your friend Peter, is that misinformation?” Some of the kids were like, “That’s not okay! The card was fine, but Peter didn’t know whether it came from a human.”

The third research area is going into the homes of immigrant families and trying to understand whether ChatGPT does a decent job of helping them find critical information about health or finances or economics. We’ve studied how the children of immigrant families are searching the internet and helping their families understand the information. Now we’re trying to see how AI models affect this relationship.

What are important things for parents and kids to consider when using new technology — AI or not — for learning?

I think parents need to pay attention to the conversations they’re having around it. General parenting styles range from authoritative to negotiation-based to permissive. Which style is best is very contextual. But the conversations around technology still have to happen, and I think the most important thing parents can do is say to themselves, “I can be a learner, too. I can learn this with my kids.” That’s hard, but parenting is really hard. Technologies are developing so rapidly that it’s OK for parents not to know. I think families are in a better position when they take on this growth mindset together.

You’ve taught almost every grade level: elementary, junior high, high school and college. What should teachers be conscious of when integrating generative AI into their classrooms?

I feel for the teachers, I really do, because a lot of the teachers’ decisions are based on district policies. So it totally depends on the context of the teaching. I think it’s up to school leaders to think really deeply about what they’re going to do and ask these hard questions, like: What is the point of education in the age of AI?

For example, with generative AI, is testing the best way to gauge what people know? Because if I hand out a take-home test, kids can run it through an AI model and get the answer. Are the ways we’ve been teaching kids still appropriate?

I taught AP chemistry for a long time. I don’t encounter AP chemistry tests in my daily life, even as a former chemistry teacher. So having kids learn to adapt is more important than learning new content, because without adaptation, people don’t know what to do with these new tools, and then they’re stuck. Policymakers and leaders will have to help the teachers make these decisions.