Clustered around a desk outside the high school library, a group of students laugh as they type their names into an artificial intelligence image generator, which returns an image of an artificially generated person based on each name. All the people look funny yet uncannily realistic. Some have off-center eyes, crooked mouths or missing eyebrows and even ears.
Students have begun to use generative AI behind the scenes in the research process for both classrooms and student-run clubs, employing it to summarize and generate articles and even fact-check work, while some wonder whether essential learning is lost through this shortcut.
In debate, AI can be used to summarize research articles or even generate arguments when given points and parameters.
“The Lab team has never used ChatGPT or other AI, but I know other teams have,” former debate member Isaac Sutherland said.
Isaac thinks much is lost when AI is used in debate.
“There’s the thinking-on-your-feet part of debate, and then there’s the preparing, and doing the research and stuff. And that’s where you gain a lot of your skills in debate, so if you don’t have that you’re just not getting anything out of debate,” Isaac said.
U-High English teacher Arney Bray is aware that students are using AI and has warned her class against using it in academically dishonest ways. She thinks AI can be a useful tool for students when reading dense or confusing material, so long as students know how to use it responsibly.
“I’ve seen AI be very helpful to students when there’s very dense text or it’s very confusing, and students will put it into an AI generator that will summarize it, or give it to them in a more concise and succinct form,” she said. “I think that that’s potentially helpful. We just have to be really good at ensuring students use it responsibly.”
In her class, Ms. Bray expects that AI is mostly used in the research process, for tasks like summarizing and generating articles.
“I do know that in research, some students will use it if they have something really dense or confusing, and they may put it into an AI tool for a summary,” Ms. Bray said.
Ms. Bray said she would be interested in running an experiment with students to see what is lost when AI is used to summarize text, comparing summaries generated by AI with summaries written by students.
Along with the potential benefits of AI, Ms. Bray is aware of the racial and other subconscious biases it may reinforce.
“I’ve done exercises in some of the spaces I’ve taught in, where I have students search something like a king, and we see what images are generated and we analyze which pictures pop up and why,” she said. “I think it’s important for us to remember that any technology, any program, is programmed by someone. So whatever biases that person or company might have behind it are going to be reflected in the AI.”
Most images the students found were of white men. Ms. Bray does this exercise to demonstrate to students the misconceptions that AI can reinforce, such as the idea that only white men hold positions of power.
Isaac believes research is the primary area in which AI is used in debate, too, and that this form of use damages the learning process.
“Using AI would just remove any good skills that you would get out of debate,” Isaac said. “I mean, it’s not like you’re losing anything specifically, you’re just losing everything.”