Editor’s note: This is the third in a series of guest columns by Rettig on AI. Read parts one and two here.

Have you noticed that, since the beginning of the year, when you Google a term, the first description to appear is an automated AI response? We worry that it lulls people into intellectual lethargy. Students may stop there and not dig further. In fact, Google searches don’t take students to verified scholarly work. Professional journal articles, for example, are often protected behind journal paywalls and accessible only through professional memberships.

What we see on Google is not the best of the best. It hasn’t been vetted, and it isn’t even written by people. Understanding how AI works would be helpful here. The process, while admittedly ingenious, is deeply flawed.

Perry Rettig, professor and former vice president at Piedmont University. (Courtesy)

AI tools like ChatGPT are large language models trained on vast collections of text, including millions of webpages, books and articles from the internet. These tools generate responses using the patterns learned from their training data, essentially predicting what words are likely to come next in a sentence. They rely on complex algorithms to produce responses, but they do not verify facts or evaluate the credibility of those responses.
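For readers curious about what “predicting what words are likely to come next” means in practice, here is a deliberately tiny sketch. It counts which word follows which in a toy corpus and predicts the most frequent follower. Real large language models use neural networks trained on vastly more text, but the core idea is the same, and so is the key limitation: the prediction reflects patterns in the training data, not verified fact.

```python
from collections import Counter, defaultdict

# Toy "training data": the model will learn only which word tends
# to follow which in this text.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count the words that follow each word.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("sat"))  # prints "on" -- the only word ever seen after "sat"
```

Note that the prediction is purely statistical: if the corpus contained a falsehood repeated often enough, the model would happily reproduce it.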

AI may even synthesize information in new ways, producing entirely fabricated content in a phenomenon known as an “AI hallucination.” Hallucinations occur when the model generates responses that seem credible but are ultimately false. These may include invented facts, documents or sources. Hallucinations are among the most significant limitations of today’s AI systems and highlight the importance of human critical thinking.

Elias Clinton, Piedmont University associate professor, chair of the Department of Exceptional Child Education and associate dean of the College of Education. (Courtesy)

Artificial intelligence, though, can help generate ideas and perhaps move us in the right direction. But it must not take away our thinking — our intellectual engagement. Teachers play a critical role in showing students how to use AI responsibly and guiding them through that process.

Teachers can also guide students by emphasizing that good writing is a process, not just a finished product. When students submit outlines, drafts, and revision logs, they’re engaging in thoughtful, reflective work that AI should support, not replace. In fact, part of the process can include showing how students used AI to generate ideas or explore sources — so long as they also reflect on and personalize that content, making it their own.

The first opportunity is to help students generate ideas or main topics. AI can also assist in developing an outline, helping students organize their thoughts in a logical sequence before they begin writing the actual essay.

Students and researchers can use artificial intelligence to assist in the editing and revision process to ensure clarity and coherence. AI can also help students think through how different sources or theories might connect. Of course, students will need to verify and cite sources themselves.

Again, artificial intelligence should be a “partner” for the student-writer, not a ghostwriter. It’s an opportunity to grow and to engage intellectually. A good check for teachers is to ask the students to verbalize in their own words what they have written and learned. Artificial intelligence cannot do that for you.

Writing assignments may need to evolve by asking students to reflect on their process, make personal connections to the content, or apply concepts in ways that AI alone cannot replicate. While concerns about shortcuts and misuse are certainly warranted, AI also presents a valuable new opportunity to teach metacognition, research habits, and ethical literacy practices — if educators are prepared to guide their students thoughtfully through this evolving digital landscape.

As AI continues to evolve, so must our approaches to teaching, learning and assessing. We need to embrace a balanced perspective — one that leverages AI’s benefits while preserving critical thinking and intellectual engagement. Educators must lead this charge by designing assignments that integrate AI thoughtfully, teaching students to verify information and emphasizing the human elements of learning that technology cannot replace.

By doing so, we can prepare students not merely to coexist with AI, but to use it as a tool that enhances rather than diminishes their intellectual growth. We’ll need to return to this conversation often because the story of AI and education is still being written.

Elias Clinton is a Piedmont University associate professor who is chair of the department of Exceptional Child Education and associate dean of the College of Education. Perry Rettig is a professor and former vice president at Piedmont University. He has spent 42 years as an educator, including stints as a public schoolteacher and principal.
