Teaching Resourcefulness in the Age of Artificial Intelligence

If we made a laundry list of topics that teachers are most worried about right now, it might run a little long. One item that would almost certainly appear is a widespread set of concerns about artificial intelligence (AI) and its role in the work students produce. When ChatGPT first entered the public consciousness a little over a year ago, the number-one worry that teachers expressed was that students would take advantage of AI to be academically dishonest, whether that meant plagiarizing their writing or using the available technology to complete their homework.

To an extent, the fears that teachers harbor about AI can reflect a deficit mindset about the values students hold toward their education. For example, when I asked a group of students (anonymously, to ensure honesty) how they like to use ChatGPT, the most common answer I received was that it was most useful in helping them understand class content that was difficult to unpack, or that a teacher had not fully explained. In addition, many students pointed out that the free versions of AI (which are also the most pervasively used) do not yet have the ability to "talk" to the internet, and that the information they receive can therefore be limited or dated. Those who pay for the most recent models (like GPT-4) get better results, but they too become frustrated by not necessarily understanding whether what various programs produce reflects what a teacher might be looking for.

Ultimately, the constraints students currently experience with AI are often grounded in their lack of resourcefulness. Sure, they can ask a computer to help them. But do they know what questions to ask, and can they verify how good the results are? When we teach students about growth mindset, one thing that is good to emphasize is that asking for help requires a level of skill. Similarly, students won’t get very far with AI if they don’t have enough of a handle on what they’re studying to ask good questions. For example, suppose a student is asked to write about the key reasons that America became involved in the Vietnam War. While an AI-generated response can certainly answer the question, there are certain difficulties with a student submitting the result without doing any research of their own. For one thing, the response might cover topics that have not been explored by the class, which is something a teacher would notice right away. Furthermore, depending on the sources that the AI program is consulting, the facts contained within the product may be inaccurate or biased. In addition, the voice or level of writing skill often will not match the student who is submitting work, which is something that teachers also recognize.

Students are aware of these pitfalls more often than not, and they are also not necessarily looking to be academically dishonest. Rather, in their pursuit of completing work they don't understand, or that they do not have much time to explore further, they consult AI as a study aid. This approach can certainly go too far or backfire, so the question becomes: What can teachers do to help students use AI appropriately, knowing that this new technology not only isn't going anywhere, but that it's also getting better with each passing week?

The answer lies in how we set kids up for success. Just as we want students to know how to use the resources around them correctly (both human and material) to learn about the world, there should be no exception to this approach when it comes to AI. What that means is that we ourselves have to become more knowledgeable by experimenting with available technology, discussing how kids are currently using what is available to them in open and safe ways, and creating clear boundaries for what is acceptable in terms of AI use and what is not.

To consider what setting boundaries looks like, let’s go back to the concerns around cheating or plagiarism. Students are often not fully aware (especially these days) of what is considered academic dishonesty, so we must be clear about where the line falls. For example, students are probably aware that copying and pasting another source in its entirety and pretending they wrote it is unacceptable. However, would they feel the same way about an AI response they modified, perhaps quite a lot? Or what if they consult AI for inspiration, but then craft their own response? To make what is acceptable completely clear, teachers must specifically delineate what is considered cheating, and what is permitted. 

As AI emerges as a strong and enduring force in the world around us, the role of teachers is evolving quickly. Our job is to provide guidance to students about how best to use this technology, model our own best practices for integrating technological resources into the learning process, and keep avenues of communication open so that students do not feel alone as they navigate a growing collection of options. The truth is, we are all learning about AI together, and kids only gain an edge when adults fail to keep pace with the learning curve. In working with AI, prioritizing resourceful behavior for everyone will provide a productive pathway forward.


Written by Miriam Plotinsky, Education World Contributing Writer

Miriam Plotinsky is an instructional specialist with Montgomery County Public Schools in Maryland, where she has taught and led for more than 20 years. She is the author of Teach More, Hover Less, Lead Like a Teacher and Writing Their Future Selves. She is also a National Board-Certified Teacher with additional certification in administration and supervision. She can be reached at www.miriamplotinsky.com or via Twitter: @MirPloMCPS

Copyright© 2024 Education World