Suggestions for teaching strategies, prompt-writing skills, and tools, plus an overview of those ethical questions.
Accessibility is a primary mission of school libraries, and artificial intelligence (AI) can help with that mission by opening many doors for students. It can re-level, outline, summarize, narrate, and remove distractions from text, unlocking rigorous curricula for struggling readers. Students with dyslexia or ADHD, for example, can get information in the way that is most helpful for them without having to work through the lens of their disability.
Meanwhile, there are many open questions around uses, ethics, and copyright regarding AI. While those concerns remain unresolved, AI can still be incredibly useful for both students and teachers.
AI large language models (LLMs) are useful not only for special education students but for all students, particularly in building writing skills. Both Gemini (formerly Bard) and ChatGPT can help students who are paralyzed by choice in their research to take a step forward—either toward a topic choice or toward search terms for a fully formed research question.
AI is also useful in the editing process. Using a text-based AI tool has several advantages over a human editor; first among them, consistency. The typos, punctuation errors, or misused words that might pass by the human reader will be caught by AI.
Secondly, it is incredibly fast. While students often must wait for a peer or educator to have time to read their papers, AI can complete the task of editing for grammar in a matter of seconds. It can also be used to teach editing by having students look back and identify where and why the AI has changed parts of the text.
As educators, we know how challenging structuring an essay can be for students. The best way to get AI to help with structure is to ask it to outline the student’s paper—which should reveal any organizational struggles.
Generative AI can also produce and complete student work in writing and in math. In fact, most students are already using it. According to a survey by BestColleges.com, 43 percent of college students are using AI applications and 22 percent have used them to complete exams or assignments. That same survey found that 61 percent of respondents think AI tools will become “the new normal.” Focusing instruction on skills students will need in the AI generation will help them build strengths in areas that AI will never be able to subsume: collaboration, relationship management, creativity, empathy, and problem solving. Leaning into a maker pedagogy around AI can help students develop real skills for a workplace that will be built on the backs of these tools.
Teacher uses of AI
There is new evidence that levels of cheating among high school students have remained consistent or have even declined in the past year. This seems like great news! Perhaps AI isn’t facilitating academic dishonesty as most educators feared.
Over the past few months, the thinking around AI and the role it could and should play in education has indeed gradually been shifting, as Washington Post columnist Josh Tyrangiel wrote in his article “An ‘education legend’ has created an AI that will change your mind about AI.” Of course, there are many things to consider: ethical and copyright concerns about how different LLMs source their information, and valid concerns about how AI will affect the growth, development, and education of the next generation. Tyrangiel characterizes the outrage over student use of AI as an “academic satanic panic.” Meanwhile, Khan Academy has integrated AI so that its Khanmigo tutor can guide students rather than generate answers, work at the skill level of the student, and focus on accuracy and student safety. This customization of student learning may turn such applications into tools our students use every day.
Many librarians may feel that learning how to use AI and how to integrate it into instruction is too daunting a task, but there are practical ways to approach this. Here are some starting points.
Prompt writing strategies
It is often noted in the literature around AI copyright that the creativity lies in the prompt rather than the product. As such, there are ways to write prompts that make it easier for the AI engine to produce the desired result: name the role the AI should play, state the task plainly, describe the audience, and specify the format of the output.
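To make those parts concrete, here is a minimal sketch (not tied to any particular AI tool) of how a structured prompt might be assembled from explicit role, task, audience, and format pieces. The helper function and all of its example values are hypothetical illustrations, not features of any product named in this article.

```python
# Illustrative sketch: building a prompt from explicit parts so that
# nothing the AI needs to know is left implicit. All names and example
# text here are hypothetical.

def build_prompt(role: str, task: str, audience: str, output_format: str) -> str:
    """Assemble a prompt that names a role, a task, an audience, and a format."""
    return (
        f"You are {role}. "
        f"Your task: {task} "
        f"Write for {audience}. "
        f"Format the response as {output_format}."
    )

# Example: the essay-outlining use described above.
prompt = build_prompt(
    role="a patient writing tutor",
    task="outline the attached essay draft, noting any organizational gaps.",
    audience="a ninth-grade student",
    output_format="a numbered outline with one sentence per point",
)
print(prompt)
```

A student who fills in each slot deliberately, rather than typing a one-line request, tends to get output that is easier to evaluate and revise.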
Ethical considerations
The world of AI is changing very, very quickly, so the concerns of the present may be addressed, or they may persist. In working with students, I would primarily caution them about hallucinations, copyright, and bias.
AI engines are designed to give you the answer that you want. To do so, they may hallucinate an answer that has no basis in reality—either in whole or in part. AI can make up articles, authors, quotes, and events in an effort to generate what the user is requesting. As such, AI products should be thoroughly fact checked, which in itself builds strong research skills.
Copyright is also a concern. The copyright status of AI products is currently unclear. According to the U.S. Copyright Review Board, “The U.S. Copyright Office will not register a work that was created by an autonomous artificial intelligence tool.” This ruling was affirmed by Judge Beryl A. Howell of the U.S. District Court for the District of Columbia in the 2023 case of Thaler v. Perlmutter. This could place products created by AI in the public domain. However, the Terms of Service for ChatGPT note that “OpenAI hereby assigns to you all its right, title and interest in and to Output.” This uncertainty may make students and researchers more reluctant to rely on AI-generated work.
Last, and most important, is bias. AI engines are only as good as the datasets from which they derive. In a body of data that large, our societal biases shine brightly (Cowgill, 2020). However, AI engines learn from the interactions they have with their users. The more we use, correct, and revise our interactions with AI engines, the more they will learn to mitigate the bias of their sources.
Ultimately, AI engines and LLMs are not replacements for good research practices, but they can supplement them. They contain different kinds of information, which produces varying results—both in quality and in fidelity. That information also comes from somewhere; currently, that means from users and user data. Using AI well and thoughtfully in these early stages helps users around the globe build an ethical, respectful, and accurate community.
Further reading
“AI & Accessibility.” Cornell Center for Teaching Innovation.
Cowgill, B., et al. “Biased programmers? Or biased data? A field experiment in operationalizing AI ethics.” Proceedings of the 21st ACM Conference on Economics and Computation, pp. 679–681, July 2020.
Kelly, N. “Costs and benefits of artificial intelligence editing tools.” MDPI Blog.
Langreo, L. “Beyond ChatGPT: The other AI tools teachers are using.” Education Week.
Poth, R. D. “AI tools that help teachers work more efficiently.” Edutopia.
Shaw, J. “AI in the academy: Cautious embrace of a new technology.” Harvard Magazine.
Spector, C. “What do AI chatbots really mean for students and cheating?” Stanford Graduate School of Education.
Thaler v. Perlmutter, U.S. District Court for the District of Columbia, 2023.
Tyrangiel, J. “Opinion | An ‘education legend’ has created an AI that will change your mind about AI.” Washington Post.
Welding, L. “Half of college students say using AI is cheating.” BestColleges.
IdaMae Craddock is the librarian at the Community Lab School in Charlottesville, VA, where Kristen Wilson is high school lead teacher.