AI Literacy
We define AI literacy as a set of knowledge, skills, and reflective practices that enable individuals to critically engage with AI technologies as users, analysts, and communicators. This includes the ability to recognize how AI systems are designed, what kinds of outputs they produce, and how those outputs should be interpreted and assessed. This section introduces a framework for understanding and teaching generative AI literacy in the Writing and Communication Program (WCP).
What is Generative AI Literacy?
Generative AI literacy refers to the ability to use, analyze, and critically evaluate AI tools that produce new content—text, images, code, sound, or video—based on patterns learned from data. This is distinct from simply knowing how to use an AI tool. Instead, it reflects a broader understanding of how GenAI systems are built, how their outputs are shaped by their inputs, and what social, cultural, and rhetorical implications those outputs may carry.
The Stanford Teaching Commons, drawing on Selber (2004) and Becker et al. (2024), explains that generative AI literacy is multifaceted, involving functional, ethical, rhetorical, and pedagogical dimensions. In the WCP, we emphasize these four overlapping dimensions of GenAI literacy, adapted from Selber’s multiliteracies model and recent AI literacy scholarship:
Functional: How do generative tools work? What are their technical affordances and limitations? Students should be able to describe how GenAI systems generate content and differentiate them from tools like search engines or databases.
Ethical: Why might someone choose to use—or refuse—generative AI? Students should be able to recognize the ethical stakes involved in AI production and use, including concerns related to labor, bias, environmental impact, and academic integrity.
Rhetorical: How do generative tools participate in meaning-making? Students should be able to analyze and shape GenAI outputs for specific audiences, genres, and communicative goals, while recognizing the distinction between human and machine-authored language.
Pedagogical: How can instructors engage generative tools in teaching? Faculty should have a clear rationale for when and why to incorporate AI into the classroom and be prepared to address questions of assessment, authorship, and accessibility.

Understanding How Generative AI Tools Work
At the core of most generative AI writing tools, such as ChatGPT or Microsoft Copilot, are large language models (LLMs). These models operate not by retrieving facts but by predicting the most likely next word (or image, or sound) from the prior input and the statistical patterns learned from their training data. Though their outputs may appear coherent and even persuasive, they do not “understand” language, context, or meaning in the way humans do.
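For instructors who want to make this predictive mechanism concrete, the short Python sketch below imitates it with simple word counts. This is only a toy illustration, not how LLMs are actually built (they use neural networks trained over enormous token collections), but it shows the core move: producing whatever continuation is statistically likely, regardless of whether it is true or understood.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows which in a
# tiny "training corpus," then always pick the most frequent follower. Real LLMs
# use neural networks over tokens rather than raw counts, but the underlying
# move is the same: generate the statistically likeliest continuation.

corpus = (
    "the model predicts the next word "
    "the model does not understand the next word"
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("model"))  # "predicts" -- frequent in the corpus, not verified as true
print(predict_next("word"))   # "the" -- the model only knows what tends to follow
```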
Some platforms now combine LLMs with access to external sources or retrieval-based tools, allowing for more current or source-linked outputs. However, the underlying mechanism remains predictive rather than interpretive. Even when tools cite sources—as in Elicit or ChatGPT’s deep research mode—students should understand that these systems do not comprehend information or reason like humans. Outputs may still be inaccurate, oversimplified, or fabricated, especially when models go beyond what their sources directly support.
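To make that distinction concrete, the sketch below shows the retrieval-augmented pattern in miniature: a question is matched against a small set of stored passages, and the best match is folded into the prompt that would be sent to a model. The sample sources, the keyword-overlap "retrieval," and the function names are illustrative placeholders rather than any particular product's implementation; the point is that retrieval changes what the model conditions on, not how it generates.

```python
# Minimal sketch of retrieval-augmented generation, under the assumptions noted
# above: look up a relevant passage, then build the prompt a tool would send to
# its language model. The model's eventual reply is still a prediction.

SOURCES = [
    "The program requires MLA citation format for all major essays.",
    "Students must disclose any use of generative AI in a short appendix.",
]

def retrieve(question: str) -> str:
    """Return the stored passage that shares the most words with the question."""
    q_words = set(question.lower().split())
    return max(SOURCES, key=lambda text: len(q_words & set(text.lower().split())))

def build_prompt(question: str) -> str:
    """Assemble the prompt a retrieval-augmented tool would pass to its model."""
    context = retrieve(question)
    return (
        "Answer the question using only the source below.\n"
        f"Source: {context}\n"
        f"Question: {question}"
    )

# A real system would now send this prompt to an LLM; the reply can still
# oversimplify or drift beyond what the retrieved source supports.
print(build_prompt("What citation format does the program require?"))
```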
Hallucinations and Sourcing
One of the most significant challenges with GenAI tools is hallucination, a term used to describe false, misleading, or fabricated content generated by AI models. These errors often stem from the model’s architecture: LLMs do not store discrete facts but instead generate outputs based on patterns in training data. Without access to a source of truth, models may produce fictional citations, misquote authors, or invent terminology, especially when prompted in ambiguous or unfamiliar ways.
For students, this introduces new risks in the research and drafting process, particularly if they rely on AI outputs without critical verification. For instructors, it raises questions about how to teach research practices, evaluate student work, and support information literacy in AI-inflected classrooms. Best practices for mitigating hallucinations include:
- Reinforcing citation accuracy and fact-checking protocols (see the citation-check sketch after this list)
- Encouraging cross-referencing and triangulation of sources
- Teaching students to distinguish between generated and retrieved content
- Using AI outputs as a starting point for inquiry, not an end
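As one small example of the first practice above, the sketch below asks the public Crossref API whether a DOI from an AI-generated citation is actually registered. It is a first-pass check only: a failed lookup does not prove fabrication (not every DOI is registered with Crossref), and a successful lookup says nothing about whether the source supports the claim attached to it, so students still need to read the work itself.

```python
import json
import urllib.error
import urllib.request

def doi_is_registered(doi: str) -> bool:
    """Ask the public Crossref API whether a DOI points to a registered work.

    True only means the DOI exists in Crossref's index; it does not mean the
    work says what an AI-generated citation claims it says.
    """
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
        return bool(record.get("message", {}).get("title"))
    except urllib.error.HTTPError:
        return False  # e.g. 404: Crossref has no record -- verify the citation by hand

# Hypothetical DOI for illustration; a False result is a prompt to check further,
# not proof that the citation was invented.
print(doi_is_registered("10.9999/example-doi-from-a-chat-transcript"))
```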
Generative AI Beyond Text: Multimodal Tools
Generative AI now extends far beyond language, with tools available for generating and manipulating images, video, audio, and even code. Instructors and students may encounter these technologies in a variety of creative, professional, or rhetorical contexts.
Image tools (e.g., DALL·E, Midjourney) generate visuals based on text prompts but do not “see” or interpret meaning. Their outputs reflect training data patterns and may carry embedded biases or copyright concerns.
Audio tools can synthesize human-like speech, soundscapes, or music. While useful for accessibility or creative projects, they also raise issues of consent, authenticity, and emotional impact.
Video tools can automate editing or animate scripts, often with limited contextual nuance. Their use in student work should be paired with conversations about credibility and narrative intent.
Research tools (including citation and summarization features) may support early-stage ideation but must be vetted against traditional standards of accuracy, attribution, and reliability.
Teaching Implications: Supporting Student AI Literacy
Teaching AI literacy in writing courses involves more than showing students how to use generative tools. It includes helping them think about when and why they might use AI, how it influences their writing process and sense of authorship, and what larger contexts—such as bias, access, or environmental impact—shape these technologies. Instructors can support this work through assignments, discussion, and reflection that ask students to evaluate AI outputs, make choices about their use, and consider the implications of those choices. This kind of engagement helps students use AI more thoughtfully and with greater awareness of its role in writing and communication.
Resources
- Becker, Kimberly P., et al. “Framework for the Future: Building AI Literacy in Higher Education.” Moxie, 18 July 2024, https://moxielearn.ai/wp-content/uploads/2024/06/Ai-literacies-white-paper.docx.pdf.
- Lacy, Lisa. “Hallucinations: Why AI Makes Stuff Up, and What’s Being Done About It.” CNET, 1 July 2024, https://www.cnet.com/tech/hallucinations-why-ai-makes-stuff-up-and-whats-being-done-about-it.
- Scherlis, Bill. “Weaknesses and Vulnerabilities in Modern AI: Integrity, Confidentiality, and Governance.” SEI Insights (blog), Carnegie Mellon University Software Engineering Institute, 5 Aug. 2024, https://doi.org/10.58012/638h-ab63.
- “When AI Gets It Wrong: Addressing AI Hallucinations and Bias.” MIT Sloan Teaching & Learning Technologies, 2024, https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/.