All Writing and Communication Program instructors should have a policy that provides students with expectations about using generative AI tools (ChatGPT, DALL-E, etc.) in class. Instructors may adopt the program’s recommended policy below, or, in consultation with the WCP Director, use one of the other options discussed or develop their own policy. These policies should be built on the principles of responsibility, transparency, and documentation discussed below.
These requirements, principles, and policies stem from the following understanding about AI tool use (as of July 2023):
- Generative AI tools are here to stay
- Generative AI tools are increasingly integrated into expectations for professional, social, and civic practice, in addition to academic practice
- Students need to learn about critical and responsible AI tool use
- Most students follow course policies and instructor expectations
Contents
- 1 Background
- 2 Principles for All WCP Courses
- 3 The Challenge of Banning Generative AI Tools and/or Relying on AI Detectors
- 4 Practices for Discussing and Using Generative AI Tools in Class
- 5 Policy Options
- 5.1 Recommended Policy: Generative AI Tools Allowed In Specified Instances
- 5.2 Option 2: Generative AI Tools Allowed—With WCP Director Consultation And Approval
- 5.3 Option 3: Generative AI Tools Not Allowed—With WCP Director Consultation And Approval
- 5.4 Option 4: Develop Your Own Policy—With WCP Director Consultation And Approval
- 6 Resources
- 7 Acknowledgements
Background
Generative AI “refers to computer systems that can produce, or generate, various forms of traditionally human expression, in the form of digital content including language, images, video, and music” (MLA-CCCC Joint Task Force, 5). Generative AI includes standalone programs such as ChatGPT and DALL-E; increasingly, generative AI “assistance” or “collaboration” is integrated into other applications such as Google Docs, Microsoft Word, and Adobe Photoshop. Knowing the basics of how AI works and the kinds of tools available is foundational for understanding the current and potential effects of generative AI tools in our classes.
Following the MLA-CCCC Joint Task Force on Writing and AI and other researchers and teachers (see the Resources section below), our goal should be to consider how generative AI tools may be used in alignment with disciplinary knowledge about teaching and learning writing and communication. Most importantly, writers (or speakers, designers, etc.) are responsible for the artifacts they produce. This responsibility includes ensuring that any facts or cited sources presented are correct (and are not, for example, the “hallucinations” of ChatGPT). For students in particular, this responsibility includes being transparent about their compositional process (e.g., through process documents and reflections) and following course and institute policies related to plagiarism and cheating. It also means documenting sources (including AI-generated content) to make clear distinctions between the author’s ideas and ideas that are not the author’s.
Principles for All WCP Courses
Generative AI tools are unable to do the work of learning for students, though these tools may be able to support that learning. Generative AI tools cannot themselves meet course outcomes—they cannot “develop knowledge of genre conventions” or “develop flexible strategies for generating, revising, editing, and proofreading,” for example. But when used responsibly, transparently, and with appropriate documentation, generative AI tools may offer support and opportunities for students in their learning.
The MLA-CCCC Joint Task Force notes that “critical AI literacy is now part of digital literacy” (11). As teachers of multimodal composition and practitioners of digital pedagogy, we are encouraged to consider—critically—the ways new technologies (such as generative AI tools) can support student learning and communication practices. Minimally, that means helping students consider the ways generative AI tools and AI-generated content fit within existing expectations for academic and professional work—expectations such as providing process documents, citing sources, and reflecting on one’s work. These expectations are expressed in these three principles:
- Responsibility: Students are responsible for the work they submit. This means that any work they submit should be their own, with any AI assistance appropriately disclosed (see “Transparency” below) and any AI-produced content appropriately cited (see “Documentation” below). This also means students must ensure that any factual statements produced by the generative AI tool are true and that any references or citations produced by the generative AI tool are correct.
- Transparency: Any generative AI tools students use in the work of the course should be clearly acknowledged as indicated by the instructor. This applies not only when students use content directly produced by a generative AI tool but also when they use a generative AI tool in the process of composition (for example, for brainstorming, outlining, or translation). See the Discussing and Using AI Tools in Class document for guidance about transparently disclosing and reflecting on generative AI tool use.
- Documentation: Students should cite any content generated by a generative AI tool as they would when quoting, paraphrasing, or summarizing ideas, text, images, or other content made by other people. See the Discussing and Using AI Tools in Class document for guidance about citing AI-generated content as a secondary source, including in recommended MLA or APA formats.
In general, these principles provide a framework for talking about and using AI tools in the context of writing and communication education. See the Discussing and Using AI Tools in Class document for ways to integrate these principles into your courses.
The Challenge of Banning Generative AI Tools and/or Relying on AI Detectors
Students are using generative AI tools, and the use of these tools is very difficult to identify with certainty, even with purported AI detectors. For better or worse, we now live in an increasingly AI-integrated world. Therefore, a policy that bans generative AI tool use in your course may be very difficult to truly enforce. Trusting AI detectors or trying to otherwise “catch” students using generative AI tools may also lead to unproductive, adversarial relationships with students. (See Lance Eaton’s “AI Plagiarism Considerations Part 1: AI Plagiarism Detectors” for more about the cons of AI detectors.)
As educators, we should help students think about critical and responsible use of generative AI tools in relation to their writing and communication learning. As such, WCP recommends use of a flexible policy that (1) allows generative AI tool use in circumstances defined by the instructor and (2) requires students to follow the three principles of responsibility, transparency, and documentation discussed above. Such a policy does not mean that instructors need to allow generative AI tool use at all times or completely change their course to emphasize using generative AI tools. It does mean that instructors who adopt this policy should allow generative AI tool use for at least one aspect of the course (perhaps a single project) and discuss appropriate modes for using those tools responsibly, transparently, and with appropriate documentation. Such a policy engages with the facts of generative AI tool use, provides a way for students to use those tools appropriately, and gives the instructor flexibility for when generative AI tools are used.
Of course, such a policy will not stop some students from trying to use generative AI tools inappropriately, passing off AI-generated content as their own thought, writing, and/or communication. But for the most part, students want to learn and want to follow the rules. The upside of helping the vast majority of students consider critical and responsible ways to use generative AI tools almost certainly outweighs the risk of a smaller number of students using these tools inappropriately. We are at the moment when expectations and “rules” for using generative AI are being formed; we have an opportunity to ensure that these expectations align with what we know about good writing, communication, and education.
If you feel strongly about prohibiting generative AI tool use in your course, see Option 3 below and consult with the WCP Director to design a policy. Those developing such a policy should consider how to make inappropriate generative AI use difficult through assignment design, assessment practices, and other measures.
Practices for Discussing and Using Generative AI Tools in Class
Generative AI tools are new for all of us. Some of us want to dive in to explore the possibilities of AI tools for teaching multimodal composition; some of us want to go slower. At a minimum, WCP instructors should discuss generative AI tools in the context of their course AI policy. Beyond this, instructors are also encouraged to discuss generative AI tools as a part of class and experiment with ways of using generative AI in parts of the course. That experimentation may be incremental: it might be one discussion, or an in-class activity, or one stage (brainstorming, say) of one project. More generally, instructors are encouraged to learn more about the ways generative AI tools might be critically and responsibly used in the writing and communication classroom. See the Discussing and Using AI Tools in Class document for additional ideas.
Policy Options
The following policy options cover the spectrum of possible generative AI tool use in WCP classes. The final option gives instructors the flexibility to develop their own policy language or adapt a policy from another source; in that case, the instructor should speak with the WCP Director to review the policy.
There are four policy options offered below:
- Recommended Policy: Generative AI Tools Allowed In Specified Instances
- Option 2: Generative AI Tools Allowed—With WCP Director Consultation And Approval
- Option 3: Generative AI Tools Not Allowed—With WCP Director Consultation And Approval
- Option 4: Develop Your Own Policy—With WCP Director Consultation And Approval
Recommended Policy: Generative AI Tools Allowed In Specified Instances
Instructor notes
- Allows instructors the greatest flexibility in having students use or not use generative AI tools in instances the instructor determines
- Allows instructors to control the scope of generative AI tool use in the class
- Allows instructors flexibility in how much they want to discuss using generative AI critically (i.e., giving students more freedom with these tools means providing more guidance about how to appropriately use them)
- Provides students with a critical framework and set of expectations for engaging generative AI tools in the course and in their writing/communication processes
This course is about growing in your ability to write, communicate, and think critically. Generative AI agents such as ChatGPT, DALL-E 2, and others present great opportunities for learning and for communicating. However, AI cannot learn or communicate for you, and so cannot meet the course requirements for you.
In this course, using generative AI tools in the work of the course (including assignments, discussions, ungraded work, etc.) is allowed only in instances specified by your instructor.
As with any technology, generative AI tools need to be used critically and according to academic and professional expectations. Thus, in instances in which your instructor allows generative AI tool use, you are expected to adhere to these principles:
- Responsibility: You are responsible for the work you submit. In instances in which your instructor allows generative AI tool use, this means that any work you submit should be your own, with any AI assistance appropriately disclosed (see “Transparency” below) and any AI-generated content appropriately cited (see “Documentation” below). This also means you must ensure that any factual statements produced by a generative AI tool are true and that any references or citations produced by the AI tool are correct.
- Transparency: Any generative AI tools you use in the work of the course should be clearly acknowledged as indicated by the instructor. This applies not only when you use content directly produced by a generative AI tool but also when you use a generative AI tool in the process of composition (for example, for brainstorming, outlining, or translation).
- Documentation: You should cite any content generated by an AI tool as you would when quoting, paraphrasing, or summarizing ideas, text, images, or other content made by other people.
Using generative AI tools at times not allowed by the instructor will be considered an infraction of the Georgia Tech Honor Code subject to investigation by the Office of Student Integrity. Likewise, using generative AI tools in the course without adhering to these principles will be considered an infraction of the Georgia Tech Honor Code subject to investigation by the Office of Student Integrity.
Option 2: Generative AI Tools Allowed—With WCP Director Consultation And Approval
Instructor notes
- Allows students to use generative AI tools as they see fit, provided that they do so responsibly, transparently, and with appropriate documentation
- Provides students with a critical framework and set of expectations for engaging AI tools in the course and in their writing/communication processes
- Requires substantive attention to teaching/learning responsible and critical use of generative AI tools (i.e., giving students more freedom with these tools means providing more guidance about how to appropriately use them)
This course is about growing in your ability to write, communicate, and think critically. Generative AI agents such as ChatGPT, DALL-E 2, and others present great opportunities for learning and for communicating. However, AI cannot learn or communicate for you, and so cannot meet the course requirements for you.
In this course, using generative AI tools in the work of the course (including assignments, discussions, ungraded work, etc.) is allowed.
As with any technology, generative AI tools need to be used critically and according to academic and professional expectations. Thus, when using generative AI tools in the work of this course, you are expected to adhere to these principles:
- Responsibility: You are responsible for the work you submit. In instances in which your instructor allows generative AI tool use, this means that any work you submit should be your own, with any AI assistance appropriately disclosed (see “Transparency” below) and any AI-generated content appropriately cited (see “Documentation” below). This also means you must ensure that any factual statements produced by a generative AI tool are true and that any references or citations produced by the AI tool are correct.
- Transparency: Any generative AI tools you use in the work of the course should be clearly acknowledged as indicated by the instructor. This applies not only when you use content directly produced by a generative AI tool but also when you use a generative AI tool in the process of composition (for example, for brainstorming, outlining, or translation).
- Documentation: You should cite any content generated by an AI tool as you would when quoting, paraphrasing, or summarizing ideas, text, images, or other content made by other people.
Using generative AI tools in the course without adhering to these principles may be considered an infraction of the Georgia Tech Honor Code subject to investigation by the Office of Student Integrity.
Option 3: Generative AI Tools Not Allowed—With WCP Director Consultation And Approval
Instructor notes
- See “The Challenge of Banning Generative AI Tools” above
- Requires consideration of assessment processes
This course is about growing in your ability to write, communicate, and think critically. Generative AI agents such as ChatGPT, DALL-E 2, and others present great opportunities for learning and for communicating. However, AI cannot learn or communicate for you, and so cannot meet the course requirements for you.
In this course, we will be learning and communicating without the aid of generative AI tools. Using generative AI tools in the work of the course (including assignments, discussions, ungraded work, etc.) is not allowed. Using generative AI tools in the course will be considered an infraction of the Georgia Tech Honor Code subject to investigation by the Office of Student Integrity.
Option 4: Develop Your Own Policy—With WCP Director Consultation And Approval
Instructor notes
- Gives instructors flexibility in designing their own policy
- Allows for the use of wording from others’ policies
- Provides students with a critical framework and set of expectations for engaging generative AI tools in the course and in their writing/communication processes
[Instructors can develop their own policy or use a policy they’ve seen elsewhere. At a minimum, the policy should include the following:
- Clear expectations about the extent of allowed generative AI tool use in the course (e.g., in specified situations, in all situations, etc.)
- Clear expectations about how generative AI tools must be used responsibly, transparently, and with appropriate documentation]
Resources
Eaton, Lance. “AI Plagiarism Considerations Part 1: AI Plagiarism Detectors.” AI + Education = Simplified, Substack, June 2024, https://aiedusimplified.substack.com/p/ai-plagiarism-considerations-part
—. “AI Plagiarism Considerations Part 2: When Students Use AI.” AI + Education = Simplified, Substack, June 2024, https://aiedusimplified.substack.com/p/ai-plagiarism-cons
—. “Classroom Policies for AI Generative Tools.” Google Docs, https://docs.google.com/document/d/1RMVwzjc1o0Mi8Blw_-JUTcXv02b2WRH86vw7mi16W3U/preview
Gimpel, Henner, et al. Unlocking the Power of Generative AI Models and Systems Such as GPT-4 and ChatGPT for Higher Education: A Guide for Students and Lecturers, ResearchGate, March 2023, https://www.researchgate.net/publication/369369378_Unlocking_the_Power_of_Generative_AI_Models_and_Systems_such_as_GPT-4_and_ChatGPT_for_Higher_Education_A_Guide_for_Students_and_Lecturers_Unlocking_the_Power_of_Generative_AI_Models_and_Systems_such_a?channel=doi&linkId=6417e5d166f8522c38bb42b1&showFulltext=true
MLA-CCCC Joint Task Force on Writing and AI. “Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” Modern Language Association and Conference on College Composition and Communication, 2023. https://aiandwriting.hcommons.org/working-paper-1/
OpenAI. “Educator Considerations for ChatGPT.” OpenAI Platform, https://platform.openai.com/docs/chatgpt-education
Sentient Syllabus Project. Sentient Syllabus Project, http://sentientsyllabus.org/
Smith, Emily. “New Policies Navigate Role of AI Assistants in CS Courses.” Georgia Tech Daily Digest, Georgia Tech College of Computing, 15 Jun. 2023, https://www.cc.gatech.edu/news/new-policies-navigate-role-ai-assistants-cs-courses?utm_source=newsletter&utm_medium=email&utm_content=New%20Policies%20Navigate%20AI%20in%20Computer%20Science&utm_campaign=Daily%20Digest%20-%20June%2022%2C%202023
Webb, Michael. “Considerations On Wording When Creating Advice or Policy on AI Use.” National Centre for AI, 14 Feb. 2023, https://nationalcentreforai.jiscinvolve.org/wp/2023/02/14/considerations-on-wording-ai-advice/.
WMUx. “AI in the Syllabus.” WMUx, Western Michigan University, https://wmich.edu/x/instructors/resources/ai-syllabus
Acknowledgements
Thanks to Dr. Dori Coblentz for sharing her research and thinking about AI tools in the classroom, and for providing many of the resources listed here. Thanks also to those who suggested other resources or who shared their AI tool policies, including Dr. Paige Arrington, Dr. Rachel Dean-Ruzicka, and Dr. Lainie Pomerleau.
Please note that these requirements, principles, and policies will be further developed over the course of 2023-24.