AI Ethics
Twentieth-century technologist Melvin Kranzberg’s first law of technology invites scholars to consider ethics alongside innovation: technology is neither good nor bad, nor is it neutral. His law urges us to consider how to fairly and ethically create and implement technological tools and advancements. For our purposes, we define ethics in this spirit, both broadly and in application, as the set of principles that guide the development, use, and impact of technology.
AI ethics refers to the principles that govern the development and use of AI to ensure it benefits society. According to IBM, it is a multidisciplinary field “that studies how to optimize AI’s beneficial impact while reducing risks and adverse outcomes.” Concerns over AI’s impact on writing and creativity, work and job availability, bias baked into algorithms, unequal access to AI technology, and detrimental environmental impacts are all topics introduced in this module.
Here, we offer topics regarding the principled development, use, and deployment of AI for discussion in the composition classroom. We consider AI/GenAI, the urgent issues and risks, and how the industry is responding to those issues through AI ethics. Each section offers a short overview of the topic and some resources to inform teachers and students. These topics include data bias, intellectual property and privacy, accessibility and access, work and human labor, environmental impact, deepfakes, plagiarism, refusing GenAI, and the field of AI ethics.
In viewing AI through an ethical lens, we consider the consequences of these technologies and emphasize the need for critical thinking and responsible application in a variety of contexts. The following list is by no means exhaustive but instead represents a cross-section of concerns, considerations, and topics that currently connect our students, ethics, and AI.
Plagiarism and Cheating
Of particular concern to academics and educators are the roles that LLMs and GenAI might play in academic spaces with regard to plagiarism and cheating. Scholars across fields and journals seek to review, study, and build policies for present and future contexts (for how we address these issues, see WCP Policies and Recommendations and Developing an AI Pedagogy). Examining plagiarism, cheating, and AI draws attention to the acts of thinking and writing students need for knowledge-building and expertise, and it asks, “When is it sound and ethical practice to employ AI as part of student processes?” Some concerns include offloading cognitive tasks such as brainstorming, structuring language, and in-line editing, particularly for student populations for whom writing poses additional considerations, such as non-native speakers of English and neurodivergent learners. Among responses to these conversations is the acronym FEAL (Faster, Ethical, Accurate, Learning), a series of questions that helps students and educators evaluate if and how AI tools might be used:1
- Is it [the use of AI tools] faster?
- Does the use of AI tools align with the ethical practices of the field, context, or genre?
- Are results accurate, current, and relatable?
- Does the inclusion of AI tools support or accelerate consequential learning?
However, these questions do not cover all reasons students seek out AI tools; conversations surrounding the use of GenAI and LLMs are further complicated when considering their usage by certain student populations, including non-native English speakers and neurodivergent writers, who employ them for increased accessibility (see Accessibility).
For more on conversations about ethics, plagiarism, and cheating, see the following resources.
AI and Data Bias
Artificial Intelligence (AI) systems learn from data to make predictions or decisions. Bias in AI happens when the data, or the way the AI is designed, causes unfair or inaccurate outcomes for certain groups of people. As data moves through the sequential stages of model training, AI systems can compound and magnify these underlying biases.
Since AI systems are increasingly used everywhere—from social media to hiring and banking—teaching students about AI bias helps them understand fairness, equality, and ethical issues in decision-making, communication, and technology use.
Where and How AI Bias Appears
- Data Collection: If the training data reflects unequal treatment in society (e.g., higher arrest rates in certain neighborhoods), the AI will repeat these patterns; the sketch after this list makes this concrete.
- Data Labeling: People who label training data can bring personal assumptions or cultural stereotypes, which become part of the AI’s “knowledge.”
- Developer Team Composition: Without diverse perspectives, development teams might not notice biases that harm people different from themselves.
- After Deployment: Without checks and balances after the training period, even a well-designed model can become biased if used in a biased environment or if it makes decisions that affect already marginalized communities.
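To make the data-collection point concrete, here is a minimal simulation sketch (all rates are hypothetical) showing how uneven data collection, rather than uneven behavior, can produce the skewed labels a model then learns from:

```python
# Minimal sketch with hypothetical numbers: two neighborhoods have the
# same underlying offense rate, but neighborhood A is patrolled 3x as
# heavily, so its arrest (label) rate in the training data is 3x higher.
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05                  # identical in both neighborhoods
RECORDING_CHANCE = {"A": 0.9, "B": 0.3}   # chance an offense is recorded

def sample_training_data(n=100_000):
    rows = []
    for _ in range(n):
        hood = random.choice(["A", "B"])
        offense = random.random() < TRUE_OFFENSE_RATE
        # the label is *recorded arrests*, not actual offenses
        arrested = offense and random.random() < RECORDING_CHANCE[hood]
        rows.append((hood, arrested))
    return rows

data = sample_training_data()
for hood in ("A", "B"):
    labels = [arrested for h, arrested in data if h == hood]
    rate = sum(labels) / len(labels)
    print(f"Neighborhood {hood}: arrest rate in training data = {rate:.3f}")

# A well-calibrated model trained on these labels will predict roughly
# 3x the risk for neighborhood A, even though behavior is identical.
```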
Consequences of AI Bias
- Effect on Data Representation: If a group is missing or underrepresented in training data, AI systems often perform poorly for that group.
- Effect on Fairness Metrics: Different ways of measuring fairness, such as equal error rates across groups or the same positive outcome rate for each group, may lead to different conclusions about what “fair” means; a toy comparison follows this list.
- Effect on Law and Rights: Biased AI systems in court sentencing can contribute to longer or unjust sentences for specific communities. Furthermore, biased facial recognition can lead to misidentification or privacy concerns, particularly for women and people of color.
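The fairness-metrics point can be shown with a toy calculation. In the hypothetical counts below, a screening model has identical error rates for two groups (fair by an “equal error rates” standard) yet selects them at different rates (unfair by a “same positive outcome rate” standard):

```python
# Hypothetical confusion counts per group: (TP, FP, TN, FN).
groups = {
    "group_1": (40, 10, 40, 10),  # 50% of this group is qualified
    "group_2": (16, 16, 64, 4),   # 20% of this group is qualified
}

for name, (tp, fp, tn, fn) in groups.items():
    selected = (tp + fp) / (tp + fp + tn + fn)  # positive outcome rate
    fnr = fn / (tp + fn)                        # miss rate for the qualified
    fpr = fp / (fp + tn)                        # false alarm rate
    print(f"{name}: selected {selected:.0%}, FNR {fnr:.0%}, FPR {fpr:.0%}")

# group_1: selected 50%, FNR 20%, FPR 20%
# group_2: selected 32%, FNR 20%, FPR 20%
# Equal error rates but unequal selection rates: which result counts as
# "fair" depends entirely on the metric chosen.
```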
AI bias emerges from imperfect data and human decision-making during AI development. When teaching students, we can focus on how real-world examples reveal why fairness and representation are crucial in technology. By examining these case studies, discussing possible solutions, and highlighting the importance of diverse perspectives, we can empower the next generation to be more responsible creators and users of AI.
Hidden Human Labor
When we think of AI, we often imagine advanced computers processing vast amounts of data with minimal human intervention. However, many AI systems rely on behind-the-scenes human labor at almost every step, from preparing training data (labeling images, moderating content) to continually checking outputs after the AI is deployed.
Amid the global turn toward a gig economy, in which companies avoid stable contracts and the legal and ethical obligations that come with them, the labor conditions behind simple, sleek AI agents are particularly alarming. Understanding hidden human labor helps students see that AI is not entirely “automated.” It raises ethical questions about working conditions, fair pay, and the mental health impact on workers who process disturbing content.
Data Labeling and Content Moderation
Data Labeling
Human workers tag or classify raw data (e.g., identifying objects in photos for computer vision and autonomous driving systems). Much of this work is outsourced to platforms like Amazon Mechanical Turk or specialized data-labeling companies like Scale AI. For example, workers labeling images for self-driving cars on crowdwork platforms may earn only a few cents per task, leading to debates about fair compensation. See, for instance, the Ghost Work Project by Mary L. Gray and Siddharth Suri and how Scale AI relies on ‘digital sweatshops’ in the Philippines.
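A rough back-of-envelope calculation (all figures hypothetical; the reporting cited above puts some tasks at “a few cents” each) shows why per-task pay draws scrutiny:

```python
# Hypothetical crowdwork wage estimate for image-labeling tasks.
pay_per_task_usd = 0.03      # "a few cents" per labeled image (assumed)
seconds_per_task = 30        # time to draw boxes on one image (assumed)

tasks_per_hour = 3600 / seconds_per_task
hourly_wage = pay_per_task_usd * tasks_per_hour
print(f"{tasks_per_hour:.0f} tasks/hour -> ${hourly_wage:.2f}/hour")

# 120 tasks/hour -> $3.60/hour, and only if the worker takes no breaks
# and has no tasks rejected -- one reason compensation on crowdwork
# platforms is contested.
```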
Content Moderation
Even after an AI model is built, humans stay involved in handling edge cases, new data, or changing requirements. People manually review social media posts, chat logs, or other content to filter out hateful, violent, or illegal material. Moderators can encounter traumatic or disturbing content, which can affect their mental health. For example, many social media platforms hire contractors in countries like the Philippines or India to label and review flagged posts. They often work under intense pressure and strict quotas. See, for instance, Behind the Screen by Sarah T. Roberts, which interviews workers worldwide, and the TIME article on Kenyan Content Moderators.
Ethical and Social Considerations
- Fair Wages: Many data annotators earn low pay without benefits, despite performing work essential for training advanced AI models.
- Working Conditions: Content moderators may experience high stress or emotional harm from exposure to violent or disturbing materials.
- Global Inequalities: Much of this labor is outsourced to countries where workers have fewer legal protections, highlighting global labor disparities.
- Transparency: Companies often don’t reveal how many humans contribute to the process or what these jobs entail, making it difficult for the public to understand AI’s hidden human cost.
Accessibility
Another challenge raised by the integration of AI is accessibility, which is connected to conversations about the access gap but is defined more broadly. In education, accessibility refers to the availability of academic resources, programs, services, and opportunities for all students. For some students, AI can advance parity, particularly in communication and writing for international students, non-native speakers of English, students with physical limitations, and neurodivergent learners. For students like these, AI can increase accessibility through translation and editing programs, automated image descriptions (AID), audio description generators, note-taking, mind-mapping, and brainstorming assistance, speech recognition, and more. First-generation students may also benefit from AI tools, which can offer additional support with advising and navigating the traditions and semester-to-semester protocols that impact student success. However, increased awareness and knowledge about these tools and the diversity of their uses is necessary for accessibility. For instance, the integrated use of these tools can be troublesome for some students, who might ethically engage AI tools and legitimately find benefits in their use, but face pushback from others, including instructors and departmental policy-makers (see Plagiarism and Cheating).
Intellectual Property and Privacy
One dimension of ethics and AI concerns intellectual property and privacy, including issues of copyright and the use of student writing. Assessment tools like Turnitin are helpful in detecting traditional plagiarism and somewhat helpful in detecting the use of AI, but reported detection rates vary widely, which raises concerns for educators and students. Aside from concerns about false accusations of plagiarism (see Plagiarism and Cheating for more), there are also concerns related to student privacy and FERPA when student papers are aggregated for comparison or used to train LLMs. Conversations about intellectual property and privacy extend to other spheres as well: authors, government officials, writing scholars, and communication industries are all raising questions about how AI tools change law, publishing, authorship, and more.
Environmental Impact
The use and development of GenAI come at a high environmental cost. While you may have encountered articles noting that “GPT-4 uses approximately 519 milliliters of water, slightly more than one 16.9 ounce bottle, in order to write one 100-word email,” the environmental impacts extend far beyond water consumption.2 Exact metrics can be hard to obtain: AI companies sometimes claim they do not have precise figures on their environmental impacts, a claim some researchers have questioned. Even so, important research on this topic is emerging.
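To give students a feel for scale, a back-of-envelope calculation using the water figure cited above (with hypothetical usage numbers) can be instructive:

```python
# Scaling the cited per-email figure: ~519 mL of water per 100-word
# GPT-4 email. The usage assumptions below are hypothetical.
ML_PER_EMAIL = 519           # from the reporting cited above
emails_per_week = 5          # one student's assumed drafting assistance
students = 20_000            # a mid-sized campus (assumed)

weekly_liters = ML_PER_EMAIL * emails_per_week * students / 1000
print(f"~{weekly_liters:,.0f} liters of water per week")

# ~51,900 liters per week -- on the order of a backyard swimming pool --
# from a single narrow use case, before counting model training or
# hardware manufacture.
```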
Broadly, we can categorize these impacts into three main areas: the immense energy and water required to train large AI models; the energy and water used during their post-training operation; and the wider consequences of AI-focused data centers, including construction, infrastructure, and hardware demands. As MIT News puts it:
“The computational power required to train generative AI models that often have billions of parameters, such as OpenAI’s GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressures on the electric grid.
Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.
Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The increasing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.”3
At the same time, some have argued that AI will enable solutions for the climate crisis. While such claims are frequently made by technology companies themselves, we recommend the United Nations’ page on “AI Solutions for the Environment” to learn about ongoing initiatives and partnerships.
Adam Zewe’s MIT News article, “Explained: Generative AI’s environmental impact,” and “The Environmental Impacts of AI – Primer” from Hugging Face provide useful starting points for those interested in learning more. For a perspective from within industry, we recommend Microsoft’s “Accelerating Sustainability with AI: Innovations for a Better Future.” Shaolei Ren and Adam Wierman’s article, “The Uneven Distribution of AI’s Environmental Impacts,” discusses some troubling inequities related to these impacts. Similarly, Michael Kwet explores these issues in his book, “Digital Degrowth: Technology in the Age of Survival.”
The AI Access Gap
The AI Access Gap refers to the systemic disparities in access to and utilization of artificial intelligence technologies across different populations, organizations, and socioeconomic groups. This gap manifests in multiple ways, including technological infrastructure, economic resources, educational opportunities, and organizational policies.
The gap is characterized by several distinct but interrelated disparities:
Socioeconomic Disparities
In high-income urban areas, AI integration is increasingly commonplace in education, healthcare, and daily life, while low-income and rural areas may lack even basic digital infrastructure. This creates a fundamental divide in access to AI-enabled services and opportunities.
Organizational Barriers
Many employees face restrictions on AI tool usage for various reasons:
- Legitimate regulatory compliance concerns (e.g., HIPAA requirements)
- Organizational policies stemming from risk aversion
- Lack of clear guidelines for appropriate AI use in professional contexts
Cultural Impact
The gap has significant implications for cultural representation in AI development and deployment, including:
- Underrepresentation of minority perspectives in AI training data
- Risk of erasure or misrepresentation in digital spaces
- Healthcare disparities in AI diagnostic accuracy across different demographic groups
Knowledge and Skills
An “ignorance gap” compounds the access issue, as many individuals lack sufficient training or knowledge of how to effectively use AI, even when they have technical access to the tools (see AI Literacy).
Addressing the gap depends on a comprehensive approach that considers technological, educational, economic, and cultural factors.
- Policy solutions that favor balanced organizational guidelines over outright prohibitions can give users clarity about appropriate use.
- Educational initiatives work to close the gap by providing formal training in usage and prompt engineering, supporting skill development with hands-on application, and investing in digital literacy programs for underserved communities.
- Infrastructure development can diversify training data and increase the availability of modern devices and connectivity, extending access to underserved areas and communities.
Resources and Sources
AI Ethics
- Ammanath, Beena. “Thinking Through the Ethics of Tech… Before There’s a Problem.” Harvard Business Review, 9 Nov. 2021, https://hbr.org/2021/11/thinking-through-the-ethics-of-new-techbefore-theres-a-problem.
- Dubber, Markus D., Frank Pasquale, and Sunit Das, editors. The Oxford Handbook of Ethics of AI. Oxford University Press, 2020.
- Kranzberg, Melvin. “Technology and History: ‘Kranzberg’s Laws.’” Bulletin of Science, Technology & Society, vol. 15, no. 1, Feb. 1995, pp. 5–13, https://doi.org/10.1177/027046769501500104.
- IBM. “AI Ethics.” IBM, 2023, https://www.ibm.com/topics/ai-ethics.
- Silvergate, Paul H., et al. “Beyond Good Intentions: Navigating the Ethical Dilemmas Facing the Technology Industry.” Deloitte Insights, 27 Oct. 2021, https://www2.deloitte.com/us/en/insights/industry/technology/ethical-dilemmas-in-technology.html.
Plagiarism and Cheating
- Ammanath, Beena. “Thinking Through the Ethics of Tech… Before There’s a Problem.” Harvard Business Review, 9 Nov. 2021, https://hbr.org/2021/11/thinking-through-the-ethics-of-new-techbefore-theres-a-problem.
- Becker, Kimberly P., et al. “Framework for the Future: Building AI Literacy in Higher Education.” Moxie, 18 July 2024, https://moxielearn.ai/wp-content/uploads/2024/06/Ai-literacies-white-paper.docx.pdf.
- Dietis, Nikolas. “Three Ways to Use ChatGPT to Enhance Students’ Critical Thinking in the Classroom.” Times Higher Education, 8 Jan. 2024, https://www.timeshighereducation.com/campus/three-ways-use-chatgpt-enhance-students-critical-thinking-classroom.
- Dwivedi, Yogesh K. “Opinion Paper: ‘So what if ChatGPT wrote it?’ Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy.” International Journal of Information Management, vol. 71, Aug. 2023, ScienceDirect, 102642, https://doi.org/10.1016/j.ijinfomgt.2023.102642.
- Gašević, Dragan, et al. “Empowering learners for the age of artificial intelligence.” Computers & Education: Artificial Intelligence, vol. 4, 2023, ScienceDirect, 100130. https://doi.org/10.1016/j.caeai.2023.100130.
- Gurung, Regan A. “Get a FEAL for AI.” The Teaching Professor, 5 Sep. 2023, ResearchGate, https://www.researchgate.net/publication/374868239_Get_a_FEAL_for_AI.
- Hubbard, Jacob. “The Pedagogical Dangers of AI Detectors for the Teaching of Writing.” Composition Studies Journal, 30 June 2023, https://compstudiesjournal.com/2023/06/30/the-pedagogical-dangers-of-ai-detectors-for-the-teaching-of-writing/.
- Kranzberg, Melvin. “Technology and History: ‘Kranzberg’s Laws’.” Technology and Culture, vol. 27, no. 3, 1986, pp. 544-560. https://doi.org/10.2307/3105385.
- Kwon, Diana. “AI is Complicating Plagiarism. How Should Scientists Respond?” Nature, 30 Jul. 2024, https://www.nature.com/articles/d41586-024-02371-z.
- Mathewson, Tara Garcia. “AI Detection Tools Falsely Accuse International Students of Cheating.” The Markup, 14 Aug. 2023, https://themarkup.org/machine-learning/2023/08/14/ai-detection-tools-falsely-accuse-international-students-of-cheating.
- Peters, Martine. “Stop Focusing on Plagiarism, Even Though ChatGPT is Here: Create a Culture of Academic Integrity Instead.” Harvard Business Publishing, 14 Sep. 2023, https://hbsp.harvard.edu/inspiring-minds/stop-focusing-on-plagiarism-even-though-chatgpt-is-here.
- Silvergate, Paul H., et al. “Beyond Good Intentions: Navigating the Ethical Dilemmas Facing the Technology Industry.” Deloitte Insights, 27 Oct. 2021, https://www2.deloitte.com/us/en/insights/industry/technology/ethical-dilemmas-in-technology.html.
- “The AI Efficiency Myth: Why Instant Error Correction May Be Hurting Your Writing.” Moxie, 9 Oct. 2024, https://moxielearn.ai/blog/the-ai-efficiency-myth-why-instant-error-correction-might-be-hurting-your-writing.
AI and Data Bias
- Angwin, Julia, et al. “Machine Bias.” ProPublica, 23 May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
- Buolamwini, Joy, et al. “Gender Shades.” Gender Shades, 2018, https://gendershades.org/.
- West, Sarah Myers, et al. “Discriminating Systems: Gender, Race, and Power in AI.” AI Now Institute, Apr. 2019, https://ainowinstitute.org/publication/discriminating-systems-gender-race-and-power-in-ai-2.
- University of Minnesota. “Mapping Prejudice.” Mapping Prejudice, 2022, https://mappingprejudice.umn.edu/.
Accessibility
- “AI and Accessibility.” Cornell Center for Teaching Innovation, updated 2024, https://teaching.cornell.edu/generative-artificial-intelligence/ai-accessibility.
- Gibson, Rob. “The Impact of AI in Advancing Accessibility for Learners with Disabilities.” Educause Review, 10 Sep. 2024, https://er.educause.edu/articles/2024/9/the-impact-of-ai-in-advancing-accessibility-for-learners-with-disabilities.
- Kim, Jinhee. “Exploring Students’ Perspectives of Generative AI-Assisted Academic Writing.” Education and Information Technologies, Jul. 2024, pp. 1-36. http://dx.doi.org/10.1007/s10639-024-12878-7.
- Liang, Weixin, et al. “GPT Detectors Are Biased Against Non-Native English Writers.” Patterns, vol. 4, no. 7, 14 July 2023, https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7.
- Ma, Wenting, et al. “Intelligent Tutoring Systems and Learning Outcomes: A Meta-Analysis.” Journal of Educational Psychology, vol. 106, no. 4, 2014, pp. 901–918. https://www.apa.org/pubs/journals/features/edu-a0037123.pdf.
- Mathewson, Tara Garcia. “AI Detection Tools Falsely Accuse International Students of Cheating.” The Markup, 14 Aug. 2023, https://themarkup.org/machine-learning/2023/08/14/ai-detection-tools-falsely-accuse-international-students-of-cheating.
- Mowreader, Ashley. “Report: Generative AI Can Address Advising Challenges.” Inside Higher Ed, 5 Sep. 2024, https://www.insidehighered.com/news/student-success/academic-life/2024/09/05/survey-college-advisers-could-benefit-ai-assistance.
- Snow, Jackie. “How People with Disabilities are Using AI to Improve Their Lives.” NOVA, PBS, 30 Jan. 2019, https://www.pbs.org/wgbh/nova/article/people-with-disabilities-use-ai-to-improve-their-lives/.
Intellectual Property and Privacy
- Alter, Alexandra, and Elizabeth A. Harris. “Franzen, Grisham and Other Prominent Authors Sue OpenAI.” New York Times, 20 Sep. 2023, https://www.nytimes.com/2023/09/20/books/authors-openai-lawsuit-chatgpt-copyright.html.
- Appel, Gil, et al. “Generative AI Has an Intellectual Property Problem.” Harvard Business Review, 7 Apr. 2023, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.
- Band, Jonathan, and Cliff Lynch. “AI and Copyright: 3 Key Issues.” Educause Review, 25 Jul. 2024, https://er.educause.edu/multimedia/2024/7/ai-and-copyright-3-key-issues.
- Coffey, Lauren. “Professors Cautious of Tools to Detect AI-Generated Writing.” Inside Higher Ed, 9 Feb. 2024, https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/02/09/professors-proceed-caution-using-ai.
- Kassorla, Michelle. “Teaching with GAI in Mind.” Educause, 14 Dec. 2023, https://er.educause.edu/articles/2023/12/teaching-with-gai-in-mind.
- Kelly, Samantha Murphy. “Teachers are Using AI to Grade Essays. But Some Experts are Raising Ethical Concerns.” CNN Business, 6 April 2024, https://www.cnn.com/2024/04/06/tech/teachers-grading-ai/index.html.
- “Report on Copyright and Artificial Intelligence.” U.S. Copyright Office, https://www.copyright.gov/ai/.
- Day, Kathleen. “What’s Yours Isn’t Mine: AI and Intellectual Property.” Johns Hopkins Carey Business School, 14 June 2024, https://carey.jhu.edu/research/whats-yours-isnt-mine-aI-intellectual-property.
- Klosek, Katherine, and Marjory S. Blumenthal. “Training Generative AI Models on Copyrighted Works is Fair Use.” ARL Views, 23 Jan. 2024, Association of Research Libraries, https://www.arl.org/blog/training-generative-ai-models-on-copyrighted-works-is-fair-use/.
Environmental Impact
- UN Environment Programme. “AI Solutions for the Environment.” UNEP, 2024, https://www.unep.org/topics/digital-transformations/digital-accelerator-lab/ai-solutions-environment.
- Zewe, Adam. “Explained: Generative AI’s Environmental Impact.” MIT News, Massachusetts Institute of Technology, 17 Jan. 2025, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117.
- Luccioni, Sasha, et al. “The Environmental Impacts of AI — Primer.” Hugging Face, 3 Sept. 2024, https://huggingface.co/blog/sasha/ai-environment-primer.
- Microsoft. “Accelerating Sustainability with AI: Innovations for a Better Future.” Microsoft, Jan. 2025, https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Accelerating-Sustainability-with-AI-2025.pdf.
- Ren, Shaolei, and Adam Wierman. “The Uneven Distribution of AI’s Environmental Impacts.” Harvard Business Review, 15 July 2024, https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts.
- Kwet, Michael. Digital Degrowth: Technology in the Age of Survival. Pluto Press, 2024.
The AI Access Gap
- Cacal, Nicole. “Unequal Access to AI and Its Cultural Implications.” Medium, 24 Jan. 2024, https://medium.com/the-modern-scientist/unequal-access-to-ai-and-its-cultural-implications-0948a8042c91.
- De la Torre, Adela, and James Frazee. “Bridging the AI Divide: A Call to Action.” Inside Higher Ed, 4 Apr. 2024, https://www.insidehighered.com/opinion/views/2024/04/04/call-action-address-inequity-ai-access-opinion.
- Goldenthal, Emma, et al. “Not All AI are Equal: Exploring the Accessibility of AI-Mediated Communication Technology.” Computers in Human Behavior, vol. 125, 2021, https://doi.org/10.1016/j.chb.2021.106975.
- Jensen, Kyle. “We Need to Address the Generative AI Literacy Gap in Higher Education.” Times Higher Education, 18 Mar. 2024, https://www.timeshighereducation.com/campus/we-need-address-generative-ai-literacy-gap-higher-education.
- Mollick, Ethan. “Reshaping the Tree: Rebuilding Organizations for AI.” One Useful Thing, 27 Nov. 2023. https://www.oneusefulthing.org/p/reshaping-the-tree-rebuilding-organizations.
- Pham, Hoang, et al. “How Will AI Impact Racial Disparities in Education?” SLS Blogs, 29 Jun. 2024, Stanford Center for Racial Justice, https://law.stanford.edu/2024/06/29/how-will-ai-impact-racial-disparities-in-education/.
- Trucano, Michael. “AI and the Next Digital Divide in Education.” Brookings Institution, 10 July 2023, https://www.brookings.edu/articles/ai-and-the-next-digital-divide-in-education/.
Footnotes
1. Gurung, Regan A. “Get a FEAL for AI.” The Teaching Professor, 5 Sep. 2023, ResearchGate, https://www.researchgate.net/publication/374868239_Get_a_FEAL_for_AI.
2. Crouse, Megan. “Sending One Email with ChatGPT Is the Equivalent of Consuming One Bottle of Water.” TechRepublic, 20 Sept. 2024, https://www.techrepublic.com/article/generative-ai-data-center-water-use/.
3. Zewe, Adam. “Explained: Generative AI’s Environmental Impact.” MIT News, Massachusetts Institute of Technology, 17 Jan. 2025, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117.