What is AI?

Defining AI

Artificial Intelligence (AI) is a widely used yet ambiguously defined term encompassing a collection of historical and emergent technologies that profoundly impact many facets of our lives, social systems, the environment, and beyond. Although much of the current media landscape presents AI as though it were a coherent and consistent term, as Yarden Katz has argued, the reality is that AI has always had a “nebulous and shifting character.”1 Two reasons for this are important to highlight. First, since the term was introduced in 1955, debates about the definition and nature of AI have been an intrinsic feature of the field itself. Second, and just as significant, AI’s “nebulous and shifting character” stems from the complex and ethically fraught ways in which the field has arisen out of broader epistemic and sociopolitical paradigms.

“AI” within the Field

Just as algorithmic technologies predate the digital computer (for example, see Striphas), questions concerning the nature of so-called “intelligent machines” predate the use of the term “artificial intelligence.” An oft-cited touchstone in this history is Alan Turing’s influential 1950 paper, “Computing Machinery and Intelligence,” in which Turing writes: “I propose to consider the question, ‘Can machines think?’”2 As Turing’s provocation and his famous “imitation game” illustrate, even within the domain of AI developers and practitioners, efforts to define the term, and debates surrounding it, have long been prominent. A significant number of definitions have linked AI to “machines that perform functions that require intelligence when performed by people,”3 while others have defined it as “the science of making machines do things that would require intelligence if done by man.”4 These two points of emphasis loosely correlate with what are referred to as strong AI and weak AI, and can be indirectly linked to the contemporary proliferation of the term “artificial general intelligence.”5

Most histories of AI describe two primary schools of thought that have shaped the field over the past several decades: symbolic AI and connectionist AI. As Ashok K. Goel of the Georgia Institute of Technology explains, “While symbolic AI posits the use of knowledge in reasoning and learning as critical to producing intelligent behavior, connectionist AI postulates that learning of associations from data (with little or no prior knowledge) is crucial for understanding behavior.”6 The broad field of machine learning takes this latter approach, training models on (predominantly human-produced) data rather than attempting to design AI through logical processes (such as codified deductive logic or if/then statements).
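
To make this distinction concrete, consider the following minimal Python sketch, a hypothetical toy example of our own rather than anything drawn from the sources above. The first function encodes knowledge as hand-written if/then rules, in the symbolic style; the second induces its behavior by counting associations in a handful of labeled examples, in the spirit of the connectionist, machine-learning approach.

    # A toy contrast between the two paradigms (illustrative only).

    # Symbolic style: intelligence as explicit, hand-coded rules.
    def classify_symbolic(animal):
        # A designer writes the knowledge down as if/then logic.
        if animal["has_feathers"]:
            return "bird"
        return "not a bird"

    # Connectionist/ML style: behavior is induced from labeled data.
    def train(examples):
        # Count how often each feature value co-occurs with each label.
        counts = {}
        for features, label in examples:
            key = (features["has_feathers"], label)
            counts[key] = counts.get(key, 0) + 1
        return counts

    def classify_learned(counts, animal):
        # Predict whichever label the data most often associates
        # with this feature value.
        value = animal["has_feathers"]
        bird = counts.get((value, "bird"), 0)
        other = counts.get((value, "not a bird"), 0)
        return "bird" if bird >= other else "not a bird"

    data = [({"has_feathers": True}, "bird"),
            ({"has_feathers": True}, "bird"),
            ({"has_feathers": False}, "not a bird")]
    model = train(data)
    print(classify_learned(model, {"has_feathers": True}))  # -> bird

The learned classifier knows nothing its examples do not show it, which is one reason the quality and provenance of training data matter so much in machine learning.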

One reason for AI’s “nebulous and shifting character” is that it has historically been pursued through various multidisciplinary fields of study, as illustrated by the history of the field of cybernetics. Does AI have a “mind,” does it “think,” can it “reason”? It depends on whom you ask, and likely on the field of study in which they are situated.

“AI” from Alternative Perspectives

As Katz explains, “AI is often treated as a philosophically charged but autonomous technical pursuit, a product of great men’s imaginations.”7 In contrast, many scholars align with Katz in arguing “that developments in computing are shaped by, and in turn shape, social conditions.”8 For many, AI must be understood in relation to its social conditions and as a uniquely potent technology for how it reflects certain ideological and political understandings of people and their world. In addressing the symbolic vs. connectionist paradigms explained above, Katz writes that “these classifications mask more fundamental epistemic commitments. Alison Adam has argued that AI practitioners across the board have aspired to a ‘view from nowhere’—to build systems that apparently learn, reason, and act in a manner freed from social context. The view from nowhere turned out to be a view from a rather specific, white, and privileged place.”9

Defining AI is challenging due to technical, philosophical, and ethical issues, especially those related to race, gender, and class. Ideas about what may be “intelligent” about “artificial intelligence” (if anything) have too often been both overtly and inadvertently laden with raced, classed, and gendered conceptions of “the self.” While by no means comprehensive, “The Basics of AI Ethics” section in this portal briefly addresses some of these concerns.

Our Present Moment

As Critical AI Journal’s “Teaching Critical AI Literacies: ‘Explainer’ and Resources for the New Semester” living document explains, since the 2010s, 

“[…] technologies began to power widespread applications including voice assistants, recommendation systems, and grammar checks. When technologists speak of deep learning (DL), which is a type of machine learning (ML), the learning in question denotes a computer model’s ability to “optimize” for useful predictions while “training” on data (a process that involves adjusting the weights in an elaborate set of statistical calculations). The “learning” is deep because of the multiple computational layers in the very large models that DL involves. Because AI researchers have used this anthropomorphic language for many decades, today’s DL and ML models are often said to “understand,” “learn,” “reason,” “experience,” and “think.” Although most technologists recognize that products like OpenAI’s ChatGPT or Microsoft’s Copilot are built on disembodied statistical models that do not “understand,” “learn,” or “experience” the way that people do, this confusing vocabulary pervades the hype surrounding this resource-intensive technology at the expense of public understanding. Teaching critical AI literacies in the current landscape begins with helping students to distinguish between the functionalities of actually existing technologies, and the fictional “AI” on view in popular media such as Blade Runner (1982), Ex Machina (2014), or Westworld (2016-2022).”

We share this sentiment in the WCP, believing that teaching media literacy today should include helping students productively question, examine, and form well-founded perspectives on these ever more ubiquitous communicative agents.
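
To give a rough sense of what “adjusting the weights” means in the passage quoted above, here is a deliberately simplified Python sketch, our own illustration rather than anything from the Critical AI explainer. A single weight is nudged repeatedly to shrink prediction error on a few data points; deep learning models perform this same kind of optimization over billions of weights arranged in many layers.

    # A toy illustration of "training": repeatedly adjust a weight
    # to reduce prediction error on data.

    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
    weight = 0.0            # the model starts out "knowing" nothing
    learning_rate = 0.05

    for step in range(100):
        for x, target in data:
            prediction = weight * x              # the model's guess
            error = prediction - target          # how far off it was
            weight -= learning_rate * error * x  # nudge the weight to shrink error

    print(round(weight, 3))  # converges toward 2.0, the pattern in the data

Nothing in this loop “understands” anything; it only reduces a numerical error, which is the literal operation behind the anthropomorphic vocabulary of “learning.”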


Generative AI: LLMs

“Generative AI” (GenAI) refers to AI that generates new, unique, or novel multimodal content. Not all AI is generative, however; GenAI is sometimes contrasted with predictive AI and at other times with discriminative AI. Currently, there are GenAI tools that produce text, code, images, audio, and video. Many tools that qualify as “generative AI” were developed using the transformer architecture, one of the key building blocks of large language models (such as the models underlying ChatGPT or Claude). A key characteristic of large language models (LLMs) is that they can not only “learn” new information but also acquire new “skills” while training on a wide range of data. In this way, LLMs are not programmed in the traditional sense but, instead, developed through a time-consuming and highly resource-intensive training process.

At their core, LLMs are predictive machines. As metaLAB (at) Harvard explains in The AI Pedagogy Project:

“One way to think about large language models is to picture them as an extremely powerful form of autocomplete. A simple autocomplete takes the last word you typed, refers to a table to find the most likely words that could follow it, and then suggests some options. In a tool like this, the table might have been generated by analyzing a large body of text, counting how many times each word follows any other word, and calculating probabilities. Similarly, large language models like GPT-3.5 and GPT-4, which power ChatGPT, analyze the input text and then predict the next likely word based on the words that have come so far, and add that word to the string. This continues, word by word, until a complete response is generated.”
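
The “table” in this analogy can be made literal in a few lines of Python. The sketch below, our own toy bigram model and vastly simpler than any LLM, follows the same recipe the passage describes: build a table of next-word frequencies from a body of text, then generate a continuation one word at a time.

    import random
    from collections import Counter, defaultdict

    # Build the "table": for each word, count which words follow it.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    table = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        table[word][next_word] += 1

    # Generate a continuation word by word, sampling by frequency.
    def autocomplete(word, length=5):
        output = [word]
        for _ in range(length):
            followers = table.get(output[-1])
            if not followers:
                break
            words, counts = zip(*followers.items())
            output.append(random.choices(words, weights=counts)[0])
        return " ".join(output)

    print(autocomplete("the"))  # e.g., "the cat slept on the mat"

An LLM replaces this simple lookup table with a neural network containing billions of weights, which lets it condition each prediction on long stretches of preceding text rather than on a single word.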


Glossary

Generative AI 
As MIT News puts it, “Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.”

Prompt
A prompt is “the information, sentences, or questions that you enter into a Generative AI tool” to generate an output (e.g., text, images, or video).10

Algorithm
In computer science, a classic definition of an algorithm comes from Harold Stone, who stated in a 1971 textbook that “an algorithm is a set of rules that precisely define a sequence of operations.”11 As Matteo Pasquinelli writes, “historians have found that Indian mathematics has been predominantly algorithmic since ancient times, meaning that the solution to a problem was proposed via a step-by-step procedure rather than a logical demonstration.”12
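
A classic instance of such a step-by-step procedure is Euclid’s algorithm for finding the greatest common divisor of two integers, sketched here in Python for illustration:

    # Euclid's algorithm: a precisely defined sequence of operations
    # for finding the greatest common divisor of two integers.
    def gcd(a, b):
        while b != 0:
            a, b = b, a % b  # replace the pair with (b, remainder)
        return a

    print(gcd(48, 18))  # -> 6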

Hallucination 
In the context of generative AI, a hallucination refers to an instance in which a large language model (LLM) generates a nonsensical or inaccurate response to a prompt. Such responses can appear sensible but are, in fact, fabricated.

Machine Learning (ML) 
According to the Berkeley School of Information, “the basic concept of machine learning in data science involves using statistical learning and optimization methods that let computers analyze datasets and identify patterns. Machine learning techniques leverage data mining to identify historic trends and inform future models.”

Deep Learning 
Deep Learning is “a subset of machine learning that uses multi-layered neural networks, called deep neural networks, to simulate the complex decision-making power of the human brain. Some form of deep learning powers most of the artificial intelligence (AI) in our lives today.”

LLM (Large Language Models)  
According to the University of Arizona Library, “A large language model (LLM) is a type of artificial intelligence that can generate human language and perform related tasks. These models are trained on huge datasets, often containing billions of words. By analyzing all this data, the LLM learns patterns and rules of language, similar to how a human learns to communicate through exposure to language. LLMs can perform various language tasks, such as answering questions, summarizing text, translating between languages, and writing content. Some examples of LLMs include ChatGPT, Claude, Microsoft Copilot, Gemini, and Meta AI.”

Natural Language Processing (NLP)
“Natural language processing, or NLP, combines computational linguistics—rule-based modeling of human language—with statistical and machine learning models to enable computers and digital devices to recognize, understand and generate text and speech.”

Training Data 
The data on which a machine learning model is trained.   

AI Literacy
The ability to use generative AI technologies effectively in Writing and Communication while applying critical thinking to their processes and outputs.


Resources and Sources


Defining AI

  • Goel, Ashok. “Looking Back, Looking Ahead: Symbolic versus Connectionist AI.” AI Magazine, vol. 42, no. 4, Jan. 2022, https://doi.org/10.1609/aaai.12026.
  • Katz, Yarden. Artificial Whiteness: Politics and Ideology in Artificial Intelligence. Columbia University Press, 2020.
  • Kurzweil, Ray. The Age of Intelligent Machines. MIT Press, 1990.
  • Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux, 2019.
  • Pasquinelli, Matteo. The Eye of the Master. Verso Books, 2023.
  • Raphael, Bertram. The Thinking Computer: Mind inside Matter. W.H. Freeman, 1976.
  • Striphas, Ted. Algorithmic Culture before the Internet. Columbia University Press, 2023.
  • Turing, Alan. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, Oct. 1950, pp. 433–60, https://doi.org/10.1093/mind/lix.236.433.
  • Wooldridge, Michael. A Brief History of Artificial Intelligence. Flatiron Books, 2020.


Footnotes

  1. Katz, Yarden. Artificial Whiteness: Politics and Ideology in Artificial Intelligence. Columbia University Press, 2020, pp. 8-9. ↩︎
  2. Turing, Alan. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, Oct. 1950, p. 433, https://doi.org/10.1093/mind/lix.236.433. ↩︎
  3. Kurzweil, Ray. The Age of Intelligent Machines. MIT Press, 1990, p. 14. Quoted in Ekmekci, Perihan Elif, and Berna Arda. Artificial Intelligence and Bioethics. Springer, 2020, p. 19. ↩︎
  4. Raphael, Bertram. The Thinking Computer: Mind Inside Matter. W.H. Freeman, 1976. Quoted in Ekmekci, Perihan Elif, and Berna Arda. Artificial Intelligence and Bioethics. p. 19. ↩︎
  5. Ekmekci, Perihan Elif, and Berna Arda. Artificial Intelligence and Bioethics. p. 19. ↩︎
  6. Goel, Ashok. “Looking Back, Looking Ahead: Symbolic versus Connectionist AI.” AI Magazine, vol. 42, no. 4, Jan. 2022, p. 83, https://doi.org/10.1609/aaai.12026. ↩︎
  7. Katz, Yarden. Artificial Whiteness: Politics and Ideology in Artificial Intelligence. p. 4. ↩︎
  8. Ibid. ↩︎
  9. Ibid., p. 6. ↩︎
  10. Harvard University. “Getting Started with Prompts for Text-Based Generative AI Tools.” Huit.harvard.edu, 30 Aug. 2023, huit.harvard.edu/news/ai-prompts. ↩︎
  11. Stone, Harold S. Introduction to Computer Organization and Data Structures. McGraw-Hill Companies, 1971, p. 4. ↩︎
  12. Pasquinelli, Matteo. The Eye of the Master. Verso Books, 2023, p. 26. ↩︎