


Computer science education is shifting to meet the needs of students and to equip them to master AI.
Learn about new Georgia Tech research at SIGCSE 2026


Georgia Tech @ SIGCSE 2026
One of the country’s largest annual professional gatherings for research in computer science education, the Technical Symposium on Computer Science Education (SIGCSE TS) is organized by the Association for Computing Machinery’s Special Interest Group on Computer Science Education.
SIGCSE 2026 is Feb. 18 – 21 in St. Louis.
Georgia Tech is a leading contributor to SIGCSE 2026, with research across the technical program.
HIGHLIGHTS:
- Award-winning paper in Computing Ethics Education
- 18 Teams with 44 Georgia Tech Authors and 9 Partner Authors
- Research topics include: Culturally Responsive Computing Education, Computing Ethics Education, Online Education, Software Engineering, and Study Abroad and Student Transitions


Meet Our Experts
Discover the full range of new Georgia Tech research in computer science education at SIGCSE 2026.
Through the interactive data story, meet the faculty, students, and partners advancing the field, and explore details of their work.


Who’s Who of Organizations at SIGCSE 2026
Get exclusive early access to an interactive visual analysis of the “Who’s Who of Organizations” with accepted research papers at SIGCSE 2026.
Find your organization of interest and discover all related institutions by topic and the number of people involved.
Leading up to the symposium, the analysis will be accessible through a scavenger hunt. Full public access opens Feb. 18.

Research Highlights

Ethics Education in Computer Science: Insights from 100 U.S. Programs
Best Paper 🏆
Ethics instruction is an important element of undergraduate computer science (CS) education, but how many programs actually require it? A research duo analyzed ethics requirements in 100 four-year, public and private not-for-profit CS bachelor’s programs in the United States to find out.
Most recent studies of how often CS programs include ethics instruction focus on a limited set of institutions, such as top-ranked programs, research-intensive (R1) universities, and ABET-accredited programs.
The goal of the new study was to evaluate whether those prior findings hold across the full range of U.S. colleges and universities. The analysis examined whether CS programs require ethics instruction, either as a standalone course or as content embedded within other courses. The data was grouped by institutional characteristics to align with earlier studies and allow for comparison.
“The findings show that 55% of U.S. CS programs require ethics instruction in some form,” according to lead researcher Grace Barkhuff, a Ph.D. student in Human-Centered Computing. She conducted the work with advisor Ellen Zegura, professor of Computer Science.
“The results also indicate that ABET accreditation and Carnegie research classification are strong factors influencing whether a program requires ethics education,” wrote Barkhuff.

Alex Greenhalgh’s new research examines how students finishing an online master’s program transition to STEM Ph.D. programs. He made the jump himself, earning his online master’s degree in computer science from Georgia Tech while living in New Mexico, then starting the CSE Ph.D. program at the Institute.
The study examined one master’s program and its impact on graduates who went on to pursue a Ph.D. in a STEM-related field. Alumni noted that involvement with graduate research and coursework was a key component of their preparation for a Ph.D. program.
The results demonstrate that an affordable, online, asynchronous graduate STEM program can provide non-traditional students with an effective pathway to Ph.D. enrollment. The paper concludes with recommendations for asynchronous, at-scale degree programs seeking to expand research opportunities for students who want to move on to Ph.D. programs.

Shi Ding developed a conceptual framework that connects responsible AI principles with AI literacy. A practical checklist and actionable prompts give users a way to use AI responsibly in educational and design contexts.
This new work presents TEACH-RAI, a domain-independent framework and toolkit for building responsible and trustworthy human–AI decision-making systems in high-stakes, human-facing applications. By integrating responsible AI principles—such as transparency, fairness, safety, and human agency—into practical evaluation and design workflows, TEACH-RAI supports rapid system assessment and risk identification during development.
The researchers used an educational application as a case study to demonstrate new design and evaluation methods for integrating responsible AI into classroom development pipelines and fostering AI literacy. By lowering the barrier to responsible AI adoption, the work supports emerging AI research, products, and companies in building scalable, human-centered solutions for real-world deployment.
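To make the checklist idea concrete, here is a minimal sketch of how such a toolkit could be represented in code. The principle names (transparency, fairness, safety, human agency) come from the description above; the item wording, scoring, and report format are hypothetical and not drawn from the TEACH-RAI paper.

```python
# A minimal, hypothetical sketch of a responsible-AI checklist toolkit.
# Principle names follow the article; item texts are illustrative only.

from dataclasses import dataclass

@dataclass
class ChecklistItem:
    principle: str   # e.g., "transparency"
    prompt: str      # actionable question for the designer or educator
    satisfied: bool = False

CHECKLIST = [
    ChecklistItem("transparency", "Do users know when they are interacting with AI?"),
    ChecklistItem("fairness", "Has the system been tested across learner subgroups?"),
    ChecklistItem("safety", "Is there a fallback path when the model output is wrong?"),
    ChecklistItem("human agency", "Can the instructor override or disable AI suggestions?"),
]

def risk_report(items):
    """Summarize unmet checklist items by principle for a quick risk scan."""
    gaps = {}
    for item in items:
        if not item.satisfied:
            gaps.setdefault(item.principle, []).append(item.prompt)
    return gaps

if __name__ == "__main__":
    CHECKLIST[0].satisfied = True
    for principle, prompts in risk_report(CHECKLIST).items():
        print(f"[{principle}] open items:")
        for p in prompts:
            print("  -", p)
```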
FACULTY VOICES: What’s Next for CS Education?

AI is going to make CS education more important, not less. CS education is going to evolve from learning to write lines of code to learning to properly instruct the computer—but we could have said the same for every major development in CS history, from punch cards to assembly language to higher-level languages to development frameworks.
David Joyner
Associate Dean for Off-Campus and Special Initiatives, College of Computing


The future of CS education lies in cross‑curricular, K–12 integration that equips teachers, supports responsible AI use, and ensures students in diverse communities have opportunities for quality computing education.
Judith Uchidiuno
Asst. Professor, Interactive Computing

Strong foundations remain non-negotiable in CS education, even as AI tools reshape how students build software. The next phase is teaching students to reason, design, and evaluate systems—then leverage AI to accelerate iteration without sacrificing understanding.
Nimisha Roy
Lecturer, Computing Instruction

The future of CS education, especially in the age of AI, lies in how we develop people, not just programmers. This requires bridging formal and hidden curricular scenarios for lifelong professional development, while helping students develop judgment about when and how to use and adapt AI in their careers.
Pedro Guillermo FeijĂło-GarcĂa
Lecturer, Computing Instruction

FEATURED RESEARCH | School of Computing Instruction
Teaching Students to Use AI Wisely: New Research Highlights Opportunities and Risks in CS Education
Artificial intelligence (AI) is rapidly reshaping how software is built, and increasingly, how it’s taught.
At the 2026 SIGCSE Technical Symposium, researchers from Georgia Tech’s School of Computing Instruction (SCI) will present how AI tools can be integrated into computing classrooms to support learning.
By Emily Smith

Evaluating AI Across the Software Engineering Workflow
In their paper Benchmarking AI Tools for Software Engineering Education: Insights into Design, Implementation, and Testing, SCI faculty member Nimisha Roy, computer science (CS) major Oleksandr Horielko, and associate dean Olufisayo Omojokun explore how AI tools are reshaping software engineering workflows and what educators need to know about them.
The study builds on their previous course redesign work and benchmarks popular AI tools, including GitHub Copilot, ChatGPT, Claude, and Gemini, across key software engineering tasks: design, implementation, debugging, and testing. Five evaluators experienced in using AI tools for software engineering conducted controlled experiments with standardized prompts, measuring task speed, output accuracy, the amount of human correction required, and cross-file consistency.
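As a rough illustration of that setup, the sketch below shows how per-trial measurements like these might be recorded and aggregated into one cell of a tool-by-phase benchmark matrix. The field and function names are assumptions for illustration, not the authors' actual instruments.

```python
# Illustrative only: a tiny harness for the kinds of proxy metrics the study
# describes (task time, accuracy, correction burden, cross-file consistency).

from dataclasses import dataclass
from statistics import mean

@dataclass
class TrialResult:
    tool: str            # e.g., "GitHub Copilot"
    phase: str           # "design" | "implementation" | "debugging" | "testing"
    seconds: float       # time to complete the task
    accuracy: float      # fraction of acceptance checks passed (0..1)
    corrections: int     # number of human edits needed
    cross_file_ok: bool  # did changes stay consistent across files?

def summarize(results, tool, phase):
    """Aggregate one (tool, phase) cell of the benchmark matrix."""
    cell = [r for r in results if r.tool == tool and r.phase == phase]
    if not cell:
        return None
    return {
        "mean_seconds": mean(r.seconds for r in cell),
        "mean_accuracy": mean(r.accuracy for r in cell),
        "mean_corrections": mean(r.corrections for r in cell),
        "cross_file_rate": mean(1.0 if r.cross_file_ok else 0.0 for r in cell),
    }
```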
The results show that AI tools can speed up boilerplate code generation, sketching UML diagrams, and other tasks, but they struggle to maintain test coverage and consistency across files, and to handle complex prompts.
“AI tools are often discussed as being either broadly beneficial or broadly harmful. Our results suggest a more nuanced picture,” Roy said. “The effectiveness of an AI tool depends heavily on where it is introduced in the workflow and on the level of scaffolding provided to students.”
Teaching How to Use AI
The study also highlights instructional challenges. Novice students may see AI as a shortcut to competence, while advanced students may automate tasks they’ve already mastered. Roy notes that students can over-trust AI outputs, assuming they are correct if they “look right,” which can undermine verification and debugging habits.
“Without explicit instruction in verification, refinement, and error detection, AI can unintentionally weaken students’ debugging and reasoning habits,” she said.
Omojokun emphasizes the rapidly evolving nature of AI tools.
“Something that’s considered state-of-the-art at the beginning of a semester can easily become old news before the end of the term. Instructors should continually survey the technological landscape to ensure that the concepts they are teaching aren’t contradictory to what’s current,” he said.
The researchers offer practical guidance for instructors looking to integrate AI effectively into software engineering courses, helping students benefit from AI while developing strong coding and problem-solving skills.


AI tools are often discussed as being either broadly beneficial or broadly harmful. Our results suggest a more nuanced picture. The effectiveness of an AI tool depends heavily on where it is introduced in the workflow and on the level of scaffolding provided to students.
Nimisha Roy
Lecturer, School of Computing Instruction
Georgia Tech


How AI Is Shaping Student Learning in Introductory CS
Extending this work on AI in computing education, two SCI posters also examine how AI tools influence student learning in undergraduate CS classrooms.
The first poster, To Tell or to Ask? Comparing the Effects of Targeted vs. Socratic AI Hints, by Zhixian Liding, Michael Osmolovskiy, Harshith Lanka, Ronnie Howard, Nimisha Roy, and Rodrigo Borela, examines how AI-generated hint styles affect students.
In a randomized trial involving 178 CS students, those who received Socratic, question-based hints spent more time and made more attempts to solve problems. However, they showed no clear long-term learning gains during the study period. The findings highlight the trade-offs between encouraging productive struggle and maintaining efficiency.
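To sketch how two such hint styles might be prompt-engineered, consider the following. The wording of these system prompts is invented for illustration and is not taken from the study.

```python
# Hypothetical system prompts contrasting the two hint styles from the poster.

TARGETED_STYLE = (
    "You are a CS1 tutor. Point directly at the line most likely causing the "
    "failure and state what is wrong, without writing the corrected code."
)

SOCRATIC_STYLE = (
    "You are a CS1 tutor. Never state the bug outright. Ask one short guiding "
    "question that leads the student to re-examine the failing behavior."
)

def build_hint_request(style, student_code, error_message):
    """Assemble chat-style messages for an LLM hint agent."""
    return [
        {"role": "system", "content": style},
        {"role": "user", "content": f"Code:\n{student_code}\n\nError:\n{error_message}"},
    ]
```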
The second poster, AI-Augmented Instruction: Real-Time Misconception Detection, is by Zhixian Liding, Michael Osmolovskiy, Harshith Lanka, Nimisha Roy, and Rodrigo Borela.
It introduces a system that uses AI to classify and cluster student coding errors in real time. An instructor dashboard surfaces common misconceptions across a class, enabling more targeted, timely instructional support in large courses.
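The poster's abstract describes a pipeline built on universal sentence encoders, KMeans, and t-SNE. The sketch below illustrates the embed-then-cluster idea, with a sentence-transformers model standing in for the encoder; the error strings and cluster count are made up for illustration.

```python
# Sketch of the classify-and-cluster idea: embed error descriptions, cluster
# them, and surface the largest clusters for an instructor dashboard.

from collections import Counter
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

errors = [
    "IndexError: list index out of range in grade loop",
    "off-by-one in range() when iterating over grades",
    "TypeError: can't concatenate str and int in print",
    "string concatenation with integer without str()",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder
embeddings = model.encode(errors)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)

# Rank clusters by size; the dashboard would present the largest clusters
# as candidate class-wide misconceptions.
counts = Counter(kmeans.labels_)
for cluster_id, n in counts.most_common():
    members = [e for e, lab in zip(errors, kmeans.labels_) if lab == cluster_id]
    print(f"cluster {cluster_id} ({n} students): {members[0]}")
```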
More to Explore
In addition to these AI-focused efforts, SCI researchers are presenting other CS education work at SIGCSE 2026, which runs from February 18 to 21 in St. Louis, Missouri.
RESEARCH
Papers
Benchmarking AI Tools for Software Engineering Education: Insights into Design, Implementation, and Testing
Nimisha Roy, Oleksandr Horielko, Fisayo Omojokun
Abstract
As generative AI (Gen AI) tools reshape software engineering (SE) workflows, educators are exploring how to meaningfully integrate them into computing education. This experience report presents a structured benchmarking of widely used AI tools—such as GitHub Copilot, GPT-4, Codeium, Claude 3.5, Gemini 1.5, Supermaven, TabNine, Testim, Postman, Eraser.io, and Lucidchart AI—across key SE phases: design, implementation, debugging, and testing. Tools were selected based on industry relevance, accessibility for students, and alignment with common SE tasks. Through controlled experiments conducted by five AI-experienced evaluators with matched exposure levels, we assessed tool performance using standardized prompts, counterbalanced task roles, and a range of proxy metrics—including prompt iterations, task completion time, human correction burden, hallucination frequency, output accuracy, and cross-file consistency—to capture both cognitive load and tool limitations. While AI tools accelerated tasks such as boilerplate generation and UML sketching, they exhibited challenges in test coverage quality, cross-file coherence, and reliability under complex prompts. We discuss educational implications, including managing cognitive load, aligning tools with task types, and explicitly teaching prompt refinement and verification strategies. The paper offers actionable guidance for instructors, curriculum-ready artifacts, and a roadmap for scaling AI integration in SE classrooms, while also noting key limitations to support replication and contextual adoption.
Examining Discourse in a Large Online Education Program: A Machine-in-the-Loop Approach
Erik Goh, Xuan Wang, David A. Joyner, Ana Rusch
Abstract
This paper explores the discourse surrounding large online education programs by focusing on a Computer Science Masters program offered by an R1 research university in the United States. Leveraging a machine-in-the-loop (MITL) approach, the authors combine traditional qualitative methods with computational techniques, including semantic embedding, natural language processing, and unsupervised learning, to extract and analyze discourse from 14 social media platforms. The research identifies various categories of discourse from Program Administration, Structure, and Outcomes, Reputation and Rigor, Interactions and Peer Learning, to Emerging and Niche Discussions. The paper reveals insights into public perceptions, motivations, and concerns related to online education, such as the importance of career outcomes, program flexibility, academic rigor, and community building. The study also demonstrates the value of MITL approaches by integrating large language models (LLMs) into qualitative research to efficiently analyze large datasets and by using semantic differentiation to uncover nuanced discourse for further analysis. The methods and results have implications for educators, curriculum designers, administrators, and future research in online education, highlighting the need for institutions to engage with online communities and monitor public discourse to enhance program perception, quality and impact.
Exploring transitions of graduates from an online master’s in computer science program to doctoral programs
Alex Greenhalgh, Brian Yu, Patrick Deng, David A. Joyner, Nicholas Lytle
Abstract
The flexibility and affordability of online, asynchronous, at-scale degree programs have significantly increased the accessibility of a masters-level graduate education. While studies have been conducted on the general growth of such programs and the quality of the online courses compared to their on-campus counterparts, few (if any) have examined outcomes such as alumni career growth or admission into other graduate programs. This work examines how {program blinded for review} prepared alumni for matriculation into STEM PhD programs. Enrollment data from the National Student Clearinghouse was analyzed to identify key trends in alumni PhD enrollment. Surveys and interviews with program alumni were also conducted to investigate the unique paths that these individuals had taken to beginning their PhD education. This study finds that {program blinded for review} positively impacted alumni PhD experiences in a STEM-related field. Alumni noted that involvement with graduate research and coursework was a key component in their preparation for a PhD program. These results demonstrate that an affordable, online, asynchronous graduate STEM program can provide non-traditional students with an effective pathway to PhD enrollment. The paper concludes with recommendations for asynchronous, at-scale degree programs seeking to expand their research opportunities for students with a desire to move forward into PhD programs.
For TAs, With TAs: A Responsive Pedagogy Co-Design Workshop
Ian Pruitt, Grace Barkhuff, Vyshnavi Namani, Ellen Zegura, William Gregory Johnson, Rodrigo Borela, Benjamin Shapiro, Anu G. Bourgeois
Abstract
Teaching assistants (TAs) play an increasingly vital role in computer science (CS) education, particularly amid rising enrollments, expanding instructional modalities, and the emergence of generative AI tools. In this evolving landscape, CS TAs are taking on greater responsibilities and often serve as the primary point of personal interaction for students, particularly through recitations, lab sessions, and office hours. However, many CS TAs receive limited preparation in inclusive and responsive teaching practices, limiting their ability to effectively support students from diverse cultural and educational backgrounds. To address this gap, we developed a series of responsive pedagogy workshops at two diverse institutions. These workshops aimed to deepen CS TAs’ understanding of inclusive and responsive teaching strategies, support their implementation in practice, and create space for co-design by positioning TAs not only as learners, but as partners in imagining how responsive pedagogy principles could be more effectively integrated into the courses and contexts in which they teach. In this experience report, we describe the design and implementation of these workshops with 118 TA participants, share workshop materials for broader adoption, and reflect on key findings related to integrating responsive pedagogy into CS education through TA training.
Mapping Required Ethics Education in Computer Science: Insights from 100 U.S. Programs
Grace Barkhuff, Ellen Zegura
Abstract
Ethics instruction, as required for ABET-accreditation and recommended by the ACM/IEEE-CS/AAAI curricular guidelines, is an important element of the undergraduate computer science (CS) curriculum. Recent papers which analyze the proportion of programs offering CS ethics education focus on specific types of programs, such as top-ranked programs, programs at R1 institutions under the Carnegie Classification, and/or ABET-accredited programs. This leaves out a large portion of CS programs which may fall under none of those categories including many small colleges. In this paper, we analyze a true random sample of all 4-year Bachelor’s, Public or Private Not-For-Profit CS programs in the U.S. to assess the extent to which the previous data holds true across the full spectrum of U.S. colleges and universities. Using a systematic approach, we look at which CS programs require CS ethics instruction, whether as a standalone course or integrated into other courses. In addition, we break down the data by categorization to replicate previous studies and place our data in conversation with those. We found 55% of all U.S. CS programs require CS ethics in some capacity. Additionally, we found ABET-accreditation and Carnegie research classification to be a major driver toward programs requiring ethics instruction in the U.S.
Posters
Can Afterschool Volunteers Teach Educational Robotics?
Elliot Roe, Jareda Ordona Lim, Cedric Stallworth, Judith Uchidiuno
Abstract
After-school centers present a valuable opportunity to broaden access to Computer Science (CS) education in the United States (US). To address the shortage of CS instructors in afterschool programs, prior work has shown that volunteers and K-12 teachers from other disciplines can be trained to teach foundational CS topics, such as block programming. However, supporting these educators to teach specialized content, such as Robotics, requires further investigation into the training strategies and pedagogical scaffolding required for them to provide high-quality instruction. To address this gap, we present a case study of one afterschool instructor, a kindergarten teacher, whom we trained to teach virtual and physical robotics to fourth and fifth grade students. This study highlights the different levels of instructional support that such teachers need across virtual and physical curricula, and provides strategies that can support teachers without formal CS knowledge to teach robotics.
To Tell or to Ask? Comparing the Effects of Targeted vs. Socratic AI Hints
Zhixian Liding, Michael Osmolovskiy, Harshith Lanka, Ronnie Howard, Nimisha Roy, Rodrigo Borela
Abstract
As enrollment in CS1 courses continues to increase, extensive research has focused on autonomous support to offer personalized assistance for struggling students at scale. However, it is crucial that these intervention techniques do not inadvertently hinder the development of higher-order, computational thinking skills for novice programmers. This poster extends upon the research on LLM-based support by assessing the short- and long-term student outcomes from two carefully prompt-engineered, LLM-generated hint styles: Targeted and Socratic hints. A randomized controlled trial with 178 students was conducted over two semesters in a CS1 course at a large university, allowing students to interact with a hint generation AI agent while attempting course coding assignments. In the short-term, students receiving Socratic hints spent more time, took more attempts, and used more keystrokes to solve coding questions, while committing more repeat errors. Furthermore, this short-term loss in debugging efficiency is not counteracted by any evidence of an improvement in long-term student outcomes. Further research is being conducted to quantify the tradeoff between short-term performance and long-term, higher-order coding skill improvement in the development of educational AI agents.
AI-Augmented Instruction: Real-Time Misconception Detection
Zhixian Liding, Michael Osmolovskiy, Harshith Lanka, Nimisha Roy, Rodrigo Borela
Abstract
Enrollments in introductory computer science (CS1) courses continue to rise, making it difficult for instructors to deliver rapid, individualized feedback that addresses students’ misconceptions at scale. We present an analysis framework and instructor tool that leverage large language models (LLMs) to classify, cluster, and present students’ coding errors in real time. Our approach comprises two main contributions: (1) a prompt-engineered workflow for automatic error detection and a clustering pipeline using universal sentence encoders, KMeans, and t-SNE to group errors into thematic clusters; and (2) a dashboard that enables instructors to review class-wide, LLM-identified errors and dynamically tailor instruction toward current student misunderstandings. Our automated thematic clustering system is able to surface conceptual and strategic pitfalls that often persist beneath superficial debugging. A pilot study is being conducted to evaluate the effectiveness of the dashboard tool in large-scale CS1 instructional settings to enhance active learning at scale.
PlayFutures: Imagining Civic Futures with AI and Puppets
Supratim Pait, Sumita Sharma, Ashley Frith, Michael Nitsche, Noura Howell
Abstract
Our project supports children’s critical engagement with Artificial Intelligence (AI) through participatory workshops envisioning shared public spaces. We ran a workshop with 9-12 year olds (n=7) combining AI, puppet-making, performance and design futuring to reimagine civic spaces. Participants used ChatGPT to visualize changes to local environments, created puppets representing different viewpoints, and performed debates about the proposed changes. Our findings reveal children’s critical engagement with AI as a design tool, their preference for iterative refinement, and their desire for authentic rather than ’AI-like’ content. This work contributes methods for engaging children in civic discussions through making and performing futures. It also demonstrates how these methods can scaffold critical AI literacy.
Gaming Towards Understanding: Shifting AI Perceptions in High School Students
Bryan Wallace, Jane Awuah, Judith Uchidiuno
Abstract
Students’ perceptions of AI are often shaped by commercialized and fictional depictions of AI, leading to misuse, underuse, and over-reliance on AI tools. Such preconceptions place additional burdens on K-12 AI curricula to not only educate students on foundational AI knowledge components, but also uncover and reshape students’ understanding of AI. To understand ways to uncover and address students’ AI misconceptions in formal learning settings, we conducted a study that taught foundational AI concepts using a formal AI curriculum, games, interactive activities, and scenario-based challenges to high school girls. We analyzed student dialog and survey responses data to observe shifts in their perceptions and understanding of AI. Our findings indicate that unstructured AI activities effectively scaffold student reflection and metacognition on AI topics, contributing to significant shifts in perception. This work provides insights for designing engaging and effective AI curricula that address students’ misunderstandings of AI.
Bridging Responsible AI and AI Literacy: The TEACH-RAI Framework and Toolkit for Education, Design, and Research
Shi Ding, Brian Magerko
Abstract
The rapid growth of generative AI in education has deepened the need for both responsible AI practices and AI literacy. However, these domains are often addressed separately, leaving a gap in guidance for educators, designers, developers, and researchers. This paper presents a conceptual framework that connects responsible AI principles with AI literacy and introduces its application through a practical checklist as a toolkit, with a hypothetical case example, which offers actionable prompts for responsible use of AI in educational and design contexts. Together, the TEACH RAI framework and toolkit aim to provide both theoretical grounding and accessible resources to foster responsible engagement with AI in learning environments.
Modeling Ethical Technology Use with Generative AI
Grace Barkhuff
Abstract
This poster describes a single-session module for an undergraduate computing ethics course focused on generative AI (GenAI) use in higher education. The session asks students to reflect on both their own use (or non-use) of GenAI as well as that of their instructors. Details of the session, including pre-reading, session topics, and reflection questions, are included so the module may be repeated at other institutions. In the discussion, I discuss the importance of computing ethics instructors as models for ethical technology use in the classroom.
Student Research Competition
Undergraduate Students’ Struggles in Computer Science
Sai Nakirikanti, Idel Martinez-Ramos, Betsy Disalvo
Abstract
Computing Education Research (CER) has made valuable contributions to improving undergraduate computing. CER tends to focus on instructional challenges, developing tools or techniques for teaching, or researching a single institution or classroom experience and sometimes may lack a student-first perspective. To identify if there were student concerns that current research was overlooking, we designed an open-ended survey that asked undergraduate computing students what struggles they experienced and how they solved or coped with those struggles. This paper reports on this survey with N = 201 responses from 45 US institutions. Our analysis identified two primary factors in student concerns: social/personal and academic/structural. Many of the top concerns of students aligned with trends in CER literature; however, there were issues, particularly those related to structural factors (curriculum, lengthy course content, lack of institutional support, and unenthusiastic or uninterested professors), that have little representation in CER literature. Many of these issues may have temporal context, suggesting ongoing data collection from students may help identify appropriate directions and trends for CER. Students also related ways they navigate these issues with social support systems and academic resources, which emphasize the importance of providing students with strong academic/personal support. While we know students struggle in many ways, this survey provides evidence that further research in areas that are pressing today may be under-addressed.
Lightning Talks
Integrating Professional Identity Development into Large-Scale CS First Year Seminar Courses
Kristine Nagel, Aibek Musaev
Abstract
This lightning talk presents an initiative to integrate professional identity development activities into a large enrollment undergraduate CS First Year Seminar course. This CS-majors-only, required, one-credit-hour course is an opportunity to excite students about computing as a profession. In fall 2024, inspired by Fink’s Significant Learning Theory, where multiple dimensions contribute to changing how a student lives their personal and professional life, we added a reflection assignment and an external engagement requirement to foster professional identity. This experiential learning required three different types of campus activities outside the classroom: resume and interview preparation, networking with professionals, and student presentations. The first month students are required to participate in a Career Services workshop for resumes, interviewing, or decoding job postings. The following month, students participate in one of several corporate partner activities or workshops, and the final month they attend a student project expo or research poster presentation. The last assignment has each student write a personal mission statement and write a letter to their current self from their sixty-year-old self! The letters and missions are emailed to the students the next semester, to remind them of their larger goals as a professional. What are next steps for measuring influence and adjusting course activities? We can require that a student attend and write a reflection, but how do we determine whether these promote learning to value computing as your professional identity? Are there practical measures to increase potential for integrating excitement for computing?
Beyond Traditional Exams: Student-Created Podcasts for Collaborative Learning in Computing Education
Pedro Guillermo FeijĂło-GarcĂa, Lucas Guarenti Zangari
Abstract
As computing cohorts grow in scale, designing assessments that foster communication, collaboration, and meaningful learning becomes increasingly challenging. This lightning talk shares a pedagogical strategy implemented in a large-scale (over 300 students) software design and engineering course at a Southeastern university in the United States. In this iteration, teams of computer science (CS) students created two video-recorded podcast chapters, each 30 to 60 minutes long, as part of a conceptual assessment. Each chapter focused on a key topic in software design and engineering, one on three design patterns (e.g., the Factory Method Pattern) and another on code smells, refactoring, and test-driven development. Podcasts, now widely popular across streaming platforms, offered students an accessible and creative medium to communicate technical concepts while practicing professional collaboration. The team-based format aimed to promote interaction, reinforce conceptual understanding, and develop communication skills as all members planned, scripted, and discussed each episode together. Rather than focusing only on outcomes, this activity emphasized process and reflection as students explained, questioned, and built on one another’s ideas. This lightning talk is presented to gather feedback from the computing education community on this instructional strategy and possibly collaborate on further extensive research on student-created artifacts in large-scale CS courses.
When AI Meets the Clock: Rethinking Learning and Assessment in Large-Scale Computing Courses
Pedro Guillermo FeijĂło-GarcĂa, Lucas Guarenti Zangari, Fisayo Omojokun
Abstract
As artificial intelligence (AI) tools like ChatGPT become more common, their role in computer science (CS) education continues to evolve, especially in large courses with time-limited assessments. This lightning talk presents an observation from a large-scale (over 300 students) introductory software design and engineering course at a university in the Southeastern United States. During a 30-minute, open-notes assessment where students were allowed to use generative AI, they were asked to extend one user story in an existing codebase they had previously built for the course. The task followed an all-or-nothing grading approach that required a fully functional user story implementation for credit. Although students often support using AI for learning, their reactions revealed a gap between what they expected AI to do and the actual thinking required to solve real problems quickly: Some students even expressed frustration, noting that AI tools offered little help under time pressure. To close the experience, we held a reflection lecture where students analyzed the role of AI as a tool and discussed its purpose in supporting augmented intelligence rather than replacing human reasoning. This case illustrates how assessment design can expose the limits of generative AI as a learning aid and highlights the importance of helping students build awareness of time, effort, and reflection when using these tools. The goal of this talk is to share these insights, invite discussion on AI use under time constraints, and explore how students adapt or resist adapting when AI cannot “think fast enough” for them.
LLMTutorBench: A Benchmark for University-level TCS AI Tutoring Systems
Anant Gupta, Hieu Nguyen, Carine G Webber, Justin Stevens, Abrahim Ladha, Sanika Ainchwar, Vijay Ganesh
Abstract
Large Language Models (LLM) are transforming Intelligent Tutoring Systems (ITS) via more natural explanations, multi-turn dialogue, and more adaptive support for students. Yet their effectiveness depends on rigorous benchmarking to ensure reliability, fairness, and pedagogical soundness. Such benchmarking relies on detailed student data, especially data that accurately reflect the actual distribution of wrong answers and misconceptions. A robust dataset of domain-specific wrong answers and misconceptions is critical for the ITS research community. Such a dataset enables training and testing of LLM-based ITS designed to correct misconceived student responses and guide students appropriately. Unfortunately, in advanced areas such as Theoretical Computer Science (TCS), such data are scarce, costly to collect, and limited by privacy concerns.
To address this problem, we propose a synthetic data generation technique grounded in real-world data. Our method works as follows: we curate a set of human-generated (question, answer, misconception) tuples to seed an LLM with the goal of generating a corpus of incorrect answers that resemble the kinds of mistakes students make while solving undergraduate-level math and algorithmic problems. We then prompt the LLM to generate a synthetic dataset with similar distribution of mistakes. Once such a technique has been validated on a math topic, we can easily transfer it over to others. Our goal is to lay the groundwork for scalable benchmarks that enable rigorous evaluation and broader adoption of LLM-based tutoring systems in the most conceptually demanding areas of computer science education, namely, theoretical computer science.
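A minimal sketch of that seeding step might look like the following. The seed tuple and prompt template are invented for illustration, and build_prompt is a hypothetical helper, not the authors' actual pipeline.

```python
# Sketch of few-shot seeding: real (question, answer, misconception) tuples
# seed a prompt that asks an LLM for plausible incorrect student answers.

SEED_TUPLES = [
    {
        "question": "Is the language {a^n b^n : n >= 0} regular?",
        "answer": "No; a DFA cannot count matching a's and b's.",
        "misconception": "Students claim yes because the regex a*b* seems to match.",
    },
]

PROMPT_TEMPLATE = """You generate realistic *incorrect* student answers.
Here are real examples of questions, correct answers, and common misconceptions:

{examples}

For the question below, produce 5 incorrect answers a student might give,
each reflecting a distinct plausible misconception.

Question: {question}
"""

def build_prompt(question):
    """Render the few-shot prompt for one target question."""
    examples = "\n".join(
        f"Q: {t['question']}\nA: {t['answer']}\nMisconception: {t['misconception']}"
        for t in SEED_TUPLES
    )
    return PROMPT_TEMPLATE.format(examples=examples, question=question)
```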
Doctoral Consortium
What skills do students need to use programming environments?
Idel Martinez-Ramos
Abstract
Students in programming courses are expected to learn a new programming language while simultaneously mastering programming environments. These environments are essential for designing, building, and testing programs and include integrated development environments, text editors, and command-line interfaces. Previous research in Computer Science Education has focused on the challenges of learning to program. However, limited research has addressed the challenges students face with programming environments. To bridge this gap, I conducted interviews with 15 undergraduate students and surveyed 300 students at two R1 institutions, finding that the majority of them faced multiple challenges with programming environments. Students also shared that programming courses do not provide foundational instruction on programming environments, so they often relied on online sources, AI agents, and friends to solve these challenges. To better equip students in programming classes, we must further integrate content on programming environments into the curriculum. However, we do not know the foundational knowledge needed for effective use and problem-solving of programming environments. In my dissertation, I aim to identify the foundational knowledge and skills needed for effective use of programming environments by conducting interviews and collaborating with educators, researchers, and programmers. These conversations will help me gain a deeper understanding of their mental models and the skills they employ when setting up, using, and debugging programming environments, so that they can be broken down and scaffolded for the curriculum.
Demo
Adaptive Skill-Mastery Feedback Loops in AI-Generated Courses
Aibek Musaev, Kerimbek Musaev, Mirbek Dzhumaliev, Calton Pu
Abstract
This demo presents the next-generation version of KimBilet.com, an educational platform that leverages generative AI to create adaptive, mastery-driven learning experiences. Building on last year’s personalized course generation, the new system introduces a fine-grained skill taxonomy and a feedback loop that evaluates and responds to learner performance in real time.
The adaptive algorithm begins with a user’s prompt to generate a topic and a taxonomy of skills needed to master that topic. For each skill, the system creates a focused lesson and a quiz. At the end of the course, learners receive a mastery profile displaying their performance across all skills. Based on these results, the system dynamically adapts: skills mastered with high accuracy are marked complete, while weaker areas trigger the generation of additional remedial lessons and quizzes until mastery is achieved.
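In condensed Python, the loop described above might look like this. The mastery threshold and the generator callables are placeholders standing in for the platform's generative-AI services, not KimBilet.com's actual API.

```python
# Hypothetical sketch of the adaptive skill-mastery feedback loop.

MASTERY_THRESHOLD = 0.8  # assumed cutoff for marking a skill complete

def run_course(topic, gen_taxonomy, gen_lesson, gen_quiz, deliver):
    """First pass over every skill, then remediate weak skills until mastery.

    deliver(lesson, quiz) presents material to the learner and returns
    quiz accuracy in [0, 1].
    """
    skills = gen_taxonomy(topic)
    # First pass: one focused lesson and quiz per skill in the taxonomy.
    profile = {s: deliver(gen_lesson(s), gen_quiz(s)) for s in skills}
    # Feedback loop: regenerate remedial material for weak skills.
    while any(score < MASTERY_THRESHOLD for score in profile.values()):
        for skill, score in list(profile.items()):
            if score < MASTERY_THRESHOLD:
                profile[skill] = deliver(gen_lesson(skill), gen_quiz(skill))
    return profile  # the learner's mastery profile across all skills
```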
This feedback loop transforms static AI-generated lessons into personalized, mastery-based learning pathways. The demo will showcase how KimBilet.com combines generative AI, adaptive skill assessment, and iterative content refinement to engage learners across domains such as standardized test preparation, academic coursework, and professional upskilling. Participants will explore how this approach supports scalable, individualized education and discuss its potential applications in diverse learning environments.


See you in St. Louis!
Development: College of Computing
Project and Web Lead/Data Graphics: Joshua Preston
Web Support: Joni Isbell
Featured Research News: Emily Smith
Featured Photography: Kevin Beasley, Terence Rushin, Emily Smith
Data: https://sigcse2026.sigcse.org/