This tutorial is a focused, hands-on introduction to the rapidly evolving world of large language models (LLMs) for scientific research. Aimed at graduate students in disciplines such as chemistry, physics, materials science, computational biology, engineering, and related fields, the tutorial provides a practical and accessible entry point to applying LLMs in research workflows. Participants will learn how to use LLMs for inference, agentic workflows, and fine-tuning, with step-by-step guidance designed for those who are curious about AI but are not yet using advanced frameworks or training methods.

Led by a team of experts in areas such as neuro-symbolic AI, quantum chemistry, physics, astronomy, and particle physics, the program combines high-level insights with interactive tutorials. Participants will see real-world applications of LLMs across disciplines and work through guided exercises on inference, agentic systems, and fine-tuning. This one-day tutorial will help you start working confidently with LLMs for science, without the distraction of rapidly changing trends.

By the end of the tutorial, participants will have both the conceptual understanding and practical skills to start incorporating LLMs into their own scientific research.

Registration Form

Google Form

Seats are limited, so please register early to secure your place.

Audience

Who is it for?

This tutorial is primarily aimed at PhD students, postdocs, scientists, and professors in scientific disciplines such as physics, chemistry, materials science, computational biology, engineering, and related fields who are interested in applying LLM-based techniques to their research, including those curious about agentic workflows and LLM-assisted coding or analysis.

Prerequisites:

  • Basic understanding of Python programming.
  • Basic command line experience.
  • Very basic familiarity with machine learning concepts (for example, you have heard of supervised learning or reinforcement learning and know at a high level what they mean). No prior experience with LLM fine-tuning is required.
  • Verify your student status for GitHub Copilot (this must be done before the tutorial day to ensure access is granted; Link)
  • Software installation ahead of the afternoon session:
    • Ollama (Link)
    • Node package manager (npm; Node Version Manager (nvm) is suggested if you do not already have it installed)
    • miniconda3 for Python environments (Link)
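Once the tools above are installed, a quick terminal sanity check can confirm they are on your PATH before the afternoon session. This is a minimal sketch; the command names checked (e.g. `conda` from miniconda3, `node`/`npm` from an nvm install) are assumptions based on the list above.

```shell
# Pre-tutorial environment check: prints OK or MISSING for each tool.
# Assumes the installs above put these commands on your PATH.
for tool in ollama node npm conda python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK:      $tool"
  else
    echo "MISSING: $tool"
  fi
done
```

If anything reports MISSING, open a new terminal first (installers often modify your shell profile) before reinstalling.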

Who is it not for?

Researchers who are already actively using high-speed inference frameworks (such as vLLM) and advanced fine-tuning techniques (LoRA, ReFT, RLHF) in their own scientific work, or who already rely heavily on agentic/AI coding workflows.

Venue, Date, and Time

Klaus 1116, Klaus Advanced Computing Building (KACB), Georgia Tech
3 October 2025, 9 AM – 5 PM (ET)

Live links

Tutorial 1 GitHub link
Tutorial 2 Colab Link 1 (Data, RL), Colab Link 2 (Pre-training, SFT)