
Georgia Tech at NeurIPS 2021

The Conference on Neural Information Processing Systems (NeurIPS) represents some of the leading work in machine learning and is the largest annual gathering of researchers in the field. Georgia Tech research at the 2021 event (Dec. 6-14) showcases the institute’s diversity of talent and its contributions to advancing the state of the art.

Explore our interactive virtual experience of Georgia Tech research at NeurIPS and see where the future of machine learning leads.

Start Your NeurIPS Experience

Welcome to your front row Georgia Tech seats in the main NeurIPS program. The interactive schedule chart shows when you can talk live with speakers.

NeurIPS 2021 Lead Author Spotlight

Ran Liu, PhD student in Machine Learning

Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

Friday, Dec. 10, 11:30 a.m. EST

Oral

#COMPUTATIONAL NEUROSCIENCE
We introduced a novel unsupervised approach for learning disentangled representations of neural activity called SwapVAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
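For readers who want a concrete picture, the following is a minimal sketch of the objective described above, assuming a simple PyTorch-style encoder and decoder: a reconstruction term keeps the model generative, while an alignment term between two transformed views of the same brain state encourages their representations to match. The function and parameter names here are illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch of a generative + alignment objective (illustrative only).
import torch.nn.functional as F

def generative_plus_alignment_loss(encoder, decoder, view1, view2, align_weight=1.0):
    z1 = encoder(view1)   # latent code for the first view of the brain state
    z2 = encoder(view2)   # latent code for the second (transformed) view
    # Generative modeling term: reconstruct each view from its own latent code.
    recon_loss = F.mse_loss(decoder(z1), view1) + F.mse_loss(decoder(z2), view2)
    # Instance-specific alignment term: pull the two views' representations together.
    align_loss = F.mse_loss(z1, z2)
    return recon_loss + align_weight * align_loss
```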

Q&A with Ran Liu

What motivated your work on this paper?

Brain activity is often complex and noisy, yet neuroscientists believe that a low-dimensional neural representation governing these signals exists. The biggest motivation for this project was to find a low-dimensional representation space that could ‘explain’ the neural signals. Our work, SwapVAE, presents an initial step toward this goal by combining self-supervised learning (SSL) techniques with a generative modeling framework to learn interpretable representations of neural activity.

If readers remember one takeaway from the paper, what should it be and why?

It should be our latent-space augmentation operation, BlockSwap. BlockSwap makes the latent representation more interpretable by separating it into augmentation-invariant and augmentation-variant information and swapping the invariant part between views before reconstruction. We hope BlockSwap can be applied in other scenarios where interpretability matters.
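To make the operation concrete, here is an illustrative PyTorch-style sketch of a BlockSwap-like swap, assuming each latent code is laid out with the augmentation-invariant block in its first `content_dim` dimensions; the names and split convention are assumptions for illustration, not the paper’s code. The swapped codes would then be passed to the decoder for reconstruction.

```python
import torch

def block_swap(z1: torch.Tensor, z2: torch.Tensor, content_dim: int):
    """Swap the first `content_dim` (augmentation-invariant) dimensions of two latent codes."""
    c1, s1 = z1[:, :content_dim], z1[:, content_dim:]  # view 1: invariant block, variant block
    c2, s2 = z2[:, :content_dim], z2[:, content_dim:]  # view 2: invariant block, variant block
    z1_swapped = torch.cat([c2, s1], dim=1)  # view 1 keeps its variant part, takes view 2's invariant part
    z2_swapped = torch.cat([c1, s2], dim=1)  # and vice versa
    return z1_swapped, z2_swapped
```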

Were there any “aha” moments or lessons that you’ll use to inform your future work?

My lesson was summarized 2000 years ago by Aristotle: “For the things we have to learn before we can do them, we learn by doing them.”

What are you most excited for at NeurIPS and what do you hope to take away from the experience?

I am excited to check out other cutting-edge works! I hope to learn from the computational neuroscientists to see how they approach similar problems, and am also excited to learn from other deep learning scientists for inspiration.

Ran with her cat Tigger, who is currently pursuing a master of catputer science degree.

Meet More Lead Authors

Making Sense of the Brain

Eva Dyer is at the forefront of the surge in computational neuroscience research at Georgia Tech

When someone asks Eva Dyer what she does for a living, she has a short and simple answer: “I try to teach machines how to understand the brain.” 

As the principal investigator of the Neural Data Science Lab — or NerDS Lab — at the Georgia Institute of Technology, she leads a diverse team of researchers in developing machine learning approaches to analyze and interpret massive, complex neural datasets. At the same time, they are designing better machines, inspired by the organization and function of biological brains.

Dyer’s lab earned a prestigious spot as an oral presenter at NeurIPS, the Conference on Neural Information Processing Systems.

The work they’re presenting — about a new set of tools in self-supervised learning, a method of machine learning that more closely imitates how humans classify objects — is the NerDS Lab’s latest contribution in addressing one of the biggest challenges in neuroscience: finding simplified representations of neural activity that allow for greater insights into the link between the brain and behavior. 

Full Story

Georgia Tech research at NeurIPS is pushing the boundaries of machine learning. Discover the people and get full access to details and papers in the main program, datasets and benchmarks, workshops, and more.

Explore

Research Insights

NeurIPS 2021 Georgia Tech lead authors include 22 graduate students from across the institute.
The chart groups papers with Georgia Tech authors by the lead author’s institution; Georgia Tech is the lead institution on 20 of the 43 papers with Georgia Tech contributors.

Machine Learning Center at Georgia Tech

The Machine Learning Center was founded in 2016 as an interdisciplinary research center (IRC) at the Georgia Institute of Technology. Since then, we have grown to include over 190 affiliated faculty members and 145 Ph.D. students, all publishing at world-renowned conferences. The center aims to research and develop innovative and sustainable technologies using machine learning and artificial intelligence (AI) that serve our community in socially and ethically responsible ways. Our mission is to establish a research community that leverages the Georgia Tech interdisciplinary context, trains the next generation of machine learning and AI pioneers, and is home to current leaders in machine learning and AI.

Read more
