EXPERT VOICES

Georgia Tech's leading experts in machine learning share insight into the field, how their work is breaking new ground, and what comes next for artificial intelligence.

JUDY HOFFMAN

Assistant Professor, Interactive Computing

Data is everywhere today. We take photos of our vacations, track our steps, and use natural language to search for information.

Machine learning, or more simply, learning from data, allows us to leverage the vast amounts of ever-emerging data to build tools that improve people's lives.

We can organize and edit our photos, suggest health and fitness goals, and provide answers to user-posed questions.

One of the key challenges today is that the best-performing models are highly data hungry and compute intensive. This limits the ability of individuals, small businesses, and even researchers to use the best algorithms and models for their emerging applications.

ICML is a great place to learn about cutting-edge advances in improving the data and compute efficiency of machine learning systems, including our work on a new transformer-based architecture, Hiera.
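To make the efficiency theme concrete, the sketch below shows the general pattern behind hierarchical vision transformers such as Hiera: later stages attend over a pooled, smaller token grid with wider channels, which keeps attention costs down. This is a toy illustration of that pattern, not the actual Hiera architecture or code.

```python
# Illustrative sketch only: a toy hierarchical vision transformer that
# shrinks the token grid between stages so later (wider) stages attend
# over fewer tokens. Not the actual Hiera model.
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One transformer stage: self-attention + MLP over a token sequence."""
    def __init__(self, dim, heads):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

class ToyHierarchicalViT(nn.Module):
    """Pool 2x between stages: fewer tokens, wider channels, cheaper attention."""
    def __init__(self, dim=96, heads=3):
        super().__init__()
        self.stage1 = Stage(dim, heads)
        self.proj = nn.Linear(dim, 2 * dim)      # widen channels
        self.pool = nn.MaxPool1d(kernel_size=2)  # halve the token count
        self.stage2 = Stage(2 * dim, 2 * heads)

    def forward(self, tokens):                   # tokens: (B, N, dim)
        x = self.stage1(tokens)
        x = self.pool(self.proj(x).transpose(1, 2)).transpose(1, 2)
        return self.stage2(x)

model = ToyHierarchicalViT()
out = model(torch.randn(2, 196, 96))             # e.g. 14x14 patch tokens
print(out.shape)                                 # torch.Size([2, 98, 192])
```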

ZSOLT KIRA

Assistant Professor, Interactive Computing

As Artificial Intelligence (AI) begins to pervade society, it is important to understand how such AIs can learn to interact with each other and humans.

AI will never exist in isolation; rather, it will be part of an ecosystem full of diverse decision-makers of varying intelligence.

As an example, our work looks at how a house-tidying robot can interact with a range of human and AI partners, including ones it has never worked with before. We address this by learning a diverse set of partners for the robot to practice with, so that it can coordinate effectively with new partners, including ones that are not helpful or cooperative. I am excited about these and other works that incorporate learning within the context of social interaction.
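Here is a minimal sketch of the practice-with-a-diverse-population idea in a toy one-shot coordination game. The game, the randomly sampled partners, and the best-response rule are illustrative assumptions, not the actual environment or training algorithm.

```python
# Toy sketch of practicing against a diverse population of partners,
# then facing a partner never seen during practice.
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 4
payoff = np.eye(N_ACTIONS)  # reward only when both pick the same action

def sample_partner():
    """A 'partner' is a random mixed strategy over actions; sampling many
    of them stands in for learning a diverse training population."""
    return rng.dirichlet(np.ones(N_ACTIONS))

population = [sample_partner() for _ in range(10)]

# Best response to the population: commit to the action that maximizes
# average expected payoff across all practice partners.
avg_partner = np.mean(population, axis=0)
robot_policy = np.zeros(N_ACTIONS)
robot_policy[np.argmax(avg_partner)] = 1.0

held_out = sample_partner()  # a partner never seen during practice
score = robot_policy @ payoff @ held_out
print(f"expected coordination score vs. new partner: {score:.3f}")
```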

DHRUV BATRA

Associate Professor, Interactive Computing

There is a lot of public commentary, and unfortunately fear-mongering, around runaway scenarios and misaligned Artificial General Intelligence (AGI) systems.

Much of the conversation is grounded in speculation. No scientific theory has enough reach to predict the content of its successors. 

As physicist and philosopher David Deutsch says in his book "The Beginning of Infinity," beware of the difference between prediction and prophecy. Prophecy purports to know things that cannot be known, and prophecy about the future serves no one well.

We have hard problems to solve, but problems have solutions. Solving them creates new problems, which in turn are soluble.

At ICML, my lab at Georgia Tech, together with collaborators at FAIR, is presenting research on zero-shot human-robot collaboration, i.e., training agents to collaborate with new partners.
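One standard way to measure zero-shot collaboration is cross-play: pair agents that never trained together and average the scores of those unfamiliar pairings. The sketch below uses stand-in agents and a toy scoring rule rather than the actual experimental setup.

```python
# Rough sketch of cross-play evaluation: off-diagonal pairings measure
# how well agents collaborate with partners they never trained with.
import numpy as np

rng = np.random.default_rng(1)
N_AGENTS, N_ACTIONS = 4, 3
# Each independently "trained" agent is a mixed strategy over actions.
agents = rng.dirichlet(np.ones(N_ACTIONS), size=N_AGENTS)

def collaboration_score(a, b):
    """Toy score: probability the pair picks matching actions."""
    return float(a @ b)

cross_play = np.array([[collaboration_score(a, b) for b in agents]
                       for a in agents])
# Exclude the diagonal (self-play) to isolate zero-shot performance.
zero_shot = cross_play[~np.eye(N_AGENTS, dtype=bool)].mean()
print(f"mean zero-shot (cross-play) score: {zero_shot:.3f}")
```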

BO DAI

Assistant Professor, Computational Science and Engineering

The recent advancements in generative AI have unlocked a realm of exciting possibilities for the future. With rapid progress, the frontiers of AI are expanding, empowering individuals in various creative endeavors. However, a critical next step toward responsible general intelligence is the development of “decision AI,” which helps people determine courses of action in their work.

At Georgia Tech, we are deeply involved in the pursuit of “decision AI.”

Our focus is on bridging the current gaps in AI by incorporating state-of-the-art planning techniques, optimal control strategies, and contextual comprehension into AI systems. Through these efforts, we aim to shape the future of AI as a valuable tool that assists human decision-making processes and generates positive societal impact.
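As one concrete flavor of planning plus optimal control, the sketch below runs a toy receding-horizon loop: at each step it samples candidate action sequences, scores them against assumed dynamics and costs, and executes only the first action of the best sequence. The dynamics, costs, and horizon here are all illustrative assumptions, not our actual systems.

```python
# Toy receding-horizon (MPC-style) decision loop for a 1D system.
import numpy as np

rng = np.random.default_rng(0)
TARGET, HORIZON, N_CANDIDATES, STEPS = 5.0, 8, 256, 20

def rollout_cost(x0, actions):
    """Simulate x_{t+1} = x_t + a_t, penalizing distance to the target
    plus a small control-effort term."""
    x, cost = x0, 0.0
    for a in actions:
        x = x + a
        cost += (x - TARGET) ** 2 + 0.1 * a ** 2
    return cost

x = 0.0
for t in range(STEPS):
    # Sample candidate action sequences and keep the cheapest rollout.
    candidates = rng.uniform(-1.0, 1.0, size=(N_CANDIDATES, HORIZON))
    best = min(candidates, key=lambda seq: rollout_cost(x, seq))
    x = x + best[0]  # execute only the first planned action, then replan
print(f"final state after planning: {x:.2f} (target {TARGET})")
```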

ROSA ARRIAGA

Associate Professor, Interactive Computing

Large language models are touted as approaching human-like performance on a variety of human intelligence tasks. But are LLMs really “performing” this well?

We are interested in putting this question to a systematic analysis. To address this challenge, we introduce the Turing Experiment methodology. Unlike Turing's Imitation Game, which involves simulating a single arbitrary individual, a Turing Experiment requires simulating a diverse sample of participants in human subject research and determining how well the simulation results align with human results.
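As a rough illustration of this protocol, the sketch below simulates a small set of participant personas, queries a stubbed language model for each, and compares the aggregate answer rate to a placeholder human baseline. `query_model`, the personas, the question, and the baseline number are all assumptions for illustration, not the actual study.

```python
# Sketch of a Turing Experiment: simulate many distinct participants,
# collect responses, and compare the distribution to human data.
import random

random.seed(0)
PARTICIPANTS = [{"name": n, "age": random.randint(18, 80)}
                for n in ["Alice", "Bob", "Carmen", "Dev", "Elena", "Farid"]]

def query_model(prompt: str) -> str:
    """Stub standing in for a real LLM call; replace with an actual model."""
    return random.choice(["yes", "no"])

def simulate(participant: dict, question: str) -> str:
    # Each prompt conditions the model on a distinct simulated participant.
    prompt = (f"{participant['name']}, age {participant['age']}, "
              f"is asked: {question} They answer:")
    return query_model(prompt)

question = "Would you return a lost wallet?"
answers = [simulate(p, question) for p in PARTICIPANTS]
sim_rate = answers.count("yes") / len(answers)

human_rate = 0.72  # placeholder for the empirical human result
print(f"simulated yes-rate {sim_rate:.2f} vs. human yes-rate {human_rate:.2f}")
```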

Initial findings from this human-centered approach show that Turing Experiments can replicate experimental findings, show sensitivity to group differences, and identify distortions. 

Today, a major risk in replacing human studies with simulations is that the simulations might reflect the biases of the authors of the model's training data rather than the actual behavior of human populations.

Thus, comparing Turing Experiment results to empirical human results can be useful in identifying these distortions. In the long run, language model-based simulations may be a useful alternative when it is costly to carry out experiments on humans due to scale, selection bias, monetary cost, legality, morality, or privacy. They can also lead to AI approaches that meet the needs of groups of humans as opposed to taking a one-model-fits-all approach.

FLORIAN SCHÄFER

Assistant Professor, Computational Science and Engineering

Numerical computation and statistical reasoning are critical tools to make sense of the world around us.

Right now, we see a fascinating convergence between the two, where classical numerical methods inform machine learning algorithms that, in turn, accelerate numerical computation.
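One small, self-contained instance of that convergence: gradient descent, machine learning's workhorse optimizer, applied to the discretized energy of a 1D Poisson problem is exactly Richardson iteration, a classical linear solver. The problem size, step size, and iteration count below are illustrative choices.

```python
# Gradient descent on the discretized Dirichlet energy of -u'' = f
# coincides with Richardson iteration, a classical iterative solver.
import numpy as np

n, h = 64, 1.0 / 65
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)  # right-hand side

# Tridiagonal stiffness matrix for -u'' with zero boundary conditions.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = np.zeros(n)
lr = h**2 / 4  # step size small enough for stability
for _ in range(20000):
    grad = A @ u - f   # gradient of the energy 0.5*u^T A u - f^T u
    u -= lr * grad     # gradient descent step == Richardson iteration

exact = np.sin(np.pi * x) / np.pi**2  # analytic solution of -u'' = sin(pi x)
print(f"max error: {np.abs(u - exact).max():.2e}")
```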

YINGYAN (CELINE) LIN

Associate Professor, Computer Science

As AI technology rapidly evolves, enhancing its efficiency, scalability, and safety has become paramount.

At ICML, we’ll present two exciting works advancing this frontier. Our first improves automatic speech recognition (ASR) with a more scalable solution that supports more languages while reducing training and storage requirements.
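One common recipe for that kind of scalability, offered here as an illustrative sketch rather than the paper's actual method, is to freeze a shared backbone and store only a tiny trainable adapter per language, so adding a language costs little extra training or storage.

```python
# Illustrative sketch (not the actual paper): a frozen shared backbone
# plus a small per-language adapter keeps per-language storage tiny.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small bottleneck module; this is all that is stored per language."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual adapter

dim = 256
backbone = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
for p in backbone.parameters():
    p.requires_grad = False  # shared weights stay frozen

adapters = {lang: Adapter(dim) for lang in ["en", "es", "sw"]}

features = torch.randn(4, dim)            # stand-in for acoustic features
out = adapters["sw"](backbone(features))  # route through one language's adapter

shared = sum(p.numel() for p in backbone.parameters())
per_lang = sum(p.numel() for p in adapters["en"].parameters())
print(f"shared params: {shared}, extra per language: {per_lang}")
```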

Our second uncovers potential security vulnerabilities in Generalizable Neural Radiance Fields (GNeRF) and introduces two tools, NeRFool and NeRFool+, to enable safer real-world GNeRF applications.
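To sketch the kind of vulnerability at play, the snippet below runs an FGSM-style attack that perturbs the source views along the gradient that increases rendering error. The `render` function is a toy stand-in, not an actual GNeRF pipeline, and the views and budget are illustrative.

```python
# Very rough sketch of the attack surface: adversarially perturb the
# source views fed to a (toy stand-in for a) generalizable NeRF.
import torch

def render(source_views: torch.Tensor) -> torch.Tensor:
    """Toy differentiable 'renderer': averages source views. A real GNeRF
    would aggregate view features along camera rays instead."""
    return source_views.mean(dim=0)

source_views = torch.rand(3, 32, 32, 3, requires_grad=True)  # 3 input views
gt_view = torch.rand(32, 32, 3)  # stand-in for the ground-truth novel view

loss = torch.nn.functional.mse_loss(render(source_views), gt_view)
loss.backward()

# Ascend the gradient: maximize rendering error within a small pixel budget.
eps = 2.0 / 255
adversarial = (source_views + eps * source_views.grad.sign()).clamp(0, 1)
print("rendering the perturbed views now degrades the novel view")
```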

These works underscore our commitment to shaping a more efficient and robust AI landscape.