New Research from Georgia Tech and DeepMind Shows How to Slow Down Graph-Based Networks to Boost Their Performance

By Joshua Preston

Georgia Tech and DeepMind researchers have developed a novel graph augmentation technique called Half-Hop that focuses on enhancing message passing neural networks, which are designed to operate on graph-structured data and capture complex interactions. Half-Hop introduces specialized "slow nodes" along a graph's edges to delay communication between neighboring nodes. Slowing down message passing in this way helps keep a graph model from inadvertently lumping nodes into unsuitable groups, a failure mode that can arise in large networks.
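The core idea can be sketched in a few lines. This is an illustrative reconstruction, not the authors' reference code: it assumes Half-Hop reroutes each edge, with some probability, through a newly created slow node, so a message needs two hops instead of one to cross that edge. Details such as how slow-node features are initialized are omitted here.

```python
import random

def half_hop(edges, num_nodes, p=0.5, seed=0):
    """Illustrative Half-Hop-style augmentation (a sketch, not the
    paper's implementation): with probability p, a directed edge
    (u, v) is rerouted through a new slow node w, so the message
    from u takes two hops (u -> w -> v) to reach v."""
    rng = random.Random(seed)
    new_edges = []
    next_node = num_nodes  # ids assigned to newly created slow nodes
    for u, v in edges:
        if rng.random() < p:
            w = next_node
            next_node += 1
            new_edges.append((u, w))  # message is delayed at w
            new_edges.append((w, v))
        else:
            new_edges.append((u, v))  # edge left unchanged
    return new_edges, next_node  # augmented edge list, new node count
```

Because the augmentation only rewrites the edge list, it can be applied to a graph before any standard message passing network is trained, which is what makes it a plug-and-play addition.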

Social media applications, as an example, might benefit from Half-Hop to better predict users’ preferences or group similar individuals. This could lead to better targeting of content and advertising for platforms and potentially limit more negative material making its way into user feeds.

“The results of this study showcase the potential of Half-Hop in a wide array of real-world applications and across various graph benchmarks,” says Mehdi Azabou, lead researcher and Ph.D. student in the Machine Learning Ph.D. Program at Georgia Tech.

Researchers conducted experiments on 11 real-world graph datasets, covering both supervised learning and self-supervised learning (SSL) scenarios, with the findings demonstrating the robustness and versatility of Half-Hop. In scenarios where more heterogeneous entities interact (heterophilic settings), the inclusion of Half-Hop leads to a significant performance boost, comparable to specialized models tailored to such conditions. Additionally, in SSL, Half-Hop outperforms state-of-the-art graph representation learning methods. Half-Hop can be effectively utilized either as a standalone augmentation or in conjunction with existing augmentations.

“We are excited about Half-Hop’s potential as a practical, plug-and-play augmentation that seamlessly integrates into existing workflows,” says Eva Dyer, a team member and associate professor in the Coulter Department of Biomedical Engineering.

Graph-based data augmentation has proven to be a crucial factor in the success of SSL. However, unlike other domains like computer vision, the pool of augmentations that work effectively for graphs is limited, according to the researchers.

“Half-Hop emerges as a simple yet powerful addition to the toolkit of graph augmentations,” says Azabou. “It demonstrates impressive performance across a wide range of datasets, making it a valuable asset for researchers and practitioners in various fields.”

Azabou says further studies are needed to understand the specific invariances introduced into the representation under Half-Hop. The team aims to investigate how this augmentation performs in downstream tasks, such as link prediction or graph classification.

The research, which opens new avenues for graph-based data augmentation, leverages insights from recent work on the effects of data augmentations on model generalization. This could ultimately lead to the discovery of even more innovative augmentations to shape the future of graph-based learning.

The new research is accepted as part of the 2023 proceedings of the International Conference on Machine Learning. Team members on the work include Azabou, Venkataramana Ganesh, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, and Dyer from Georgia Tech and Shantanu Thakoor, Michal Valko, and Petar Veličković from DeepMind.

The project is supported by NIH award 1R01EB029852-01, NSF awards IIS-2212182 and IIS-2146072, as well as generous gifts from the Alfred P. Sloan Foundation (ELD), the McKnight Foundation (MA, ELD), and the CIFAR Azrieli Global Scholars Program (ELD).