Projects

Hume’s Leaf Warbler Bird Behavior

The goal of this project is to analyze the nesting and breeding patterns of Hume’s Leaf Warbler by developing an object detection model that tracks the presence of the bird in camera trap footage. Cameras have been placed near numerous Warbler nests to capture continuous footage of their activity. By detecting when and how often the Warbler appears in the footage, researchers can infer patterns in nest visitation and overall breeding behavior.

The model is trained using annotated frames that indicate whether the bird is present or absent. These annotations are used to fine-tune the detection model until it can reliably identify the Warbler’s presence in new footage. The resulting data enables the team to uncover meaningful patterns in the species’ nesting activity, which can inform future ecological research and conservation efforts.
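Once the detector outputs per-frame presence/absence, those detections have to be summarized into visitation events before patterns can be inferred. The sketch below is one plausible way to do that, not the project's actual pipeline: the function and parameter names (`group_visits`, `max_gap_s`) are hypothetical, and the gap threshold for merging detections into a single visit would need tuning against the real footage.

```python
from dataclasses import dataclass

@dataclass
class Visit:
    start_s: float  # visit start time in seconds
    end_s: float    # visit end time in seconds

def group_visits(detections, fps=30.0, max_gap_s=5.0):
    """Group per-frame presence detections into visitation bouts.

    `detections` is a list of booleans, one per frame, where True means
    the warbler was detected in that frame. Detections separated by gaps
    shorter than `max_gap_s` seconds are merged into a single visit.
    """
    visits = []
    current_start = None
    last_seen = None
    for i, present in enumerate(detections):
        t = i / fps
        if present:
            if current_start is None:
                current_start = t          # first detection of a new visit
            elif t - last_seen > max_gap_s:
                # gap too long: close the previous visit, start a new one
                visits.append(Visit(current_start, last_seen))
                current_start = t
            last_seen = t
    if current_start is not None:
        visits.append(Visit(current_start, last_seen))
    return visits
```

From the resulting list, visit frequency (visits per hour) and visit durations follow directly, which is the kind of nest-visitation summary the paragraph above describes.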

Stone Mountain Species Animal Detection

This project focuses on identifying species in camera trap images using computer vision. Five motion-activated camera traps have been set up in different ecosystems across Stone Mountain. A team of researchers previously tried publicly available ML pipelines to identify individual animals to species, but found that approach more time-consuming and less accurate than manual identification. Our objective is to find a regionally specific training dataset and develop a more effective strategy for identifying species in the camera trap images.
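One simple way a regionally specific species list can sharpen an off-the-shelf classifier is to discard predictions for species that do not occur in the region and renormalize the rest. This is a minimal sketch of that idea, not the project's chosen strategy; the function name and example species are hypothetical.

```python
def restrict_to_regional(scores, regional_species):
    """Restrict classifier outputs to a regional species list.

    `scores` maps species name -> classifier confidence (assumed to sum
    to roughly 1). `regional_species` is the set of species known to
    occur in the study area. Scores for out-of-region species are
    dropped and the remainder renormalized to sum to 1.
    """
    kept = {sp: s for sp, s in scores.items() if sp in regional_species}
    total = sum(kept.values())
    if total == 0:
        return {}  # no regional species scored; flag image for manual review
    return {sp: s / total for sp, s in kept.items()}
```

For example, a prediction split across plausible and implausible species gets concentrated onto the regionally possible ones, which can recover accuracy lost to a globally trained model.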

To learn more, check out: https://humanaugmentedanalyticsgroup.miraheze.org/wiki/Stone_Mountain_Species_Detection

Lizard Movement

The Lizard Locomotion Analysis project aims to develop a data processing pipeline that automatically extracts biomechanical and performance data from footage of lizards walking and running. Precise poses of the running lizards can be captured using the DeepLabCut library, which uses a machine learning framework to predict the joint and limb positions of animals from a small amount of annotated training footage. With this pipeline, performance data such as the lizard’s speed, stride length, stride angle, and spinal undulation can be extracted easily from field or lab footage. Once this data is obtained, it can be used to answer research questions about how the lizards’ movement patterns affect the survival of the species; in other words, conclusions can be drawn about the relationship between these traits and the fitness of the lizard. By using specific biomechanical traits, as opposed to traditional center-of-mass tracking, more meaningful relationships can be found. This work will not only advance the understanding of lizard movement patterns but also contribute to the growing field of automated behavioral analysis in biological research.
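DeepLabCut ultimately yields per-frame (x, y) coordinates for each tracked keypoint, and metrics like speed and stride length fall out of those trajectories. The sketch below shows the arithmetic under simplifying assumptions (a single camera, a known pixels-per-meter calibration, and footfall frames already detected); the function names and calibration parameter are hypothetical, not part of the project's pipeline.

```python
import math

def speed_per_frame(hip_xy, fps, px_per_m):
    """Instantaneous speed (m/s) from per-frame hip keypoint positions.

    `hip_xy` is a list of (x, y) pixel coordinates, one per frame.
    `px_per_m` converts pixel distances to meters (camera calibration).
    """
    speeds = []
    for (x0, y0), (x1, y1) in zip(hip_xy, hip_xy[1:]):
        dist_m = math.hypot(x1 - x0, y1 - y0) / px_per_m
        speeds.append(dist_m * fps)  # distance per frame * frames per second
    return speeds

def stride_lengths(foot_xy, footfall_frames, px_per_m):
    """Stride lengths (m) between successive footfalls of the same foot.

    `footfall_frames` lists the frame indices where the foot touches down,
    assumed to be detected upstream (e.g. from foot-speed minima).
    """
    lengths = []
    for f0, f1 in zip(footfall_frames, footfall_frames[1:]):
        (x0, y0), (x1, y1) = foot_xy[f0], foot_xy[f1]
        lengths.append(math.hypot(x1 - x0, y1 - y0) / px_per_m)
    return lengths
```

Stride angle and spinal undulation would follow the same pattern, computed from angles between limb or spine keypoints rather than distances.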

To learn more, check out: https://humanaugmentedanalyticsgroup.miraheze.org/wiki/Lizard_Movement

Spatial Cameratrap

The HAAG Spatial Cameratrap project will assess how spatial scaling and averaging affect the apparent composition of species communities.

The project’s goals are to:

  • Download and clean Snapshot USA data.
  • Compile the species that comprise individual “communities” by compiling single-site data in 1-year and multi-year intervals.
  • Compile the species that comprise individual “communities” by spatially clustering cameras within 50 km of one another, across the full time span of the dataset.
  • Compare the compositions of these communities.
  • As a stretch goal, compare communities compiled from camera trap data to communities compiled using IUCN range data.
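The spatial-clustering and comparison steps above can be sketched with great-circle distances and a set-overlap measure. This is a simplified illustration, not the project's analysis code: the greedy single-pass clustering is a placeholder for a proper method (e.g. DBSCAN), and the camera IDs and species names in the usage are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cluster_cameras(cameras, radius_km=50.0):
    """Greedily group cameras that lie within `radius_km` of a cluster member.

    `cameras` maps camera id -> (lat, lon). Returns a list of id sets.
    A single-pass sketch only; it can miss merges that a true
    single-linkage or DBSCAN clustering would make.
    """
    clusters = []
    for cam, (lat, lon) in cameras.items():
        placed = False
        for cl in clusters:
            if any(haversine_km(lat, lon, *cameras[other]) <= radius_km for other in cl):
                cl.add(cam)
                placed = True
                break
        if not placed:
            clusters.append({cam})
    return clusters

def jaccard(a, b):
    """Compositional similarity between two community species sets."""
    return len(a & b) / len(a | b) if a | b else 1.0
```

Pooling the species detected by each cluster's cameras gives one "community" per cluster, and pairwise Jaccard scores (or a richer dissimilarity index) support the composition comparisons the goals describe.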

To learn more, check out: https://humanaugmentedanalyticsgroup.miraheze.org/wiki/Project-haag-spatial-cameratrap

Past Seminars

6/23/2025 Meeting Summary

Summary of Dr. Steve Mussmann’s talk:

  • What is Slurm?
    • Stands for Simple Linux Utility for Resource Management.
    • Used by over half of the world’s top 500 supercomputers to handle resource management and job scheduling.
  • Basic Cluster Structure:
    • Login nodes: Where you SSH in, browse files (cd, ls), write/edit scripts — but don’t run heavy jobs here or you’ll slow everyone down.
    • Job nodes: Where actual compute jobs run. To use them, you submit jobs through Slurm.
  • How to run jobs:
    • Interactive jobs: Good for debugging or using tools like VS Code, Jupyter — lets you iterate and see errors. You request resources with salloc or similar. You might wait in a queue (especially for GPUs).
    • Batch jobs: When your code is ready to run on large datasets — you use sbatch to submit a script that runs in the background. Can set up email notifications for start/finish.
  • Tips:
    • Popular GPUs like V100s and A100s may have long queues; lesser-used GPUs (like L40 or RTX) often have no wait.
  • Monitoring:
    • Use other Slurm commands to check queues, running jobs, storage quota, etc.
  • Virtual Environments:
    • For Python, built-in venv is recommended — simple and reliable vs Conda (which can cause dependency issues).
    • Downside: each venv is isolated and can get large — store them where storage is cheap (like scratch directories).
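Pulling the batch-job and virtual-environment notes together, a job script along these lines could be submitted with `sbatch`. This is a generic sketch, not a script from the talk: the resource requests, email address, `$SCRATCH` path, and `train_detector.py` are all placeholders to adapt to your cluster and project.

```shell
#!/bin/bash
#SBATCH --job-name=warbler-train        # name shown in the queue
#SBATCH --gres=gpu:1                    # request one GPU (lesser-used models queue faster)
#SBATCH --time=04:00:00                 # wall-clock limit
#SBATCH --mem=16G                       # memory request
#SBATCH --mail-type=BEGIN,END,FAIL      # email notifications for start/finish
#SBATCH --mail-user=you@example.edu     # placeholder address

# Activate a venv kept on cheap scratch storage, per the tip above.
source "$SCRATCH/envs/detect-venv/bin/activate"

# Placeholder workload: your actual training/analysis script.
python train_detector.py
```

Submit with `sbatch train.sbatch` and check its place in the queue with `squeue -u $USER`, matching the monitoring commands mentioned above.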

Resources

Zotero Link

FAQ
