Brainhack ATL 2019 has 3 tracks: (1) Building Scalable Elastic Frameworks for Neuroimaging Pipelines, (2) Automated Quality Control Tool to Identify Poorly Preprocessed fMRI Scans, and (3) the Pitch-A-Project Track.
Participants will be able to choose their project track at the Brainhack ATL event itself. If you would like to pitch a project via the Pitch-A-Project Track, please submit a project abstract in the appropriate section below!
The winning teams of the 3 Project Tracks can choose between 2 prizes! These prizes include Titan RTX GPU cards sponsored by NVIDIA, or a 3D print of the winner’s brain sponsored by the Center for Advanced Brain Imaging!
Project Track: Pitch-A-Project
Have an idea other than the Project Tracks below for a neuroscience project? Fill out an Abstract Submission for the Pitch-A-Project track!
If you are interested in working on projects other than those listed below, you can join the Pitch-A-Project track! Pitch-A-Project Abstracts can be (1) other computational neuroscience projects that fit with Brainhack ATL’s overall goals, or (2) non-computational projects that contribute to the field of neuroimaging in some way.
Examples of Pitch-A-Project pitches: using fMRI images to create neuroscience art, designing an infographic to explain fMRI preprocessing, etc.
Projects can be pitched anytime via the Pitch-A-Project Abstract Submission form linked below. Submissions will be accepted up until Wednesday, November 13th at 5pm (the first day of Brainhack ATL).
Projects can be submitted to the form in two ways:
- Projects can be submitted by an already established group of people interested in working on the project (only one Pitch-A-Project Abstract Submission is required).
- Projects can also be submitted by an individual interested in starting the project, who will act as the Project Lead and recruit other Brainhack ATL attendees to join at the hackathon in November. Attendees can also be recruited to projects via the Brainhack ATL Slack workspace, which will be accessible to all accepted Brainhack ATL attendees.
Early submissions are welcome, especially for projects that need time to set up datasets and tools. Once accepted into Brainhack ATL, early submitters will also have the opportunity to present their project to the Brainhack ATL Slack community and recruit members for their ideas.
NOTE: Please apply to Brainhack ATL 2019 via the ‘Apply’ tab first, before you submit a Pitch-A-Project Abstract Submission.
Is this project track competitive?
- Yes! A prize will be given to the team with the Most Creative & Innovative Project and Solution!
Interested? Submit a Pitch-A-Project Submission here!
Project Track 1: Building Scalable Elastic Frameworks for Neuroimaging Pipelines
Can you engineer a fast and affordable MRI processing pipeline using cloud-based technology?
Abstract
Before brain images collected from an MRI scanner are ready for statistical analysis, they go through a processing pipeline that prepares them for use in further neuroscientific research. These pipelines consist of a set of well-defined, computationally intensive tasks, such as brain matter segmentation, spatial warping, voxel smoothing, and more. The resources these tasks demand place a burden on traditional, serial approaches, especially when applied to large datasets. To solve these problems, researchers are turning to innovative cloud-based solutions that process data quickly and at scale. Because they can process many subjects quickly and in parallel, elastic cloud solutions are especially attractive options for speeding up overall pipeline runtime and for quickly reprocessing large datasets when different settings are desired or an issue is identified. At the same time, the cost of using such resources must also be considered and should not become prohibitive as large datasets accumulate. With all of this in mind, your task is to build an MRI processing pipeline using Amazon Web Services that scales elastically with the available data and computational demand, while controlling the costs that such resources can accrue.
As a bonus, try to find metrics that flag, before analysis, subjects that need human attention and can potentially be fixed.
Given a list of pipeline steps provided by the organizers, the winning team is the one that implements the greatest number of processing steps completing in the shortest time on the same data at the lowest price. Winning strategies may include clever scaling across multiple nodes, or reserving larger nodes at spot prices and releasing them quickly (price is a factor in winning); GPUs are an option too, as long as they do not hurt the overall metric.
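The time/cost tradeoff behind the scoring metric can be sketched with a back-of-the-envelope model. All prices and per-subject runtimes below are made-up illustrative assumptions, not real AWS figures or organizer-provided numbers:

```python
# Back-of-the-envelope cost model. Prices and runtimes are
# illustrative assumptions only, not real AWS figures.

def pipeline_cost(n_subjects, minutes_per_subject, n_nodes, price_per_node_hour):
    """Estimate wall-clock hours and total cost of one per-subject
    pipeline step spread across n_nodes parallel workers."""
    subjects_per_node = -(-n_subjects // n_nodes)  # ceiling division
    wall_hours = subjects_per_node * minutes_per_subject / 60
    cost = wall_hours * n_nodes * price_per_node_hour
    return wall_hours, cost

# Compare one serial on-demand node vs. a wide fleet of cheap spot nodes.
serial_hours, serial_cost = pipeline_cost(100, 30, n_nodes=1, price_per_node_hour=0.40)
spot_hours, spot_cost = pipeline_cost(100, 30, n_nodes=25, price_per_node_hour=0.12)
print(f"serial: {serial_hours:.1f} h, ${serial_cost:.2f}")  # → serial: 50.0 h, $20.00
print(f"spot:   {spot_hours:.1f} h, ${spot_cost:.2f}")      # → spot:   2.0 h, $6.00
```

With cheap enough spot capacity, the wide fleet wins on both wall time and total cost, which is exactly the tradeoff the metric rewards; in practice, spot prices fluctuate and interrupted nodes must be retried, which a real pipeline has to handle.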
Maximum number of participants per team: 5
Is this project track competitive?
- Yes! There will be prizes given to the team with the best & most efficient solution to this project.
Relevant resources for this project track:
- FMRIPrep: a robust preprocessing pipeline for functional MRI
- SPM Documentation
- TReNDS Available Software
- AWS Lambda
- AWS Sagemaker
- AWS Fargate
Relevant papers summarizing the project field:
- Running Neuroimaging Applications on Amazon Web Services: How, When, and at What Cost?
- The three NITRC’s: software, data and cloud computing for brain science and cancer imaging research
- Harnessing cloud computing for high capacity analysis of neuroimaging data from NDAR
- Human neuroimaging as a “Big Data” science
- Container-based clinical solutions for portable and reproducible image analysis
Relevant skills to take part in the project:
- Required:
- Programming skills in Python and/or MATLAB
- Helpful, but not required:
- Programming in bash/shell scripting
- Familiarity with neuroimaging processing pipelines
- Familiarity with AWS technologies
Skills and competences you can learn during the project:
- Gaining a wealth of knowledge about the complexities of processing fMRI data, from basic alignment all the way to group ICA and FNC.
- Familiarity with the processing pipeline is helpful for anyone working with neuroimaging data. In practice, processing steps will affect analysis and clinical results, so understanding the steps involved has extremely important practical and theoretical value.
- The structure and the bottlenecks of the neuroimaging pipeline
- In a practical setting, it is useful to know what the stages of the pipeline are in detail, and where slowdowns can and will happen. This can help with project planning, understanding peculiarities in processing runtime and results, and other useful applications.
- How to efficiently use cloud resources for large-scale data processing.
- Neuroimaging data presents a scaling problem that requires interesting tools and solutions to processing at scale. The skills learned wielding these tools should be useful in developing large-scale solutions for many other applications in neuroimaging and data science.
Is there a plan for extending this work to a paper in case the results are promising?
Yes! A promising pipeline may be developed into a workflow put into use in real analysis settings at TReNDS. One or multiple journal or conference papers describing the methodology will also be possible, depending on the methods applied and their success.
Project Track 2: Automated Quality Control Tool to Identify Poorly Preprocessed MRI Scans
Can we build an automated quality control tool to identify poorly preprocessed MRI scans?
Abstract
There are multiple pipelines available to assess data quality prior to preprocessing imaging data. However, few methods exist outside of visual inspection to assess the success of preprocessing steps (e.g., slice-time correction, masking, normalization). In an era of increasingly large datasets (tens of thousands of subjects), visual inspection becomes impractical, increasing the risk of erroneous, poorly processed data finding its way into analysis pipelines. This project track is dedicated to developing a pipeline/approach for identifying errors in brain normalization without visual inspection, in order to streamline data correction and analysis. Success is evaluated by the accuracy and confidence with which the detection approach identifies poor normalization/registration in a novel dataset.
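One minimal starting point for the normalization check described above is to correlate each registered volume against its template and flag low-similarity outliers. This is an illustrative sketch, not a validated QC criterion; the threshold and the synthetic data are assumptions:

```python
# Illustrative sketch: flag poorly normalized volumes by their
# similarity to the registration template. The 0.5 threshold and
# synthetic data are arbitrary placeholders, not a validated rule.
import numpy as np

def template_correlation(volume, template):
    """Pearson correlation between a registered volume and a template."""
    v = volume.ravel().astype(float)
    t = template.ravel().astype(float)
    v -= v.mean()
    t -= t.mean()
    return float(v @ t / (np.linalg.norm(v) * np.linalg.norm(t)))

def flag_poor_normalization(volumes, template, threshold=0.5):
    """Return indices of volumes whose template correlation falls below threshold."""
    return [i for i, vol in enumerate(volumes)
            if template_correlation(vol, template) < threshold]

# Toy demonstration with synthetic 3D "volumes".
rng = np.random.default_rng(0)
template = rng.standard_normal((8, 8, 8))
good = template + 0.1 * rng.standard_normal((8, 8, 8))  # well registered
bad = rng.standard_normal((8, 8, 8))                    # misregistered
print(flag_poor_normalization([good, bad], template))   # → [1]
```

A competitive entry would go well beyond a single global correlation (e.g., regional similarity maps or a trained classifier), but even this simple score separates the toy "good" and "bad" volumes.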
Maximum number of participants per team: 5
Is this project track competitive?
- Yes! There will be prizes given to the team with the best & most efficient solution to this project.
Relevant resources for this project track:
Relevant papers summarizing the project field:
- MRIQC: predicting quality in manual MRI assessment protocols using no-reference image quality measures
- MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites
- Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration
- Image processing and quality control for the first 10,000 brain imaging datasets from UK Biobank
Relevant skills to take part in the project:
- Conversational English
- Some experience in machine learning (a basic understanding of feed-forward NNs, CNNs, and classifiers)
- Programming skills in MATLAB or Python, plus familiarity with scikit-learn, TensorFlow, PyTorch, or Keras
Skills and competences you can learn during the project:
- Learn about the standard and optional preprocessing steps in brain imaging
- Experience with basic and advanced machine learning classifiers
- Data augmentation skills (Generative Adversarial Networks, MixUp)
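As a flavor of the data augmentation techniques listed above, MixUp can be sketched in a few lines of NumPy: blend random pairs of training examples and their labels with a Beta-distributed coefficient. The batch shapes and `alpha` value here are illustrative placeholders:

```python
# Toy MixUp sketch: convex combinations of shuffled example/label pairs.
# Shapes and alpha are illustrative placeholders.
import numpy as np

def mixup(x, y, alpha=0.2, rng=None):
    """Return MixUp-augmented examples and (soft) labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))    # random partner for each example
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix

# Toy batch: 4 "images" with one-hot pass/fail QC labels.
x = np.arange(8.0).reshape(4, 2)
y = np.eye(2)[[0, 0, 1, 1]]
x_mix, y_mix = mixup(x, y, alpha=0.2, rng=np.random.default_rng(1))
```

The mixed labels are soft probabilities rather than hard classes, which is one reason MixUp can act as a regularizer for small QC training sets.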
Is there a plan for extending this work to a paper in case the results are promising?
Yes! A paper AND a toolbox/workflow.