Figure: (a) Two-part robotic flower. (b) Neck connective recording location. (c) Distinct types of responsive units. (d) Units identified from the multielectrode array, with distinct stimulus-response properties.
Using dense multielectrode arrays, we record from the neck connective of hawkmoths while stimuli are presented with a two-part robotic flower. The two-part roboflower has separate visual (façade) and mechanosensory (nectary) components. By playing back the sensory experiences of freely flying moths, as well as presenting structured single- and multi-sensory cues, we identify populations of multimodal descending neurons based on their responses to the different sensory stimuli.
In the initial set of recordings, we identified visual-only, mechanosensory-only, and visual-and-mechanosensory units. While some units have high firing rates, others fire sparsely in bursts, indicating that population-level analysis may be required for complete behavioral decoding. In addition, self-generated motion of the proboscis (as opposed to imposed movement of the proboscis) was found to affect descending mechanosensory information.

Improvements in signal-to-noise ratio now make it possible to record neuronal populations within individual experiments and to iterate experiments rapidly (roughly two weeks per cycle). The resulting increase in dataset size necessitates an automated spike-sorting method. Using SpikeInterface with MountainSort5, we have had initial success on recent recordings and are currently validating this sorting pipeline on our datasets. Results from this project show that mechanosensory and visual responses have specific frequency characteristics, and we are investigating whether such frequency dependence is present in our descending connective recordings. Additionally, the linearity of multimodal integration observed in free behavior is being analyzed in identified multimodal units.
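One way the linearity analysis above can be framed is to compare each unit's measured multimodal response against the linear prediction formed by summing its unimodal responses. The sketch below is purely illustrative: the function name, the toy firing rates, and the scalar linearity index are assumptions for exposition, not our actual analysis code or data.

```python
# Hypothetical sketch of a linear-summation test for a multimodal unit.
# All names and numbers here are illustrative, not from real recordings.

def linearity_index(visual, mech, multi):
    """Compare a measured multimodal response to the linear prediction
    (visual + mechanosensory), bin by bin.

    Each argument is a list of trial-averaged firing rates (Hz) per time
    bin. Returns (predicted, index): index near 1 suggests roughly linear
    integration, < 1 sublinear, > 1 supralinear.
    """
    predicted = [v + m for v, m in zip(visual, mech)]
    total_pred = sum(predicted)
    total_meas = sum(multi)
    index = total_meas / total_pred if total_pred > 0 else float("nan")
    return predicted, index

# Toy example for one unit:
visual = [2.0, 4.0, 6.0, 4.0]   # visual-only trials
mech   = [1.0, 3.0, 5.0, 3.0]   # mechanosensory-only trials
multi  = [3.0, 7.0, 11.0, 7.0]  # combined-cue trials

predicted, idx = linearity_index(visual, mech, multi)
print(predicted)  # linear prediction per bin: [3.0, 7.0, 11.0, 7.0]
print(idx)        # 1.0 for this toy data: perfectly linear
```

In practice such a comparison would be run per unit and per time bin on sorted spike trains, with appropriate trial-to-trial statistics rather than a single scalar.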