Agriculture

Applying a Soft Touch to Agricultural Harvesting

“Our research in automated harvesting can be applied to other soft fruits such as raspberries, loganberries, and grapes, that require intensive labor to harvest manually. Automating the harvesting process will allow for a drastic increase in productivity and decrease in human labor and time spent harvesting.”

Alex Qiu
PhD Student, Mechanical Engineering
A robotic gripper utilizes three rubber-based fingers for automated harvesting of blackberries and other fragile fruits.

Research

New research outlines the design, fabrication, and testing of a soft robotic gripper for automated harvesting of blackberries and other fragile fruits. The gripper uses three rubber-based fingers that are uniformly actuated by fishing line, much as human tendons actuate fingers. It also includes a ripeness sensor that uses the reflectance of near-infrared light to determine ripeness level, as well as an endoscopic camera that detects blackberries and provides image feedback for robot-arm manipulation. The gripper was used to harvest 139 berries with manual positioning in two separate field tests, and the retention force – the amount of force required to detach a berry from its stem – and the average reflectance value were determined for both ripe and unripe blackberries.
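
As a rough sketch of how such a threshold-based ripeness check could work in code (the threshold value and the direction of the reflectance difference below are illustrative assumptions, not the team's published calibration):

```python
# Hypothetical sketch of a threshold-based ripeness check using
# near-infrared (NIR) reflectance. The threshold is an assumed
# illustrative value, not the team's published calibration.

RIPE_NIR_THRESHOLD = 0.25  # assumption: ripe (black) berries reflect less NIR


def is_ripe(nir_reflectance: float) -> bool:
    """Classify a berry as ripe when its normalized NIR reflectance
    falls below the threshold (assuming darker, riper berries
    reflect less near-infrared light than red, unripe ones)."""
    return nir_reflectance < RIPE_NIR_THRESHOLD


for reading in (0.12, 0.31):  # example normalized sensor readings
    print(reading, "->", "ripe" if is_ripe(reading) else "unripe")
```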

The soft robotic gripper was integrated onto a rigid robot arm and successfully harvested fifteen artificial blackberries in a lab setting using visual servoing, a technique that allows the robot arm to iteratively approach the target using image feedback. 
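
A minimal version of such an image-based servoing loop might look like the following sketch, where the berry detector and arm interface are hypothetical placeholders and the gain and tolerance are assumed tuning values:

```python
import numpy as np

# Minimal image-based visual-servoing sketch (illustrative only).
# The detect_berry_px and move_gripper callables stand in for the
# endoscopic-camera detector and the robot-arm interface.

GAIN = 0.002                         # assumed gain (meters per pixel of error)
TOLERANCE_PX = 5                     # stop when centered within 5 pixels
IMAGE_CENTER = np.array([320, 240])  # assumed 640x480 image


def servo_to_berry(detect_berry_px, move_gripper, max_iters=100):
    """Iteratively drive the gripper toward the detected berry by
    shrinking the pixel error between the berry and the image center."""
    for _ in range(max_iters):
        error = detect_berry_px() - IMAGE_CENTER  # pixel-space error
        if np.linalg.norm(error) < TOLERANCE_PX:
            return True                           # target centered: grasp
        move_gripper(GAIN * error)                # small proportional step
    return False
```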

Why It Matters

  • In the blackberry industry, about 40-50% of total labor hours are spent maintaining and harvesting the crop, and harvesting alone accounts for roughly 56% of the total cost of bringing blackberries to market.
  • Automating the harvesting process will allow for a drastic increase in productivity and a decrease in the human labor and time spent harvesting.
  • Very few studies have been conducted on automated harvesting of soft fruits such as blackberries. This research serves as a proof of concept and a spearhead for further technological development in the area.
  • The research can be applied to other soft fruits that require intensive manual labor to harvest, such as raspberries, loganberries, and grapes.
Multi-Robot Systems

Together We Swarm, In Research & Robots

“By collaborating with roboticists, we were able to ‘close the gap’ between single robot design and swarm control. All the different elements were there. We just made the connection.”

Azadeh Ansari
Asst. Professor, Electrical and Computer Engineering

Research

Systematically designing local interaction rules to achieve collective behaviors in robot swarms is a challenging endeavor, especially for microrobots, whose size restrictions imply severe sensing, communication, and computation limitations. New research demonstrates a systematic approach to controlling the behaviors of microrobots by leveraging the physical interactions in a swarm of 300 3-mm vibration-driven “micro bristle robots” that were designed and fabricated at Georgia Tech. The team’s investigations reveal how physics-driven interaction mechanisms can be exploited to achieve desired behaviors in minimally equipped robot swarms and highlight the specific ways in which hardware and software developments aid in the achievement of collision-induced aggregations.
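
The feedback behind this kind of collision-induced aggregation can be illustrated with a toy simulation in which each agent slows down as its neighborhood becomes crowded, so dense clusters trap more agents and grow. All parameter values here are illustrative assumptions, not those of the micro bristle robot testbed:

```python
import numpy as np

# Toy model of motility-induced phase separation (MIPS): agents slow
# down in crowded regions, so dense clusters capture more agents and
# grow. All parameters are illustrative, not the testbed's values.

rng = np.random.default_rng(0)
N, L, R = 300, 50.0, 1.5        # agents, arena size, crowding radius
V0, DT, STEPS = 0.5, 1.0, 500   # free speed, time step, iterations

pos = rng.uniform(0, L, size=(N, 2))
heading = rng.uniform(0, 2 * np.pi, size=N)

for _ in range(STEPS):
    # Count neighbors within radius R (local density), with
    # periodic boundary conditions on the square arena.
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= L * np.round(diff / L)
    dist = np.linalg.norm(diff, axis=-1)
    neighbors = (dist < R).sum(axis=1) - 1   # exclude self
    # Speed decreases with crowding -- the MIPS feedback mechanism.
    speed = V0 / (1.0 + neighbors)
    heading += 0.2 * rng.normal(size=N)      # rotational noise
    step = np.stack([np.cos(heading), np.sin(heading)], axis=1)
    pos = (pos + (speed * DT)[:, None] * step) % L

print("largest local crowd:", neighbors.max() + 1)
```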

Why It Matters

  • Microrobots have great potential in healthcare and drug delivery; however, these applications are impeded by inaccurate control of microrobots, especially in swarms.
  • The newly published work is the first demonstration of motility-induced phase separation (MIPS) behaviors on a swarm robotic platform.
  • This research promises to help overcome current constraints on the deployment of microrobots, such as limited motility and severely restricted sensing, communication, and computation.
PhD student Xxxxxxxx Xxxxxx and Azadeh Ansari

Home Bots

Giving Robots a Way to Connect by Seeing and Sensing

“Future robots that operate in homes and around people will need to connect via touch with the external world, requiring expensive sensors. Our method dramatically lowers the cost of these touch sensors, making robots more accessible, capable, and safe.”

Jeremy Collins
PhD Student, Mechanical Engineering
Researchers used machine learning to estimate the force that robots need for gripping in a variety of household tasks.

Research

Force and torque sensors are commonly used today to give robots a sense of touch. They are often placed at the “wrist” of a robot arm and allow a robot to precisely sense how much force its gripper applies to the world. However, these sensors are expensive, complex, and fragile.

To address these problems, the researchers present Visual Force/Torque Sensing, a method for estimating force and torque without a dedicated sensor. A camera mounted on the robot is focused on the gripper, and a machine learning algorithm is trained to observe small deflections in the gripper and estimate the forces and torques that caused them. While this method is less accurate than a purpose-built force/torque sensor, it is 100x cheaper and can be used to build touch-sensitive robots that accomplish real-world tasks.
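
In spirit, the learning problem is a straightforward image-to-wrench regression. The following sketch shows one plausible setup in PyTorch; the architecture and sizes are assumptions for illustration, not the authors' published model:

```python
import torch
import torch.nn as nn

# Illustrative sketch of the idea behind Visual Force/Torque Sensing:
# regress a 6-D wrench (Fx, Fy, Fz, Tx, Ty, Tz) from an image of the
# gripper. Architecture and sizes are assumptions, not the authors'
# published model.


class WrenchRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 6)  # 3 forces + 3 torques

    def forward(self, img):
        # img: (batch, 3, H, W) RGB view of the deflecting gripper
        return self.head(self.features(img).flatten(1))


model = WrenchRegressor()
loss_fn = nn.MSELoss()  # train against labels from a real F/T sensor

# One hypothetical training step on a dummy batch:
imgs = torch.randn(8, 3, 224, 224)
wrench_labels = torch.randn(8, 6)
loss = loss_fn(model(imgs), wrench_labels)
loss.backward()
```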

Why It Matters

  • Currently, robots are programmed to avoid obstacles, and touching the environment or a human is often viewed as a failure.
  • Future robots that operate in homes and around people will need to make contact with the world to manipulate objects and collaborate with humans.
  • Currently available touch sensors are expensive and impractical to mount on home robots.
  • This work proposes a method to replace these expensive force sensors with a simple camera, allowing robots to be touch-sensitive around humans.
  • The new method dramatically lowers the cost of touch sensing, making robots more accessible, capable, and safe.
  • The method is useful for potential assistive applications in healthcare, such as making a bed and cleaning surfaces.
Cross-Applications

Robot Turns from Graffiti to Growing Plants in Hydroponics Effort

“This project highlights how Georgia Tech’s commitment to interdisciplinary research has led to unexpected applications of seemingly unrelated technologies, and serves as a testament to the value of exploring diverse fields of study and collaboration in order to develop innovative solutions to real-world problems.”

Gerry Chen
PhD Student, Interactive Computing

Research

Researchers have applied technology they developed for robotic art research to a hydroponics robot. The team originally developed a cable-based robot designed to spray paint graffiti on large buildings. Because the cable robot technology scales well to large sizes, the same robot became an ideal fit for many agricultural applications.

Through a collaboration with the N.E.W. Center for Agriculture Technology at Georgia Tech, a plant phenotyping robot was built, tested, and deployed in a hydroponic pilot farm on campus. The robot takes around 75 photos of each plant every day as the plant grows, then uses computer vision to construct 3D models of the plants, which can be used to non-destructively estimate properties such as crop biomass and photosynthetically active area for tracking, modeling, and predicting plant growth.
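
As an illustration of how such a 3D model can yield non-destructive measurements, the sketch below computes simple geometric growth proxies from a reconstructed point cloud. Using convex-hull volume and surface area as proxies is an assumption for illustration, not necessarily the project's actual pipeline:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Illustrative sketch: derive non-destructive growth proxies from a
# plant point cloud produced by multi-view 3D reconstruction. The
# specific proxies are assumptions, not the project's pipeline.


def growth_proxies(points: np.ndarray) -> dict:
    """points: (N, 3) array of 3D points from multi-view reconstruction."""
    hull = ConvexHull(points)
    return {
        "hull_volume": hull.volume,        # coarse proxy for crop biomass
        "hull_surface_area": hull.area,    # coarse proxy for active area
        "height": np.ptp(points[:, 2]),    # canopy extent along z
    }


# Example on a random dummy "plant" cloud:
cloud = np.random.default_rng(1).normal(size=(5000, 3))
print(growth_proxies(cloud))
```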

Why It Matters

  • Efficiency of resources in agriculture – using less water and fertilizer to produce more food – is becoming increasingly important.
  • Advancements in agricultural efficiency are driven by a better understanding of plant growth and substrate dynamics: growers need to understand how plants grow to know how much to feed them.
  • This work addresses the accuracy of plant growth models, which is currently limited by the difficulty of collecting data that is both large-scale and detailed; genetic and environmental variations make such data “noisy.”
  • By combining two existing robot architectures – cable robots and serial manipulators – large numbers of plants can be autonomously imaged at high quality from many angles.
  • By leveraging computer vision algorithms, plant images can be used to generate measurements accurate enough to support the development of more advanced plant growth models.
A robot takes around 75 photos of each plant every day as the plant grows, then uses computer vision to construct 3D models of the plants to track and predict growth.