Abstract: Mobile robots are widely used and crucial in diverse fields due to their ability to perform tasks autonomously. They enhance efficiency and safety, and enable novel applications such as precision agriculture, environmental monitoring, disaster management, and inspection. Perception plays a vital role in their autonomous behavior: it refers to a robot's ability to gather, process, and interpret environmental data, enabling it to understand and interact with its surroundings. It facilitates navigation, object identification, and real-time reactions. By integrating perception, robots achieve onboard autonomy, operating without constant human intervention even in remote or hazardous areas, which enhances adaptability and scalability. This thesis explores the challenge of developing autonomous systems for smaller robots used in precise tasks such as confined-space inspection and robotic pollination. These robots face limitations in real-time perception due to computing, power, and sensing constraints. To address this, we draw inspiration from small organisms such as insects and hummingbirds, known for their sophisticated perception, navigation, and survival abilities despite their minimalistic sensory and neural systems. This research aims to provide insights into designing compact, efficient, and minimal perception systems for tiny autonomous robots. Embracing this minimalism is paramount to unlocking the full potential of tiny robots and enhancing their perception systems: by streamlining and simplifying their design and functionality, these compact robots can maximize efficiency and overcome the limitations imposed by size constraints. In this work, I propose a Minimal Perception framework that enables onboard autonomy in resource-constrained robots at scales (as small as a credit card) that were not possible before. Minimal perception refers to a simplified, efficient, and selective approach, from both hardware and software perspectives, to gathering and processing sensory information. Adopting a task-centric perspective allows for further refinement of the minimalist perception framework for tiny robots. For instance, jumping spiders, measuring just half an inch in length, demonstrate minimal perception through sparse vision facilitated by multiple eyes, enabling them to efficiently perceive their surroundings and capture prey with remarkable agility. The contributions of this work can be summarized as follows:
About Me
I am a third-year Ph.D. student and a Dean's Fellow in the Perception & Robotics Group (PRG) at the University of Maryland, College Park (UMD), advised by Prof. Yiannis Aloimonos and Dr. Cornelia Fermuller. PRG is associated with the University of Maryland Institute for Advanced Computer Studies (UMIACS) and the Autonomy, Robotics and Cognition Lab (ARC).
Interests: Active perception and deep learning applications to improve multi-robot interaction and navigation in aerial robots.
Prior to pursuing my Ph.D., I completed my Master's in Robotics at UMD, where I worked on the active behavior of aerial robots and published GapFlyt, in which we used the motion cues of the aerial agent to detect a gap of unknown shape using only a monocular camera. Apart from research, I love to teach! Along with Nitin J. Sanket, I designed and taught an open-source graduate course, ENAE788M (Hands-On Autonomous Aerial Robotics), at UMD in Fall 2019. In my spare time, I love to capture nature on my camera, especially landscape and wildlife photographs, and to watch and play competitive video games such as Counter-Strike and Dota 2.
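To give a flavor of the motion-cue idea behind GapFlyt, below is a minimal, illustrative Python/OpenCV sketch (not the actual GapFlyt pipeline). It assumes a translating monocular camera and separates a low-parallax region (the farther background visible through a gap) from the nearer foreground by thresholding dense optical-flow magnitude; the function and variable names here are hypothetical.

```python
# Illustrative sketch only -- NOT the GapFlyt pipeline.
# Assumes two consecutive grayscale frames from a translating monocular camera.
import cv2
import numpy as np

def detect_gap_proxy(prev_gray, curr_gray):
    """Return a binary mask of a candidate gap region.

    Simplified idea: while the camera translates, pixels seen through a gap
    belong to the farther background and therefore move less than the nearer
    foreground, so thresholding the optical-flow magnitude separates the two.
    """
    # Dense optical flow between consecutive frames (Farneback).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag = np.linalg.norm(flow, axis=2)

    # Otsu threshold on flow magnitude: low-parallax pixels ~ far background (gap).
    mag_u8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(mag_u8, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Keep the largest connected low-flow blob as the gap hypothesis.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return None
    best = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == best).astype(np.uint8) * 255
```

The real pipeline is more involved, but the sketch captures the core intuition: a monocular camera plus self-motion is enough to expose depth discontinuities without any explicit 3D reconstruction.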
Teaching
ENAE788M (Hands-On Autonomous Aerial Robotics) is an advanced graduate course that exposes students to the mathematical foundations of computer vision, planning, and control for aerial robots. The course was designed and taught by me and Nitin J. Sanket, and it is structured to balance theory with hands-on application on hardware.
The entire course is open-source! The links to video lectures and projects are given below:
CMSC 733 is an advanced graduate course on classical and deep learning approaches for geometric computer vision and computational photography, covering image formation, visual features, image segmentation, recognition, motion estimation, and 3D point clouds. We redesigned this course to showcase how classical 3D geometry problems can be modeled using deep learning!
The entire course is open-source! The links to projects and student outputs are given below:
CMSC 426 is an introductory course on computer vision and computational photography that explores image formation, image features, image segmentation, image stitching, image recognition, motion estimation, and visual SLAM.
The entire course is open-source! The links to projects and student outputs are given below:
Research
Nitin J. Sanket, Chahat Deep Singh, Chethan M. Parameshwara, Cornelia Fermuller, Guido C.H.E. de Croon, Yiannis Aloimonos, Robotics: Science and Systems (RSS), 2021.
Nitin J. Sanket, Chahat Deep Singh, Varun Asthana, Cornelia Fermuller, Yiannis Aloimonos, IEEE International Conference on Robotics and Automation (ICRA), 2021.
Nitin J. Sanket, Chahat Deep Singh, Cornelia Fermuller, Yiannis Aloimonos, IEEE Transactions on Robotics (Under Review), 2020.
Chethan M. Parameshwara, Nitin J. Sanket, Chahat Deep Singh, Cornelia Fermuller, Yiannis Aloimonos, IEEE International Conference on Robotics and Automation (ICRA), 2021.
Nitin J. Sanket*, Chethan M. Parameshwara*, Chahat Deep Singh, Cornelia Fermuller, Davide Scaramuzza, Yiannis Aloimonos, IEEE International Conference on Robotics and Automation (ICRA), Paris, 2020.
* Equal Contribution
Chahat Deep Singh*, Nitin J. Sanket*, Kanishka Ganguly, Cornelia Fermuller, Yiannis Aloimonos, IEEE Robotics and Automation Letters, 2018.
* Equal Contribution
Video Tutorials
Services
Contact Me