Rajmeet Singh completed his Ph.D. in Robotics at Thapar Institute of Engineering and Technology, Patiala, India, in 2019. He currently serves as a Postdoctoral Researcher at the Centre for Autonomous Robotic Systems (KUCARS) at Khalifa University, UAE. Previously, from 2022 to 2023, he held a Postdoctoral Researcher position at the University of Windsor, ON, Canada. Before his postdoctoral appointments, beginning in 2010, Dr. Singh worked as an Assistant Professor in the Mechanical Engineering Department at Thapar Institute of Engineering and Technology, Patiala, and at BBSBEC College, India. Dr. Singh has authored three book chapters and over 30 articles. His research interests include mobile robotics, visual servoing, reinforcement learning, visual odometry, software-hardware integration, and controls. In recognition of his academic excellence, he was awarded the University Gold Medal in 2005 for his B.Tech. studies.
A Deep Learning-Based Approach to Strawberry Grasping using a Telescopic-Link Differential Drive Mobile Robot in ROS-Gazebo for Greenhouse Digital Twin Environments
This research addresses the labor demands and economic challenges of strawberry greenhouse farms by exploring a deep learning approach. To achieve this, a telescopic-link differential drive mobile robot named MARTA (Mobile Autonomous Robot with Telescopic Arm) is implemented within a simulated greenhouse environment. The paper provides a detailed explanation of the 3D modeling and simulation of MARTA using the Gazebo simulator and the Robot Operating System (ROS), which allows offline programming and performance analysis of the system. MARTA, designed as a two-wheel differential drive steering platform, is accurately modeled and simulated, including its physical components, sensors, and control system, using a Unified Robot Description Format (URDF) file in Gazebo. Integrating ROS with Gazebo enables diverse robotic software and tools to be used and implemented on the simulated robot.
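As a rough illustration of how a URDF-described robot enters a running Gazebo world under ROS, the sketch below calls the standard gazebo_ros spawn service. The model name, file path, and initial pose are illustrative assumptions, not the paper's actual configuration.

```python
#!/usr/bin/env python
# Minimal sketch: spawning a URDF model into a running Gazebo world via ROS 1.
# Assumes gazebo_ros is running; "marta" and the URDF path are placeholders.
import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose, Point, Quaternion

rospy.init_node("spawn_marta")
rospy.wait_for_service("/gazebo/spawn_urdf_model")
spawn = rospy.ServiceProxy("/gazebo/spawn_urdf_model", SpawnModel)

# Read the robot description generated from the URDF/Xacro file.
with open("marta.urdf") as f:  # hypothetical path
    urdf_xml = f.read()

# Place the robot at the greenhouse row origin (pose values are illustrative).
initial_pose = Pose(position=Point(0.0, 0.0, 0.1),
                    orientation=Quaternion(0, 0, 0, 1))
resp = spawn(model_name="marta",
             model_xml=urdf_xml,
             robot_namespace="/marta",
             initial_pose=initial_pose,
             reference_frame="world")
rospy.loginfo("Spawn success: %s", resp.success)
```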
For strawberry maturity detection and classification, an advanced deep learning algorithm, YOLOv9-GELAN, is employed with instance segmentation. The proposed detection model is compared with existing state-of-the-art algorithms. To train the deep learning model, a dataset of both simulated and real strawberry images is created. The trained model is then deployed within the ROS-Gazebo environment to detect and classify strawberries for harvesting and grasping.
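The snippet below sketches what inference with such an instance-segmentation detector can look like, using the Ultralytics Python API as a convenient stand-in for the YOLOv9-GELAN pipeline; the weights file and class names are assumed for illustration.

```python
# Minimal sketch of strawberry maturity inference with an instance-
# segmentation model. Uses the Ultralytics API as a stand-in; the weights
# file "strawberry-seg.pt" and class names are illustrative assumptions.
from ultralytics import YOLO

model = YOLO("strawberry-seg.pt")  # hypothetical fine-tuned weights
results = model("greenhouse_frame.jpg", conf=0.5)

for r in results:
    for box, cls in zip(r.boxes.xyxy, r.boxes.cls):
        label = r.names[int(cls)]   # e.g. "ripe" / "unripe" (assumed classes)
        print(label, box.tolist())  # bounding box in pixel coordinates
    if r.masks is not None:
        print("instance masks:", r.masks.data.shape)  # per-instance masks
```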
This approach offers several benefits, including the ability to develop, test, and validate MARTA and its associated software before implementing them on the real system. ROS-MoveIt is used in this research to implement visual servoing, which generates the motion trajectory for the robot to approach the identified strawberry. The technique combines ROS and MoveIt for efficient and precise control of the robot's movements; by integrating visual feedback into the servoing process, the robot can accurately reach the detected strawberry. The work also provides valuable theoretical and practical insights for specialists in robotic systems, particularly in the modeling and simulation of ground mobile robots and greenhouse environments using Gazebo and ROS.
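For a sense of how a detected strawberry position can be handed to MoveIt, the following sketch plans a single motion to a pose target through MoveIt's Python interface. The planning group name and target pose are assumptions; a full visual-servoing loop would update the target from camera feedback rather than plan once.

```python
# Minimal sketch: commanding the arm toward a detected strawberry with
# MoveIt's Python interface. Group name and pose values are illustrative.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("approach_strawberry")
arm = moveit_commander.MoveGroupCommander("telescopic_arm")  # assumed group name

target = PoseStamped()
target.header.frame_id = "base_link"
target.pose.position.x = 0.45   # strawberry position from the detector (example)
target.pose.position.y = 0.10
target.pose.position.z = 0.30
target.pose.orientation.w = 1.0

arm.set_pose_target(target)
success = arm.go(wait=True)     # plan and execute in one call
arm.stop()
arm.clear_pose_targets()
rospy.loginfo("Reached target: %s", success)
```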
Robust Pollination for Tomato farming using Deep Learning and Visual Servoing
In this work, a novel approach for tomato pollination that uses visual servo control is proposed. The objective is to meet the growing demand for automated robotic pollinators as bee populations decline. The proposed method leverages deep learning to estimate the orientations and depth of detected flowers, incorporating CAD-based synthetic images to ensure dataset diversity. Using a 3D camera, the system accurately estimates flower depth information for visual servoing. The robustness of the approach is validated through laboratory experiments with a 3D-printed tomato flower plant. The results demonstrate a high detection rate, with a mean average precision (mAP) of 91.2%, and an average depth error of only 1.1 cm when localizing the pollination target. This research presents a promising solution for tomato pollination, showcasing the effectiveness of vision-guided servo control and its potential to address the challenges posed by diminishing bee populations in greenhouses.
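As a minimal sketch of the depth-estimation step, the snippet below back-projects a detected flower's pixel coordinates and depth reading into a 3D point through a pinhole camera model. The intrinsic parameters are illustrative, not those of the camera used in the experiments.

```python
# Minimal sketch: turning a detected flower's pixel coordinates plus its
# depth reading into a 3D target for visual servoing (pinhole model).
import numpy as np

fx, fy = 615.0, 615.0   # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0   # principal point (assumed)

def deproject(u, v, depth_m):
    """Back-project pixel (u, v) with depth in meters through a pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: flower center detected at pixel (402, 215) with 0.48 m depth.
target_xyz = deproject(402, 215, 0.48)
print("flower position in camera frame [m]:", target_xyz)
```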
Deep Learning Approach for Detecting Tomato Flowers and Buds in Greenhouses on 3P2R Gantry Robot
In recent years, significant advancements have been made in smart greenhouses, particularly in applying computer vision and robotics to flower pollination. Robotic pollination offers several benefits, including reduced labor requirements and the preservation of costly pollen through artificial tomato pollination. However, previous studies have focused primarily on labeling and detecting tomato flowers alone. The objective of this study was therefore to develop a comprehensive methodology for simultaneously labeling, training on, and detecting tomato flowers and buds, tailored for robotic pollination. Transfer learning was applied using two well-known models, YOLOv5 and the recently introduced YOLOv8. Both models were evaluated on the same image dataset and compared by their Average Precision (AP) scores. YOLOv8 achieved a higher mean AP (mAP) of 92.6% in tomato flower and bud detection, outperforming YOLOv5 at 91.2%, and demonstrated an inference speed of 0.7 ms for 1920 × 1080 pixel images resized to 640 × 640 pixels during detection. The image dataset was acquired during both morning and evening periods to minimize the impact of lighting conditions on the detection model. These findings highlight the potential of YOLOv8 for real-time detection of tomato flowers and buds, enabling further estimation of flower blooming peaks and facilitating robotic pollination.
In the context of robotic pollination, the study also deploys the proposed detection model on the 3P2R gantry robot, introducing a kinematic model and a modified circuit for the robot. Position-based visual servoing is employed to approach the detected flower during the pollination process, and the approach is validated in both unclustered and clustered plant environments in a laboratory setting. Additionally, this study provides valuable theoretical and practical insights for specialists in greenhouse systems, particularly in designing flower detection algorithms using computer vision and deploying them on robotic systems used in greenhouses.
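A minimal sketch of the transfer-learning step described above, using the Ultralytics API: the dataset configuration file and hyperparameters are illustrative assumptions, not the study's exact settings.

```python
# Minimal sketch: fine-tuning a COCO-pretrained YOLOv8 model on a tomato
# flower/bud dataset. Dataset YAML and hyperparameters are illustrative.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # start from pretrained weights
model.train(
    data="tomato_flowers.yaml",  # hypothetical dataset config (flower, bud classes)
    epochs=100,
    imgsz=640,                   # images resized to 640x640, as in the study
    batch=16,
)
metrics = model.val()            # reports mAP on the validation split
print(metrics.box.map)           # mean AP across IoU thresholds
```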
10 years' experience in teaching (Robotics, Mechatronics, Controls)
3 years' industry experience (Assistant Manager, Production)
3 years' research experience (former Postdoctoral Researcher at the University of Windsor)