My primary research directions encompass agent and multi-agent systems, reinforcement learning, robotics, machine learning and computer vision, and extended reality.


Agent and Multi-Agent Systems

Main description

description

description


Reinforcement Learning

Main description

description

description


Robotics

Our research spans collaborative UAV–UGV systems with dynamic aerial-to-ground SLAM mapping and real-time data exchange; vineyard robotics that pairs depth cameras with robotic arms for precise sample collection, dynamic leaf picking, and digital-twin simulation in Gazebo; Periplus, a low-cost autonomous boat solution with onboard computing and interactive tourist storytelling; ML/CV-powered robotic-arm simulations for vine leaf detection and garbage sorting with YOLO; and a personalized interactive robotic fitness instructor on the NAO robot that uses pose estimation for continuous, customized guidance.

Collaborative heterogeneous robotics: dynamic mapping from aerial drones transmitted to the UGV for SLAM-based navigation, with real-time data and knowledge exchange.
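The core of the aerial-to-ground hand-off is expressing map features observed in the drone's frame in the UGV's frame. A minimal sketch of that step, assuming a planar map and a known (hypothetical) relative pose between the two frames:

```python
import math

def aerial_to_ugv(points, yaw, tx, ty):
    """Transform 2D map points from the drone's map frame into the
    UGV's map frame via a rigid-body rotation (yaw) and translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Hypothetical example: the UGV frame is rotated 90 degrees relative to
# the drone's map and shifted 2 m along x.
obstacles = [(1.0, 0.0), (0.0, 1.0)]
print(aerial_to_ugv(obstacles, math.pi / 2, 2.0, 0.0))
```

In a full system the relative pose would come from a shared SLAM anchor or mutual localization rather than being fixed by hand.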

UGV-mounted robotic arm and gripper for vineyard sample collection, using a depth camera for precise distance estimation to enhance manipulation and grip control.
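Depth-based distance estimation for grip control can be sketched as taking a robust statistic over the depth pixels covering the target; the median tolerates the dropout (zero) readings typical of depth cameras. The function and the sample patch below are illustrative, not the deployed pipeline:

```python
def grip_distance_mm(depth_image, roi):
    """Estimate target distance as the median of valid (non-zero)
    depth readings inside a region of interest.

    depth_image: 2D list of depth values in millimetres (0 = no reading).
    roi: (row0, row1, col0, col1) bounds of the target region.
    """
    r0, r1, c0, c1 = roi
    samples = sorted(
        d for row in depth_image[r0:r1] for d in row[c0:c1] if d > 0
    )
    if not samples:
        return None  # no valid depth in the ROI
    mid = len(samples) // 2
    if len(samples) % 2:
        return samples[mid]
    return (samples[mid - 1] + samples[mid]) / 2

# A 3x3 depth patch with one dropout (0) reading:
patch = [[430, 0, 432],
         [428, 431, 433],
         [429, 430, 431]]
print(grip_distance_mm(patch, (0, 3, 0, 3)))  # → 430.5
```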

Dynamic leaf picking with a UGV-mounted robotic arm and gripper, leveraging depth camera feedback to adapt to moving foliage in real time.

Digital-twin robotics: a real-time custom Gazebo simulation of an arm-equipped UGV, with predictive commands streamed to the physical robot for vineyard operations.

Periplus: autonomous boat navigation with integrated storytelling. A low-cost solution built on a single-board computer and adaptable to virtually any boat, networked for remote monitoring and equipped with onboard tablets that deliver interactive storytelling to tourists.

Simulated robotic arm for vineyard leaf detection and collection, utilizing YOLO-based (ML/CV) object detection and localization to guide the manipulator.
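Turning a 2D detection into a manipulator target requires back-projecting the detected pixel into 3D camera coordinates. A minimal pinhole-model sketch, assuming a bounding box from the detector and hypothetical camera intrinsics (fx, fy, cx, cy):

```python
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading (metres) into
    3D camera coordinates using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical YOLO bounding box for a leaf (pixel coordinates);
# take its centre as the grasp target.
x0, y0, x1, y1 = 300, 180, 340, 240
u, v = (x0 + x1) / 2, (y0 + y1) / 2
target = pixel_to_camera_xyz(u, v, 0.45, fx=610.0, fy=610.0, cx=320.0, cy=240.0)
print(target)  # 3D point in the camera frame, metres
```

The resulting point would then be transformed into the arm's base frame via the camera-to-base extrinsics before motion planning.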

Simulated robotic arm for automated waste sorting and recycling, employing YOLO-based vision to detect and localize different types of garbage.

Personalized interactive robotic fitness instructor (NAO) integrating ML/CV pose estimation for position tracking and continuous autonomous guidance tailored to individual needs.
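One way continuous guidance can be driven from pose estimation is by computing joint angles from keypoints and counting exercise repetitions with hysteresis. This is an illustrative sketch; the keypoints and thresholds are made up, not taken from the NAO system:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 2D keypoints a-b-c,
    e.g. shoulder-elbow-wrist from a pose estimator."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

def count_reps(angles, down=70.0, up=150.0):
    """Count full flex/extend cycles with two thresholds (hysteresis)
    so jitter near a single threshold is not counted as extra reps."""
    reps, flexed = 0, False
    for a in angles:
        if a < down:
            flexed = True
        elif a > up and flexed:
            reps += 1
            flexed = False
    return reps

# Two simulated bicep-curl cycles of elbow angles (degrees):
print(count_reps([160, 120, 60, 100, 155, 140, 65, 90, 158]))  # → 2
```

The same angle stream could also drive corrective feedback, e.g. prompting the user when the measured range of motion falls short of the target.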


Machine Learning and Computer Vision

description

description

description


Extended Reality

description

description

description

description

description

description

description