Welcome!
I am a Senior Autonomy Engineer at Plus, where I work on advanced perception and control systems for autonomous long-haul trucks. I recently completed my Ph.D. in Electrical and Computer Engineering at New York University, where I conducted research in the Agile Robotics and Perception Lab (ARPL). My work focuses on building autonomous systems that operate safely and reliably in the real world, with an emphasis on perception and optimal control for high-speed navigation in complex, GPS-denied environments. Before my Ph.D., I studied computer engineering in Padova with research experiences in Taiwan and Switzerland, and worked as a research engineer at FlexSight, developing navigation systems for agricultural and medical robots. My research has been validated in DARPA and Army Research Lab field trials and published in leading IEEE venues, resulting in multiple journal articles and U.S. patents.
Ph.D. Research
Autonomous drones are becoming indispensable in critical missions such as search and rescue and disaster response. Yet today's systems still require expert human pilots because autonomy fails where it matters most: in complex, GPS-denied, and rapidly changing environments. GPS signals vanish, maps become outdated, and controllers tuned in the lab collapse when conditions shift. During my Ph.D., I rebuilt the autonomy stack around a simple premise: the vehicle should rely on what it can learn and observe in the moment, rather than on fixed models, prebuilt maps, or global references.
Adaptive dynamics and control. Quadrotor dynamics change continuously with wind, payload variations, and hardware wear. These effects are difficult to capture with traditional physics-based models that assume fixed and known conditions. The first step in my research was to remove this assumption and design a system that learns and adapts its dynamics online. I developed a physics-informed model that corrects itself in real time from prediction errors, allowing the controller to remain accurate as conditions evolve. To further accelerate learning, I introduced an uncertainty-aware strategy that guides the controller toward regions of the state space that are most informative, producing more stable gradients and faster adaptation. As a result, the vehicle can stabilize within seconds, even when carrying a 30% heavier payload, flying with swapped propellers, or facing strong gusts of wind.
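To make the idea concrete, here is a minimal sketch of online dynamics correction, with made-up dynamics, feature choices, and step sizes (an illustration of the principle, not my flight code): a nominal physics model is combined with a learned residual, and the residual takes a normalized gradient step on the one-step prediction error at every cycle.

```python
# Minimal sketch of online dynamics correction (illustrative only):
# nominal physics model + learned linear residual, updated every step
# from the one-step prediction error with a normalized gradient (NLMS) rule.
import numpy as np

class OnlineResidualModel:
    def __init__(self, state_dim, input_dim, lr=0.5):
        # Residual acts on [state, input, 1]; it starts at zero, so the model
        # initially coincides with the nominal physics prediction.
        self.W = np.zeros((state_dim, state_dim + input_dim + 1))
        self.lr = lr

    def features(self, x, u):
        return np.concatenate([x, u, [1.0]])

    def predict(self, x, u, nominal_step):
        # Physics-informed prediction: nominal model + learned residual.
        return nominal_step(x, u) + self.W @ self.features(x, u)

    def update(self, x, u, x_next, nominal_step):
        # Normalized gradient step on the squared one-step prediction error.
        phi = self.features(x, u)
        err = x_next - self.predict(x, u, nominal_step)
        self.W += self.lr * np.outer(err, phi) / (phi @ phi + 1e-9)
        return np.linalg.norm(err)

# Toy 1D example: the true vehicle is 30% heavier than the nominal model
# assumes (an unmodeled payload); the residual absorbs the mismatch online.
def step(x, u, mass, dt=0.02):
    pos, vel = x
    acc = u[0] / mass - 9.81
    return np.array([pos + vel * dt, vel + acc * dt])

model = OnlineResidualModel(state_dim=2, input_dim=1)
x = np.array([0.0, 0.0])
for t in range(500):
    u = np.array([9.81 + np.sin(0.05 * t)])        # excitation around nominal hover
    x_next = step(x, u, mass=1.3)                  # "true" heavier vehicle
    model.update(x, u, x_next, lambda x, u: step(x, u, mass=1.0))
    x = x_next
```

On the real quadrotor the residual model and the uncertainty-aware excitation are far richer, but the online-correction loop follows the same pattern.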
Reactive collision avoidance. Accurate dynamics provide the foundation for stable flight, but they are not enough to guarantee safety when maps quickly become outdated. In cluttered and rapidly changing environments, a map can be obsolete the moment it is built, leaving the vehicle exposed to collisions. To address this, I replaced the conventional map-plan-track pipeline with a reactive collision-avoidance strategy that enforces safety directly from perception. Raw stereo depth is densified with monocular depth estimation and scale alignment to recover reliable geometry. From this, the system computes time-to-collision and extracts only the most critical points, which are injected as control barrier function constraints into the controller. Updated at every perception cycle, these constraints allow the quadrotor to react instantly to obstacles while maintaining real-time performance.
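The sketch below shows, under strong simplifying assumptions (static obstacle points, a double-integrator model, hand-picked gains and thresholds), how such a constraint construction can look: depth points are ranked by time-to-collision, and the most critical ones become linear control barrier function constraints on the commanded acceleration. It is a toy illustration, not the deployed pipeline.

```python
# Toy sketch: from relative obstacle points to CBF constraints A @ a <= b
# on the commanded acceleration a (double integrator, static obstacles).
import numpy as np

def time_to_collision(p_rel, v_rel):
    # p_rel: obstacle point relative to the vehicle; v_rel: its relative velocity.
    closing_speed = -p_rel @ v_rel / (np.linalg.norm(p_rel) + 1e-9)
    if closing_speed <= 0.0:
        return np.inf                        # not closing in on this point
    return np.linalg.norm(p_rel) / closing_speed

def cbf_constraints(points, vel, r_safe=0.5, alpha1=2.0, alpha2=2.0, k=5):
    # Barrier h(p) = ||p||^2 - r_safe^2 for each relative point; a second-order
    # CBF condition yields a linear inequality on acceleration that keeps h >= 0.
    ttc = np.array([time_to_collision(p, -vel) for p in points])
    critical = np.argsort(ttc)[:k]           # keep only the k most critical points
    A, b = [], []
    for p in points[critical]:
        h = p @ p - r_safe ** 2
        h_dot = -2.0 * p @ vel
        A.append(2.0 * p)                    # left-hand side: 2 p.a must stay below b
        b.append(2.0 * vel @ vel + alpha1 * h_dot + alpha2 * h)
    return np.array(A), np.array(b)          # safety requires A @ a <= b

# Hypothetical usage: points from densified stereo depth, in the body frame.
rng = np.random.default_rng(0)
points = rng.uniform(-5.0, 5.0, size=(200, 3))
vel = np.array([4.0, 0.0, 0.0])
A, b = cbf_constraints(points, vel)
a_des = np.array([2.0, 0.0, 0.0])            # acceleration requested by the controller
feasible = np.all(A @ a_des <= b)            # a small QP would project a_des onto this set
```

Keeping only the most critical points bounds the number of constraints, which is what makes rebuilding them at every perception cycle cheap enough for real-time flight.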
Instantaneous relative navigation. Even with adaptation and reactivity, any autonomy stack ultimately fails if localization drifts. In GPS-denied flight, global horizontal position and yaw are unobservable, which causes objectives defined in a world frame to collapse over time. My solution was to abandon the global frame altogether and introduce instantaneous relative navigation. Instead of relying on absolute position, the vehicle fixes its frame to gravity and plans only from directly observable quantities such as attitude, altitude, velocity, and objects in view. In search mode, this allows the quadrotor to traverse unknown environments without accumulating drift. Once a target is detected, the frame is anchored to it, and the system transitions seamlessly into pursuit. In large-scale field trials, the vehicle tracked ground targets at more than 90 km/h and 60° pitch angles, consistently reacquiring them even under intermittent detections and degraded sensing.
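A small sketch of the relative-state idea (frame conventions and variable names here are my illustrative assumptions): yaw is discarded so the reference frame stays anchored to gravity, the planner only ever sees directly observable quantities, and a detected target is simply appended to that state to switch from search to pursuit.

```python
# Toy sketch of an instantaneous, gravity-anchored relative state (illustrative).
import numpy as np

def gravity_aligned_rotation(R_world_body):
    # Drop yaw: the frame's z-axis stays aligned with gravity and its x-axis is
    # the horizontal projection of the current body x-axis (the vehicle heading).
    x_b = R_world_body[:, 0]
    x_h = np.array([x_b[0], x_b[1], 0.0])
    x_h /= np.linalg.norm(x_h) + 1e-9
    z_h = np.array([0.0, 0.0, 1.0])
    y_h = np.cross(z_h, x_h)
    return np.column_stack([x_h, y_h, z_h])   # horizon-frame axes, in world coords

def relative_state(R_world_body, altitude, v_body, target_in_body=None):
    # Only directly observable quantities: attitude, altitude, velocity, target.
    R_h = gravity_aligned_rotation(R_world_body)
    v_h = R_h.T @ (R_world_body @ v_body)      # body velocity in the horizon frame
    state = {"altitude": altitude, "velocity": v_h, "mode": "search"}
    if target_in_body is not None:
        # Anchor the frame to the detection and switch to pursuit: the objective
        # becomes driving this relative vector to a desired offset, no map needed.
        state["target"] = R_h.T @ (R_world_body @ target_in_body)
        state["mode"] = "pursuit"
    return state

# Hypothetical usage: level attitude, 15 m/s forward, target seen ahead and below.
print(relative_state(np.eye(3), altitude=20.0,
                     v_body=np.array([15.0, 0.0, 0.0]),
                     target_in_body=np.array([10.0, 2.0, -5.0])))
```

Because the relative state never references a global origin, there is no drift to accumulate: every planning step starts from what the vehicle can observe right now.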
Together, these advances form an autonomy stack that is adaptive, reactive, and relative. By continuously updating its dynamics, guaranteeing safety directly from perception, and navigating without reliance on global references, the system achieves reliable high-speed flight in environments where traditional pipelines fail. Validated on embedded compute in forests and urban compounds, my Ph.D. research demonstrates that safe and agile autonomy is not just possible in controlled settings but deployable in the real world.
Latest News
- Jan 2026 Paper accepted at ICRA 2026 (arXiv) 💪🏼
- Nov 2025 Journal paper accepted at Transactions on Field Robotics (arXiv) 💪🏼
- Oct 2025 Joined PlusAI
- Oct 2025 Ph.D. defense: Adaptive and Reactive Visual Autonomy for Safe Agile Flight in Unstructured Environments
- May 2025 Oral presentation at ICRA 2025 (Atlanta, USA) ✈️
- Jan 2025 Paper accepted at ICRA 2025 (arXiv) 💪🏼
- Jan 2025 Paper accepted at ICRA 2025 (arXiv) 💪🏼
- May 2024 Oral presentation at ICRA 2024 (Yokohama, Japan) ✈️
- Jan 2024 Paper accepted at ICRA 2024 (arXiv) 💪🏼
- Nov 2023 Journal paper accepted at Transactions on Robotics (arXiv) 💪🏼
- Oct 2023 Paper featured on IEEE Spectrum 📺
- Oct 2023 Paper accepted at ICAR 2023 💪🏼
- Jun 2023 AutoCharge featured on IEEE Spectrum 📺
- May 2023 Oral presentation at ICRA 2023 (London, UK) ✈️
- May 2023 Recipient of the Dr. Li Annual ECE Publication Award
- Mar 2023 Journal paper accepted at Annual Reviews in Control (link) 💪🏼
- Jan 2023 AutoCharge and GaPT accepted at ICRA 2023 💪🏼💪🏼
- Oct 2022 Virtual presentation at IROS 2022 📺
- Jun 2022 PI-TCN accepted at RAL + IROS 2022 💪🏼
- May 2022 Oral presentation at ICRA 2022 (Philadelphia, USA) ✈️
- Jan 2022 AutoTune accepted at RAL + ICRA 2022 💪🏼
- Aug 2021 Joined Agile Robotics and Perception Lab
