Sean Xia is a Perception Machine Learning Engineer at Aurora Innovation with over four years of experience in autonomous driving. He specializes in developing robust perception systems, with a focus on sensor fusion and machine learning to improve the safety and reliability of self-driving vehicles. At Aurora, Sean played a key role in the launch of the company’s first commercial driverless trucking service in the U.S. He co-authored a WACV 2025 paper on detecting and correcting sensor misalignment for long-range perception and contributed to SpotNet, a vision-based 3D object detection framework optimized for sparse LiDAR. Sean holds a Master’s degree in Computer Science from the University of California, San Diego. He is passionate about solving real-world problems through applied research and turning engineering efforts in autonomous systems into impactful, deployed technology.
The Pop in Your Job – What drives you? Why do you love your job?
What drives me is the opportunity to contribute to technology that has a tangible, positive impact on society. At Aurora Innovation, we’re not just developing autonomous driving systems; we’re striving to make transportation safer, more efficient, and more accessible for everyone. Being part of the team that launched the first commercial driverless trucking service in the U.S. has been incredibly rewarding. It’s inspiring to see our work transition from development to real-world applications, knowing that we’re addressing critical challenges like driver shortages and supply chain inefficiencies. Collaborating with a diverse group of experts who share a commitment to innovation and safety makes each day fulfilling. I take pride in knowing that our efforts are paving the way for a future where technology enhances daily life in meaningful ways.
Case Study
Monday, June 30
09:30 am - 10:00 am
Live in San Francisco
Reliable long-range perception is a fundamental challenge for autonomous vehicles, demanding both high semantic understanding and precise distance estimation. In this presentation, we introduce SpotNet: a fast, single-stage, image-centric yet LiDAR-anchored approach to long-range 3D object detection. Based on our recently published research, SpotNet efficiently combines image and LiDAR data to deliver high accuracy with very sparse LiDAR input, achieving superior scalability compared to traditional BEV (bird’s-eye-view) methods. By anchoring predictions directly to LiDAR points, SpotNet bypasses explicit distance regression, enabling seamless transfer across image resolutions without retraining. This architecture highlights a new direction in efficient, high-performance sensor fusion, perfectly aligned with the needs of scalable, production-ready autonomous systems.
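The core idea above, anchoring predictions to LiDAR points so that depth comes from the sensor rather than from regression, can be illustrated with a minimal sketch. This is not the published SpotNet implementation; the function names (`project_points`, `anchor_predictions`) and the detection `head` are hypothetical, and the pipeline is simplified to one camera with points already in the camera frame.

```python
import numpy as np

def project_points(points_xyz, K):
    """Project 3D LiDAR points (camera frame) onto the image plane
    using the pinhole intrinsics K. Returns (N, 2) pixel coords."""
    uvw = points_xyz @ K.T            # (N, 3) homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

def anchor_predictions(points_xyz, pixel_uv, feature_map, head):
    """For each LiDAR point, sample the image feature at its projected
    pixel and predict class scores plus a 3D box *offset* relative to
    that anchor point. Because the anchor carries the range, no
    absolute distance regression is needed, and the scheme is
    independent of image resolution."""
    h, w, _ = feature_map.shape
    u = np.clip(pixel_uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(pixel_uv[:, 1].astype(int), 0, h - 1)
    feats = feature_map[v, u]          # (N, C) per-point image features
    scores, offsets = head(feats)      # hypothetical detection head
    centers = points_xyz + offsets     # depth inherited from the anchor
    return scores, centers
```

The key property the sketch shows: changing the image resolution only rescales the projection and feature sampling, while the 3D geometry lives entirely in the LiDAR anchors, which is why such a detector can transfer across resolutions without retraining.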
In this session, you will learn more about: