Perception Engineer at Origin
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

24 Apr, 26

Salary

0.0

Posted On

24 Jan, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python 3.x, C++17/20, ROS 2, Deep-Learning Vision, Point-Cloud Processing, Camera-LiDAR Fusion, Geometric Scene Understanding, Sensor Fusion, NVIDIA Jetson, TensorRT, ONNX, RANSAC, ICP, PCL, Open3D, Experiment Tracking

Industry

Robotics Engineering

Description
As a Perception Engineer at Origin (formerly 10xconstruction), you will help our autonomous drywall-finishing robots "see" the job-site. You'll design and deploy perception pipelines (camera + LiDAR fusion, deep-learning vision models, and point-cloud geometry) to give the robot the awareness it needs.

Key Responsibilities
- Develop and deploy 3D perception components for geometric scene understanding using depth sensors, LiDAR, and RGB cameras.
- Build ROS 2 nodes that process and interpret spatial data (point clouds, depth maps, image streams) for environment modeling and task planning.
- Train and integrate deep-learning models for 3D semantic understanding, including surface analysis and object segmentation.
- Design robust sensor fusion strategies that combine visual, inertial, and spatial data for scene reconstruction and robot localization.
- Benchmark and optimize perception models for deployment on edge compute platforms (e.g., NVIDIA Jetson) using tools like TensorRT or ONNX.
- Collect and curate high-quality datasets (real and synthetic); automate training pipelines and experiment tracking.
- Collaborate across robotics teams (manipulation, navigation, cloud) to deliver production-ready perception stacks for autonomous operation in dynamic construction environments.

Qualifications & Skills
- Solid grasp of linear algebra, probability, and geometry; coursework or projects in computer vision or robotics perception.
- Proficiency in Python 3.x and C++17/20; comfortable with git and CI workflows.
- Experience with ROS 2 (rclcpp / rclpy) and custom message / launch setups.
- Familiarity with deep-learning vision (PyTorch or TensorFlow): classification, detection, or segmentation.
- Hands-on work with point-cloud processing (PCL, Open3D); knowing when to apply voxel grids, KD-trees, RANSAC, or ICP (illustrative sketches follow at the end of this description).
- Bonus: exposure to camera–LiDAR calibration or real-time optimization libraries (Ceres, GTSAM).

Why Join Us?
- Work side by side with founders and senior engineers to redefine robotics in construction.
- Build tech that replaces dangerous, repetitive wall-finishing labor with intelligent autonomous systems.
- Help shape not just a product but an entire company, and see your code on real robots at active job-sites.
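To give candidates a concrete feel for the point-cloud primitives named above (voxel downsampling, RANSAC plane fitting, ICP registration), here is a minimal, illustrative Open3D sketch. It is not Origin's production code; the file paths, thresholds, and voxel sizes are placeholders.

```python
# Illustrative sketch only. Assumes Open3D is installed and two overlapping
# scans exist at the (hypothetical) paths below.
import numpy as np
import open3d as o3d


def find_dominant_plane(pcd, dist=0.01):
    """Fit a plane with RANSAC and split the cloud into plane / non-plane points."""
    plane_model, inlier_idx = pcd.segment_plane(
        distance_threshold=dist, ransac_n=3, num_iterations=1000)
    plane = pcd.select_by_index(inlier_idx)
    rest = pcd.select_by_index(inlier_idx, invert=True)
    return plane_model, plane, rest


def align_scans(source, target, voxel=0.02):
    """Voxel-downsample two scans and register them with point-to-point ICP."""
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel * 2, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation


if __name__ == "__main__":
    scan_a = o3d.io.read_point_cloud("scan_a.pcd")  # placeholder path
    scan_b = o3d.io.read_point_cloud("scan_b.pcd")  # placeholder path
    (a, b, c, d), wall, clutter = find_dominant_plane(scan_a.voxel_down_sample(0.02))
    print(f"Dominant plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")
    print("ICP transform:\n", align_scans(scan_a, scan_b))
```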
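Likewise, the edge-deployment step (exporting a model so it can be benchmarked with TensorRT or an ONNX runtime on a Jetson) can be pictured with a toy PyTorch-to-ONNX export. The model, input resolution, and output file name below are placeholders, not part of Origin's actual stack.

```python
# Illustrative sketch only: export a toy segmentation head to ONNX as the first
# step of an ONNX/TensorRT deployment path. All names and shapes are placeholders.
import torch
import torch.nn as nn


class TinySegHead(nn.Module):
    """Stand-in for a vision backbone: conv -> ReLU -> 1x1 per-class head."""

    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.classifier = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))


model = TinySegHead().eval()
dummy = torch.randn(1, 3, 480, 640)  # placeholder camera resolution
torch.onnx.export(
    model, dummy, "seg_head.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17)
# The resulting seg_head.onnx can then be profiled or converted on a Jetson,
# e.g. with TensorRT's trtexec CLI.
```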
Responsibilities
Develop and deploy 3D perception components for autonomous drywall-finishing robots. Collaborate across robotics teams to deliver production-ready perception stacks for dynamic construction environments.
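As an illustration of the kind of component this involves, a minimal ROS 2 (rclpy) node that consumes a point-cloud stream might look like the sketch below. The topic name "/points" and the sensor_msgs_py conversion are assumptions made for the example, not a description of Origin's actual stack.

```python
# Illustrative sketch only: a minimal rclpy node subscribing to sensor_msgs/PointCloud2.
# Assumes a recent ROS 2 distribution with the sensor_msgs_py helpers installed.
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2


class PointCloudListener(Node):
    def __init__(self):
        super().__init__("pointcloud_listener")
        # "/points" is a placeholder topic name.
        self.create_subscription(PointCloud2, "/points", self.on_cloud, 10)

    def on_cloud(self, msg: PointCloud2):
        # Convert the message into an N x 3 float array of XYZ points.
        pts = np.array(
            [(p[0], p[1], p[2]) for p in point_cloud2.read_points(
                msg, field_names=("x", "y", "z"), skip_nans=True)],
            dtype=np.float32)
        self.get_logger().info(f"Received cloud with {len(pts)} points")


def main():
    rclpy.init()
    node = PointCloudListener()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    main()
```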