Software Engineer IV at OKSI
Clearwater, Florida, United States
Full Time


Start Date

Immediate

Expiry Date

21 May, 26

Salary

Not specified

Posted On

20 Feb, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Real-Time Computer Vision, Object Detection, Tracking, Recognition, GPS-Denied Navigation, Embedded Hardware, C++, Python, Edge Inference, Jetson, ARM, GPU Acceleration, Sensor Fusion, EO/IR, IMU, OpenCV

Industry

Defense and Space Manufacturing

Description
Position Overview

We are seeking a hands-on Real-Time Computer Vision Engineer to develop and deploy onboard perception and vision-navigation systems for unmanned aerial systems (UAS). This role focuses on operational autonomy (object detection, tracking, recognition, and GPS-denied navigation) running in real time on embedded hardware.

This is a deploy-to-flight role: not modeling, not simulation, not offline research. You will build, optimize, integrate, and fly vision systems under real-world constraints.

Key Responsibilities

• Develop and deploy real-time object detection, tracking, and recognition pipelines for airborne platforms.
• Implement vision-based navigation capabilities (visual odometry, feature tracking, obstacle detection, VIO integration).
• Optimize models for low-latency edge inference (Jetson, ARM, GPU acceleration).
• Integrate perception systems with flight controls and autonomy stacks.
• Support ground and flight testing; debug performance in live operational environments.
• Implement sensor fusion across EO/IR, IMU, GPS, and telemetry inputs.
• Transition prototype algorithms into reliable, production-ready systems.

Requirements

Basic Qualifications

• Bachelor's or Master's degree in Computer Science, Robotics, Electrical Engineering, or a related field.
• 4–8+ years of experience in real-time computer vision or autonomy systems.
• Strong C++ and/or Python proficiency.
• Experience deploying real-time vision algorithms to hardware platforms.
• Solid understanding of: object detection and multi-object tracking, feature detection and tracking, visual odometry or VIO, and real-time system optimization.
• Experience with OpenCV and PyTorch or TensorFlow.

Preferred Qualifications

• Experience with UAS or airborne autonomy systems.
• Experience with ROS/ROS2, PX4, MAVLink, or ArduPilot.
• CUDA, TensorRT, or hardware acceleration experience.
• EO/IR payload integration experience.
• SLAM, GPS-denied navigation, or contested environment experience.
• Experience supporting live flight testing.

Additional Requirements

• Must be eligible to obtain and maintain a DoD Secret clearance.
• Willingness to support flight test operations and travel as needed.
• To comply with U.S. Government export control regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. person as defined by law. A U.S. person includes a U.S. citizen, lawful permanent resident, or protected individual as defined by 8 U.S.C. 1324b(a)(3), or an individual otherwise eligible to obtain the required authorization from the U.S. Department of State.

We are an equal employment opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, or any other status protected by law. We provide reasonable accommodations for qualified individuals with disabilities in the application and hiring process. This employer participates in E-Verify.
Responsibilities
The engineer will develop and deploy real-time object detection, tracking, and recognition pipelines for unmanned aerial systems, with a focus on operational autonomy tasks such as GPS-denied navigation running on embedded hardware. Key tasks include optimizing models for low-latency edge inference, integrating perception systems with flight controls, and supporting live ground and flight testing.