<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>dab43521-cfa</externalid>
      <Title>Software Engineer, Robotics &amp; Autonomous Systems</Title>
      <Description><![CDATA[<p>In this role, you&#39;ll be a key contributor building production systems for robotics data collection, model training pipelines, and evaluation infrastructure. You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>
<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>
<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>
<li>Developing tools and systems for robotics data collection, teleoperation, and model evaluation</li>
<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>
<li>Building real-time systems for robotic control, sensor fusion, and perception pipelines</li>
<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>
<li>Collaborating with ML engineers and researchers to bring robotics research into production</li>
<li>Delivering features at high velocity while maintaining system reliability and performance</li>
</ul>
<p>Ideally, you have:</p>
<ul>
<li>3+ years of software engineering experience in robotics, autonomous vehicles, or related fields</li>
<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>
<li>Experience with React and modern frontend development for 3D interfaces</li>
<li>Practical experience with robotics frameworks (ROS/ROS2), simulation environments, or AV systems</li>
<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>
<li>Experience with databases (MongoDB, PostgreSQL) and data processing at scale</li>
<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>
<li>Strong communication skills and ability to operate with high autonomy</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with C++</li>
<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>
<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>
<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>
<li>Experience with ML model deployment and serving frameworks</li>
<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>
<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>
<li>Published research or open-source contributions in robotics or autonomous systems</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-$225,000 USD</Salaryrange>
      <Skills>Python, TypeScript, Node.js, React, C++, ROS/ROS2, simulation environments, AV systems, distributed systems, workflow orchestration, cloud infrastructure, databases, data processing, robotics hardware platforms, computer vision, SLAM, motion planning, imitation learning, autonomous vehicle data, lidar technologies, 3D data processing, ML model deployment, serving frameworks, teleoperation systems, VR interfaces, workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4618065005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d6fc00c5-564</externalid>
      <Title>Software Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re seeking a skilled Software Engineer to join our Robotics business unit, focused on solving the data bottleneck in Physical AI across Robotics, Autonomous Vehicles, and Computer Vision. As a key contributor, you&#39;ll own and architect large-scale data processing pipelines, build ML training and fine-tuning pipelines, and develop tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation.</p>
<p>In this role, you&#39;ll interact directly with robotics and AV stakeholders to understand their technical needs and drive product development. You&#39;ll also design comprehensive monitoring and evaluation frameworks for robotics models and data quality, and collaborate with ML engineers and researchers to bring robotics research into production.</p>
<p>To succeed, you&#39;ll need at least 6 years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems. You should have strong programming skills in Python and TypeScript/Node.js for production systems, experience with React and modern frontend development for 3D interfaces, and concurrent and real-time systems expertise.</p>
<p>We&#39;re looking for someone who can deliver features at high velocity while maintaining system reliability and performance, and has a track record of working with cross-functional teams including ML engineers, researchers, and customers. Strong communication skills and the ability to operate with high autonomy are essential.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript/Node.js, React, Concurrent and real-time systems, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing at large scale, C++, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment and serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies to power leading models.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4612282005</Applyto>
      <Location>Argentina; Uruguay</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ef6605f2-fe0</externalid>
      <Title>Software Engineer, Robotics</Title>
      <Description><![CDATA[<p>We&#39;re looking for a skilled Software Engineer to join our Robotics business unit. As a key contributor, you&#39;ll build production systems for robotics data collection, model training pipelines, and evaluation infrastructure. You&#39;ll have the opportunity to own critical parts of our robotics platform, work directly with cutting-edge robotics and AV customers, and shape the future of embodied AI systems.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Owning and architecting large-scale data processing pipelines for robotics and autonomous vehicle datasets</li>
<li>Building ML training and fine-tuning pipelines using Scale&#39;s robotics data</li>
<li>Working across backend (Python, Node.js, C++) and frontend (React, TypeScript) stacks to build end-to-end solutions</li>
<li>Developing tools and real-time systems for robotics data collection, teleoperation, model evaluation, data curation, and data annotation</li>
<li>Interacting directly with robotics and AV stakeholders to understand their technical needs and drive product development</li>
<li>Designing comprehensive monitoring and evaluation frameworks for robotics models and data quality</li>
</ul>
<p>Ideal candidates will have:</p>
<ul>
<li>3+ years of high-proficiency software engineering experience, with a strong background in complex systems and the ability to independently research, analyze, and unblock hard technical problems</li>
<li>Strong programming skills in Python and TypeScript/Node.js for production systems</li>
<li>Experience with React and modern frontend development for 3D interfaces</li>
<li>Experience with concurrent and real-time systems, with special attention to timing constraints</li>
<li>Understanding of distributed systems, workflow orchestration, and cloud infrastructure (AWS, Temporal, Kubernetes, Docker)</li>
<li>Experience with databases (MongoDB, PostgreSQL) and data processing at large scale</li>
<li>Track record of working with cross-functional teams including ML engineers, researchers, and customers</li>
<li>Strong communication skills and ability to operate with high autonomy</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience with C++</li>
<li>Experience with robotics hardware platforms (robotic arms, mobile robots, perception systems) with a focus on time synchronization</li>
<li>Background in computer vision, SLAM, motion planning, or imitation learning</li>
<li>Familiarity with autonomous vehicle data, lidar technologies, or 3D data processing</li>
<li>Experience with ML model deployment and serving frameworks</li>
<li>Knowledge of teleoperation systems (ALOHA, UMI, hand tracking) or VR interfaces</li>
<li>Experience with workflow orchestration systems (Temporal, Airflow)</li>
<li>Published research or open-source contributions in robotics or autonomous systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, TypeScript, Node.js, C++, React, Distributed systems, Workflow orchestration, Cloud infrastructure, Databases, Data processing, Robotics hardware platforms, Computer vision, SLAM, Motion planning, Imitation learning, Autonomous vehicle data, Lidar technologies, 3D data processing, ML model deployment, Serving frameworks, Teleoperation systems, VR interfaces, Workflow orchestration systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions, providing high-quality data and full-stack technologies.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4655050005</Applyto>
      <Location>Mexico City, MX</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>eccf1031-6f3</externalid>
      <Title>Senior Computer Vision Engineer, Space</Title>
      <Description><![CDATA[<p>We are seeking a Senior Computer Vision Engineer to join our rapidly growing team in Washington DC. The ideal candidate will have a strong background in computer vision and machine learning, with experience in developing and implementing computer vision algorithms for various spacecraft efforts in all orbital regimes.</p>
<p>The Senior Computer Vision Engineer will be responsible for proposing and prototyping innovative solutions to solve real-world problems, developing and maintaining core libraries and runtime applications, integrating classical and geometric methods in computer vision with ML methods, and working with space vehicle CV software and hardware subsystems for various spacecraft efforts in all orbital regimes.</p>
<p>The successful candidate will have a Master&#39;s or Ph.D. in Machine Learning, Robotics, or Computer Science, with a strong background in computer vision and machine learning. They will also have experience in one or more of the following: object detection, object tracking, instance segmentation, semantic segmentation, semantic change detection, natural feature tracking (NFT), visual odometry, SLAM, multi-view geometry, structure from motion, 3D geometry, discriminative correlation filters, stereo, neural 3D reconstruction, multi-band sensor processing, RGB-D and LIDAR sensor fusion.</p>
<p>The Senior Computer Vision Engineer will work closely with related teams, including Sensors, GNC, Avionics, Systems, Flight Software, Mission Operations, and Ground Software.</p>
<p>The ideal candidate will have excellent communication and organizational skills, including documentation and training material, and will be able to work effectively in a fast-paced environment with tight deadlines.</p>
<p>The salary range for this role is $191,000-$253,000 USD, and highly competitive equity grants are included in the majority of full-time offers.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>Machine Learning, Robotics, Computer Science, Computer Vision, Object Detection, Object Tracking, Instance Segmentation, Semantic Segmentation, Semantic Change Detection, Natural Feature Tracking (NFT), Visual Odometry, SLAM, Multi-view Geometry, Structure from Motion, 3D Geometry, Discriminative Correlation Filters, Stereo, Neural 3D Reconstruction, Multi-band Sensor Processing, RGB-D and LIDAR Sensor Fusion, Matlab, Simulink, Python, Go, C++, Linux systems, OpenCV</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5016343007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cb57c7a1-7e6</externalid>
      <Title>Senior Computer Vision Engineer, Space</Title>
      <Description><![CDATA[<p>We are looking for a Senior Computer Vision Engineer to join our rapidly growing team in Costa Mesa, CA. In this role, you will be responsible for working on and understanding the design of all perception subsystems, including but not limited to hardware sensors and advanced processing platforms, navigation algorithms, flight software implementation, subsystem integration &amp; test (I&amp;T), and vehicle I&amp;T.</p>
<p>The computer vision engineering team will work closely with related teams, including Sensors, GNC, Avionics, Systems, Flight Software, Mission Operations, and Ground Software. The Computer Vision Engineer will work on algorithm design, truth and physics modeling, scene rendering, simulation, and analysis for a wide variety of spacecraft and space missions, including but not limited to LEO, MEO, GEO, Reentry, and RPOD (Rendezvous Proximity Operations and Docking).</p>
<p>The CV Engineer will help lead successful implementation, validation, and CV operations of Anduril’s fleet of space vehicles. This role is directly tied to ongoing, funded programs within Anduril’s Space Business Line. These programs require building and fielding resilient, software-defined spacecraft systems across numerous mission threads. We work with mission partners and customers to deploy reliable and robust capabilities on operationally relevant fielding timelines to meet complex challenges across the DOD and IC.</p>
<p>The position requires a strong background in computer vision, machine learning, and software development, with experience in developing and implementing computer vision algorithms for space-based applications. The ideal candidate will have a deep understanding of computer vision principles, including object detection, tracking, and recognition, as well as experience with software development in languages such as C++ and Python.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Propose and prototype innovative solutions to solve real-world problems, leveraging the latest state-of-the-art techniques in the field</li>
<li>Develop and maintain core libraries and runtime applications</li>
<li>Integrate classical and geometric methods in computer vision with ML methods</li>
<li>Work on space vehicle CV software and hardware subsystems for various spacecraft efforts in all orbital regimes, and work closely with partners to ensure successful implementation</li>
<li>Develop modern, software-defined approaches to autonomous spacecraft operations with maneuvering capabilities to successfully accomplish mission objectives</li>
<li>Develop appropriate test plans and procedures to validate the CV system during ground checkout, on-orbit commissioning and operations</li>
<li>Collaborate across multiple teams to plan, build, and test complex functionality</li>
<li>Coordinate with end-users, other operators and customers to turn needs into features while balancing user experience with engineering constraints</li>
<li>Support challenging schedules during ground testing, launch windows and on-orbit operations of the spacecraft systems</li>
<li>Design of flight software and firmware, algorithms, and simulation products</li>
<li>Development of space vehicle autonomy tools for dynamic space operations</li>
<li>Test process development and execution</li>
<li>Define automated fault detection and responses</li>
<li>Provide hardware-in-the-loop and monte-carlo simulation capabilities</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>MS or PhD in Machine Learning, Robotics, Computer Science, or Image Science with an emphasis on Computer Vision</li>
<li>BS in Computer Science, Machine Learning, Electrical Engineering, or related field</li>
<li>Advanced professional experience developing and benchmarking ML algorithms on large-scale datasets</li>
<li>High proficiency in C++ development in a Linux environment</li>
<li>Experience in one or more of the following: Object Detection, Object Tracking, Instance Segmentation, Semantic Segmentation, Semantic Change Detection, Natural Feature Tracking (NFT)</li>
<li>Experience in one or more of the following: Visual Odometry, SLAM, Multi-view Geometry, Structure from Motion, 3D Geometry, Discriminative Correlation Filters, Stereo, Neural 3D Reconstruction, Multi-band sensor processing, RGB-D and LIDAR Sensor Fusion</li>
<li>Ability to quickly understand and navigate complex systems and detailed requirements</li>
<li>Familiarity with terminal guidance, rendezvous proximity operations and docking, orbital mechanics with propulsive spacecraft, and/or spacecraft/missile GNC</li>
<li>Clear communication and organizational skills including documentation and training material</li>
<li>Currently possesses and is able to maintain an active U.S. Top Secret security clearance</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with Matlab, Simulink, Python, Go, C++ and/or Linux systems</li>
<li>A desire to work on critical software and hardware designs in the space domain</li>
<li>Strong preference for candidates with experience in computer vision, vision navigation, image processing, feature tracking, SLAM, OpenCV, and NFT</li>
<li>Experience testing CV subsystems in laboratory environments that mimic the space environmental constraints</li>
<li>Experience with orbital mechanics and resident space object tracking capabilities</li>
<li>Experience conducting spacecraft operations and satellite command and control with an emphasis on system reliability and uptime</li>
<li>Experience with testing/validation leveraging FlatSats, Hardware-in-the-Loop testbeds and digital spacecraft simulators through nominal and fault scenarios</li>
<li>Experience with computer vision and perception algorithms to support GNC operations</li>
<li>Experience developing 3-DOF simplified and 6-DOF high-fidelity dynamics simulation models used for GNC systems analysis and validation</li>
<li>Exposure to US satellite operations policy and constraints for relevant mission threads in all orbits</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>Machine Learning, Robotics, Computer Science, Image Science, C++, Python, Object Detection, Object Tracking, Instance Segmentation, Semantic Segmentation, Semantic Change Detection, Natural Feature Tracking (NFT), Visual Odometry, SLAM, Multi-view Geometry, Structure from Motion, 3D Geometry, Discriminative Correlation Filters, Stereo, Neural 3D Reconstruction, Multi-band sensor processing, RGB-D and LIDAR Sensor Fusion, Matlab, Simulink, Go, Linux systems, Computer vision, Vision navigation, Image processing, Feature tracking, OpenCV</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5016340007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>73f04783-b19</externalid>
      <Title>Robotics Engineer, Maritime</Title>
      <Description><![CDATA[<p>As a Robotics Engineer in Anduril&#39;s Maritime Division, you will contribute to the delivery of vehicle perception and planning capability integrated into our products. This includes systems analysis, sensor selection and integration, perception architecture and implementation, motion planning, health management, behavior analysis, simulation and test infrastructure, and interfaces with lower- and higher-level systems.</p>
<p>We expect Robotics Engineers to demonstrate end-to-end ownership of their projects, contributing as a member of a team to the rapid architecting, design, delivery, support, and evolution of next-generation autonomous platforms through their entire product life-cycle.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Implementing trusted safe navigation, collision avoidance, and situational awareness systems that balance constraints, restrictions, and requirements in a multi-stakeholder environment</li>
<li>Implementing scalable sub-systems including sensor processing, perception, tracking, motion planning, health management, anomaly detection, simulation, testing fixtures, and vehicle interfaces</li>
<li>Contributing to the development of existing software components across Anduril, with the aim of developing components that are re-usable across multiple Anduril product lines</li>
<li>Utilizing advanced techniques in computer vision, sensor fusion, and machine learning to enhance perception and planning capabilities of Anduril vehicles</li>
<li>Conducting thorough testing and validation of perception and planning algorithms through development and use of simulation and analysis of data from real-world experiments, including evolving the data analysis tooling</li>
<li>Collaborating with cross-functional teams, including software engineers, mechanical engineers, and systems engineers, to ensure effective system integration and testing</li>
<li>Traveling to co-locate with end-users and/or other teams up to 20% of the time</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Bachelor&#39;s degree in Robotics, Mechatronics, Computer Science, Engineering, or a relevant field</li>
<li>Experienced and proficient in C++ and/or Python software development</li>
<li>Familiarity with autonomous vehicle hardware and sensors such as radar, sonar, LIDAR, and cameras</li>
<li>Demonstrated knowledge of at least one of: computer vision, sensor fusion, SLAM, motion planning, machine learning</li>
<li>Experience in a senior perception or planning role for the delivery of a robotic system</li>
<li>Capacity to act as the technical owner for a system, including stakeholder engagement, requirements definition, roadmap management, team coordination, design, implementation, sustainment, and evolution</li>
<li>Ability to collaborate with stakeholders to define and implement robust validation and verification strategies for perception and planning modules</li>
<li>Capacity to learn and grow individually, while mentoring junior team members effectively, building team cohesion and capacity</li>
<li>Eligibility to obtain and maintain an active U.S. Secret security clearance</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with autonomous systems in the ground, air, maritime, or space domains</li>
<li>Experience with simulation tools and frameworks, such as Gazebo, Unity, or Unreal Engine, for algorithm validation and testing</li>
<li>Knowledge of safety standards and certification processes for autonomous systems</li>
<li>Familiarity with System Engineering concepts</li>
</ul>
<p>US Salary Range: $191,000-$253,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>C++, Python, Autonomous vehicle hardware, Sensor fusion, SLAM, Motion planning, Machine learning, Simulation tools, Gazebo, Unity, Unreal Engine, Safety standards, Certification processes, System Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells advanced military systems using AI-powered operating systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5091916007</Applyto>
      <Location>Boston, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9406d4ff-94e</externalid>
      <Title>Robotics Engineer, Maritime</Title>
      <Description><![CDATA[<p>We are seeking a Robotics Engineer to contribute to the delivery of vehicle perception and planning capability integrated into our products. This includes systems analysis, sensor selection and integration, perception architecture and implementation, motion planning, health management, behavior analysis, simulation and test infrastructure, and interfaces with lower- and higher-level systems.</p>
<p>As a Robotics Engineer, you will be responsible for implementing trusted safe navigation, collision avoidance, and situational awareness systems that balance constraints, restrictions, and requirements in a multi-stakeholder environment. You will also implement scalable sub-systems including sensor processing, perception, tracking, motion planning, health management, anomaly detection, simulation, testing fixtures, and vehicle interfaces.</p>
<p>In addition, you will contribute to the development of existing software components across Anduril, with the aim of developing components that are reusable across multiple Anduril product lines. You will utilize advanced techniques in computer vision, sensor fusion, and machine learning to enhance perception and planning capabilities of Anduril vehicles.</p>
<p>You will conduct thorough testing and validation of perception and planning algorithms through development and use of simulation and analysis of data from real-world experiments, including evolving the data analysis tooling. You will also collaborate with cross-functional teams, including software engineers, mechanical engineers, and systems engineers, to ensure effective system integration and testing.</p>
<p>This role requires a strong background in robotics, mechatronics, computer science, or engineering, with experience in C++ and/or Python software development. You should have familiarity with autonomous vehicle hardware and sensors such as radar, sonar, LIDAR, and cameras. Demonstrated knowledge of at least one of computer vision, sensor fusion, SLAM, motion planning, or machine learning is required.</p>
<p>Experience in a senior perception or planning role for the delivery of a robotic system is preferred. You should have the capacity to act as the technical owner for a system, including stakeholder engagement, requirements definition, roadmap management, team coordination, design, implementation, sustainment, and evolution. Ability to collaborate with stakeholders to define and implement robust validation and verification strategies for perception and planning modules is also required.</p>
<p>Eligibility to obtain and maintain an active U.S. Secret security clearance is necessary for this role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>C++, Python, Autonomous vehicle hardware, Radar, Sonar, LIDAR, Cameras, Computer vision, Sensor fusion, SLAM, Motion planning, Machine learning, Simulation tools and frameworks, Safety standards and certification processes for autonomous systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5051580007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>acbc634c-1c8</externalid>
      <Title>Robotics Software Engineer, Air Vehicle Autonomy</Title>
      <Description><![CDATA[<p>We are looking for software engineers and roboticists excited about creating a powerful autonomy software stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.</p>
<p>As a Robotics Software Engineer, you will write and maintain core libraries and services that perform critical functions for collaborative teams of robots. This includes motion deconfliction and contingency management of fast mover air vehicles.</p>
<p>You will own major feature development and rollout of large, complex features for our products, such as terminal-phase autonomy for various air vehicles and developing a test plan on live surrogates.</p>
<p>You will work closely with Anduril and 3rd party vehicle hardware teams, as well as operational subject matter experts (fighter pilots, UAV operators, etc.) to align on requirements during product development and iterate towards a final design.</p>
<p>Required qualifications include a BS in Robotics, Computer Science, Mechatronics, Electrical Engineering, Mechanical Engineering, or related field, and 3+ years experience with C++ or Rust in a Linux development environment.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>C++, Rust, Linux development environment, Robotics, Computer Vision, Motion Planning, SLAM, Controls, Estimation, Secure Communications, Python, Go, Embedded Systems, Multi-Agent Coordination, Complex Frame Transformation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril develops aerial and multi-domain robotic systems, with products like Fury (unmanned fighter jet) and Barracuda (air-breathing cruise missile).</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4673939007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7c36d505-6f8</externalid>
      <Title>Robotics Engineer</Title>
      <Description><![CDATA[<p>As a Robotics Engineer at Anduril Industries, you will contribute to the delivery of vehicle perception and planning capability integrated into our products. This includes systems analysis, sensor selection and integration, perception architecture and implementation, motion planning, health management, behavior analysis, simulation and test infrastructure, and interfaces with lower- and higher-level systems.</p>
<p>We expect Robotics Engineers to demonstrate end-to-end ownership of their projects, contributing as a member of a team to the rapid architecting, design, delivery, support, and evolution of next-generation autonomous platforms through their entire product life-cycle.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Implementing trusted safe navigation, collision avoidance, and situational awareness systems that balance constraints, restrictions, and requirements in a multi-stakeholder environment</li>
<li>Implementing scalable sub-systems including sensor processing, perception, tracking, motion planning, health management, anomaly detection, simulation, testing fixtures, and vehicle interfaces</li>
<li>Contributing to the development of existing software components across Anduril, with the aim of developing components that are re-usable across multiple Anduril product lines</li>
<li>Utilizing advanced techniques in computer vision, sensor fusion, and machine learning to enhance perception and planning capabilities of Anduril vehicles</li>
<li>Conducting thorough testing and validation of perception and planning algorithms through development and use of simulation and analysis of data from real-world experiments, including evolving the data analysis tooling</li>
<li>Collaborating with cross-functional teams, including software engineers, mechanical engineers, and systems engineers, to ensure effective system integration and testing</li>
<li>Traveling to co-locate with end-users and/or other teams up to 20% of the time</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Autonomous vehicle hardware and sensors, Computer vision, Sensor fusion, SLAM, Motion planning, Machine learning, Simulation tools and frameworks, Safety standards and certification processes for autonomous systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4972426007</Applyto>
      <Location>Melbourne, Victoria, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>269373be-68a</externalid>
      <Title>Robotics Engineer</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>We are seeking a skilled Robotics Engineer to join our Maritime Division. As a Robotics Engineer, you will contribute to the delivery of vehicle perception and planning capability integrated into our products.</p>
<p>This includes systems analysis, sensor selection and integration, perception architecture and implementation, motion planning, health management, behaviour analysis, simulation and test infrastructure, and interfaces with lower- and higher-level systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Implement trusted safe navigation, collision avoidance, and situational awareness systems that balance constraints, restrictions and requirements in a multi-stakeholder environment</li>
<li>Implement scalable sub-systems including sensor processing, perception, tracking, motion planning, health management, anomaly detection, simulation, testing fixtures, and vehicle interfaces</li>
<li>Contribute to the development of existing software components across Anduril, with the aim of developing components that are re-usable across multiple Anduril product lines</li>
<li>Utilize advanced techniques in computer vision, sensor fusion, and machine learning to enhance perception and planning capabilities of Anduril vehicles</li>
<li>Conduct thorough testing and validation of perception and planning algorithms through development and use of simulation and analysis of data from real-world experiments, including evolving the data analysis tooling</li>
<li>Collaborate with cross-functional teams, including software engineers, mechanical engineers, and systems engineers, to ensure effective system integration and testing</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Bachelor’s degree in Robotics, Mechatronics, Computer Science, Engineering, a relevant field, or equivalent experience</li>
<li>Experienced and proficient in C++ and/or Python software development</li>
<li>Familiarity with autonomous vehicle hardware and sensors such as radar, sonar, LIDAR, and cameras</li>
<li>Demonstrated knowledge of at least one of: computer vision, sensor fusion, SLAM, motion planning, machine learning</li>
<li>Experience in a senior perception or planning role for the delivery of a robotic system</li>
<li>Capacity to act as the technical owner for a system, including stakeholder engagement, requirements definition, roadmap management, team coordination, design, implementation, sustainment and evolution</li>
<li>Ability to collaborate with stakeholders to define and implement robust validation and verification strategies for perception and planning modules</li>
<li>Capacity to learn and grow individually, while mentoring junior team members effectively, building team cohesion and capacity</li>
<li>Eligible to obtain and maintain an Australian Government Negative Vetting 2 security clearance (NV2)</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience with autonomous systems in the ground, air, maritime or space domains</li>
<li>Experience with simulation tools and frameworks, such as Gazebo, Unity, or Unreal Engine, for algorithm validation and testing</li>
<li>Knowledge of safety standards and certification processes for autonomous systems</li>
<li>Familiarity with System Engineering concepts</li>
</ul>
<p><strong>Salary and Benefits</strong></p>
<p>The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full-time offers and are considered part of Anduril&#39;s total compensation package. Additionally, Anduril offers top-tier benefits for full-time employees, including:</p>
<ul>
<li>Healthcare Benefits - US Roles: Comprehensive medical, dental, and vision plans at little to no cost to you.</li>
<li>UK &amp; AUS Roles: We cover full cost of medical insurance premiums for you and your dependents.</li>
<li>IE Roles: We offer an annual contribution toward your private health insurance for you and your dependents.</li>
<li>Income Protection: Anduril covers life and disability insurance for all employees.</li>
<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December.</li>
<li>Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs.</li>
<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along with resources to support you and your partner from planning to parenting.</li>
<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching.</li>
<li>Additional work-life services, such as legal and financial support, are also available.</li>
<li>Professional Development: Annual reimbursement for professional development.</li>
<li>Commuter Benefits: Company-funded commuter benefits based on your region.</li>
<li>Relocation Assistance: Available depending on role eligibility.</li>
<li>Retirement Savings Plan - US Roles: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options.</li>
<li>UK &amp; IE Roles: Pension plan with employer match.</li>
<li>AUS Roles: Superannuation plan.</li>
</ul>
<p><strong>Protecting Yourself from Recruitment Scams</strong></p>
<p>Anduril is committed to maintaining the highest standards of integrity and transparency in our recruitment processes.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Autonomous vehicle hardware, Sensor fusion, SLAM, Motion planning, Machine learning, Simulation tools and frameworks, Safety standards and certification processes, System Engineering concepts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells advanced military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4961600007</Applyto>
      <Location>Sydney, New South Wales, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7b435142-624</externalid>
      <Title>Mission Software Engineer, Air Vehicle Autonomy, C++</Title>
      <Description><![CDATA[<p>We are looking for software engineers and roboticists excited about creating a powerful autonomy software stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.</p>
<p>The Air Dominance &amp; Strike team at Anduril develops aerial and multi-domain robotic systems. The team is responsible for taking products like Fury (unmanned fighter jet) and Barracuda (air-breathing cruise missile) from concept to product. The team also develops Lattice for Mission Autonomy, Anduril’s premier software platform that enables masses of Fury, Barracuda, and other first and third party robots to collaborate across various missions.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
<li>BS, MS, or PhD in Computer Science, Software Engineering, Mathematics, Physics, or related field</li>
<li>3+ years of production-grade C++ and/or Rust experience in a Linux development environment</li>
<li>Experience building software solutions involving significant amounts of data processing and analysis</li>
<li>Ability to quickly understand and navigate complex systems and established code bases</li>
<li>A desire to work on critical software that has a real-world impact</li>
<li>Travel up to 30% of time to build, test, and deploy capabilities in the real world</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Strong background with focus in Physics, Mathematics, and/or Motion Planning to inform modeling &amp; simulation (M&amp;S) and physical systems</li>
<li>Developing and testing multi-agent autonomous systems and deploying in real-world environments</li>
<li>Feature and algorithm development with an understanding of behavior trees</li>
<li>Developing software/hardware for flight systems and safety-critical functionality</li>
<li>Distributed communication networks and message standards</li>
<li>Knowledge of military systems and operational tactics</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>C++, Rust, Linux, Computer Vision, Motion Planning, SLAM, Controls, Estimation, Secure Communications, Physics, Mathematics, Behavior Trees, Flight Systems, Safety Critical Functionality, Distributed Communication Networks, Message Standards, Military Systems, Operational Tactics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril develops aerial and multi-domain robotic systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4673932007</Applyto>
      <Location>Costa Mesa, California, United States; Seattle, Washington, United States; Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>19b66bec-a6b</externalid>
      <Title>Research Engineer / Scientist (SLAM)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Research Engineer/Scientist to design, implement, and advance state-of-the-art simultaneous localization and mapping systems. This role is focused on modern SLAM techniques, both classical and learning-based, with an emphasis on scalable state estimation, sensor fusion, and long-term mapping in complex, dynamic environments.</p>
<p>As a Research Engineer/Scientist, you will:</p>
<ul>
<li>Design and implement modern SLAM systems for real-world environments, including visual, visual-inertial, lidar, or multi-sensor configurations.</li>
<li>Develop robust localization and mapping pipelines, including pose estimation, map management, loop closure, and global optimization.</li>
<li>Research and prototype learning-based or hybrid SLAM approaches that combine classical geometry with modern machine learning methods.</li>
<li>Build and maintain scalable state estimation frameworks, including factor graph optimization, filtering, and smoothing techniques.</li>
<li>Develop sensor fusion strategies that integrate cameras, IMUs, depth sensors, lidar, or other modalities to improve robustness and accuracy.</li>
<li>Analyze failure modes in real-world SLAM deployments (e.g., perceptual aliasing, dynamic scenes, drift) and design principled solutions.</li>
<li>Create evaluation frameworks, benchmarks, and metrics to measure SLAM accuracy, robustness, and performance across large datasets.</li>
<li>Optimize performance across the stack, including real-time constraints, memory usage, and compute efficiency, for large-scale and production systems.</li>
<li>Collaborate with reconstruction, simulation, and infrastructure teams to ensure SLAM outputs integrate cleanly with downstream world modeling and rendering pipelines.</li>
<li>Contribute to technical direction by proposing new research ideas, mentoring teammates, and helping define best practices for localization and mapping across the organization.</li>
</ul>
<p>We&#39;re looking for someone with 6+ years of experience working on SLAM, state estimation, robotics perception, or related areas. A strong foundation in probabilistic estimation, optimization, and geometric vision is required, as well as proficiency in Python and/or C++.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$250,000-$350,000 base salary (good-faith estimate for San Francisco Bay Area upon hire; actual offer based on experience, skills, and qualifications)</Salaryrange>
      <Skills>SLAM, state estimation, robotics perception, probabilistic estimation, optimization, geometric vision, Python, C++</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>World Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/worldlabs.ai.png</Employerlogo>
      <Employerdescription>World Labs builds foundational world models that can perceive, generate, reason, and interact with the 3D world.</Employerdescription>
      <Employerwebsite>https://worldlabs.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/worldlabs/jobs/4135311009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1044b51e-cc6</externalid>
      <Title>Senior Manager, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs. The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead teams across autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution.</li>
<li>Develop advanced perception algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks by integrating data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques.</li>
<li>Develop state estimation capabilities by designing and refining algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs.</li>
<li>Analyze and utilize sensor ICDs to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance by tuning and evaluating perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration by working closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings by leveraging synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams to ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing by contributing novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel Requirement – Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience.</li>
<li>Typically requires a minimum of 10 years of related experience with a Bachelor’s degree; or 9 years and a Master’s degree; or 7 years with a PhD; or equivalent work experience.</li>
<li>7+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D.</li>
<li>2+ years of people leadership experience.</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models.</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches.</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications.</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs.</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams.</li>
<li>Ability to obtain a SECRET clearance.</li>
</ul>
<p><strong>Preferences:</strong></p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems.</li>
<li>Experience with ML frameworks such as PyTorch or Tensorflow, particularly for vision-based object detection or classification tasks.</li>
<li>Experience deploying perception software on SWaP-constrained platforms.</li>
<li>Familiarity with validating perception systems during flight test events or operational environments.</li>
<li>Understanding of sensing challenges in denied or degraded conditions.</li>
<li>Exposure to perception applications across air, maritime, and ground platforms.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$229,233 - $343,849 a year</Salaryrange>
      <Skills>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, 10+ years of related experience, 7+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D, 2+ years of people leadership experience, Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models, Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches, Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications, Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs, Proficiency with version control, debugging, and test-driven development in cross-functional teams, Ability to obtain a SECRET clearance, Hands-on integration or algorithm development with airborne sensing systems, Experience with ML frameworks such as PyTorch or Tensorflow, particularly for vision-based object detection or classification tasks, Experience deploying perception software on SWaP-constrained platforms, Familiarity with validating perception systems during flight test events or operational environments, Understanding of sensing challenges in denied or degraded conditions, Exposure to perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/cebc0dd3-ffbf-4013-a2ad-ae32732cabd3</Applyto>
      <Location>Washington, DC / San Diego, California / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>841c78ea-841</externalid>
      <Title>Senior Engineer, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs.
The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop advanced perception algorithms: Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks: Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities: Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs: Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance: Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration: Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings: Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams: Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing: Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 - $240,000 a year</Salaryrange>
      <Skills>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience, Typically requires a minimum of 5 years of related experience with a Bachelor’s degree; or 4 years and a Master’s degree; or 2 years with a PhD; or equivalent work experience, Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models, Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches, Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications, Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs, Proficiency with version control, debugging, and test-driven development in cross-functional teams, Ability to obtain a SECRET clearance, Hands-on integration or algorithm development with airborne sensing systems, Experience with ML frameworks such as PyTorch or Tensorflow, particularly for vision-based object detection or classification tasks, Experience deploying perception software on SWaP-constrained platforms, Familiarity with validating perception systems during flight test events or operational environments, Understanding of sensing challenges in denied or degraded conditions, Exposure to perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company that develops intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/d6f1d906-5c1e-4640-87f3-3e31e1b45fa6</Applyto>
      <Location>San Diego, California / Washington, DC / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5f911dd8-860</externalid>
      <Title>Senior Staff Engineer, Software - Perception</Title>
      <Description><![CDATA[<p>This role is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs. The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop advanced perception algorithms: Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks: Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities: Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs: Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance: Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration: Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings: Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams: Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing: Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel Requirement: Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
<li>Typically requires a minimum of 10 years of related experience with a Bachelor’s degree; or 9 years and a Master’s degree; or 7 years with a PhD; or equivalent work experience</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams</li>
<li>Ability to obtain a SECRET clearance</li>
</ul>
<p><strong>Preferences:</strong></p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks</li>
<li>Experience deploying perception software on SWaP-constrained platforms</li>
<li>Familiarity with validating perception systems during flight test events or operational environments</li>
<li>Understanding of sensing challenges in denied or degraded conditions</li>
<li>Exposure to perception applications across air, maritime, and ground platforms</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,800 - $331,200 a year</Salaryrange>
      <Skills>algorithm development, sensor fusion, state estimation, Kalman Filters, multi-target tracking, deep learning-based detection models, probabilistic or rule-based approaches, SLAM, visual-inertial odometry, sensor-fused localization, version control, debugging, test-driven development, hands-on integration with airborne sensing systems, ML frameworks such as PyTorch or Tensorflow, perception software deployment on SWaP-constrained platforms, validating perception systems during flight test events or operational environments, sensing challenges in denied or degraded conditions, perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/5cf8609e-ce9a-47e9-8956-00dae756e406</Applyto>
      <Location>San Diego, California / Washington, DC / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bed4759c-578</externalid>
      <Title>Staff Engineer, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs. The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop advanced perception algorithms: Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks: Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities: Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs: Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance: Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration: Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings: Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams: Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing: Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel Requirement: Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Required Qualifications:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
<li>Typically requires a minimum of 7 years of related experience with a Bachelor’s degree; or 5 years and a Master’s degree; or 4 years with a PhD; or equivalent work experience</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams</li>
<li>Ability to obtain a SECRET clearance</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks</li>
<li>Experience deploying perception software on SWaP-constrained platforms</li>
<li>Familiarity with validating perception systems during flight test events or operational environments</li>
<li>Understanding of sensing challenges in denied or degraded conditions</li>
<li>Exposure to perception applications across air, maritime, and ground platforms</li>
</ul>
<p>$182,720 - $274,080 a year</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,720 - $274,080 a year</Salaryrange>
      <Skills>real-time object detection, sensor fusion, state estimation algorithms, EO/IR cameras, radars, IMUs, Kalman Filters, multi-target tracking, deep learning-based detection models, probabilistic or rule-based approaches, SLAM, visual-inertial odometry, sensor-fused localization, Interface Control Documents, hardware integration specs, version control, debugging, test-driven development, hands-on integration or algorithm development with airborne sensing systems, ML frameworks such as PyTorch or Tensorflow, vision-based object detection or classification tasks, SWaP-constrained platforms, validating perception systems during flight test events or operational environments, sensing challenges in denied or degraded conditions, perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/8739c509-b6ea-4640-bcc1-c8b5b1de31b2</Applyto>
      <Location>San Diego, California / Washington, DC / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8f6cb9bd-a3f</externalid>
      <Title>Computer Vision Engineer (C++)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Computer Vision Engineer (C++) to join our team in Port Melbourne, contributing to the development of innovative, real-time perception solutions for next-gen autonomous platforms.</p>
<p>As a member of our team, you&#39;ll design and implement novel computer vision algorithms from scratch, optimised for real-time performance. You&#39;ll develop and maintain C++-based CV pipelines as part of autonomous mission systems, collaborate with a multidisciplinary team of AI, robotics, and optical engineers to deliver reliable edge solutions, and support the integration of deep learning models into broader CV systems.</p>
<p>In this role, you&#39;ll have the opportunity to stay across current academic research and emerging techniques in computer vision and ML, and contribute to the development of custom algorithms, not just apply libraries.</p>
<p>Why Shield AI?</p>
<ul>
<li>Build mission-critical vision and autonomy systems that make a real-world impact.</li>
<li>Collaborate with some of the best minds in AI, autonomy, and defence technology.</li>
<li>Hybrid role based in our Port Melbourne office.</li>
<li>Salary + equity for permanent roles, with a strong career development pathway.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid|senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, computer vision, image processing, machine learning, real-time performance, object detection, target tracking, 3D reconstruction, SLAM, camera calibration, behaviour analysis, OpenCV, deep learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, with a mission of protecting service members and civilians with intelligent systems.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/2cfe6692-a266-4d27-8832-ef652fa57ee4</Applyto>
      <Location>Melbourne</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>850c077a-c6e</externalid>
      <Title>Software Engineer, Generalist</Title>
      <Description><![CDATA[<p>We are seeking a Software Engineer, Generalist to play a pivotal role in the design, development, and implementation of software systems for our autonomous surface vessels (ASVs).</p>
<p>You will work closely with cross-functional teams to ensure the seamless integration of software components into our ASV platform.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with hardware engineers, robotics engineers, and other software engineers across the tech stack to design, develop, and deploy software solutions for autonomous surface vessels</li>
<li>Participate in all phases of the software development lifecycle, including requirements gathering, design, implementation, testing, deployment, and maintenance</li>
<li>Develop robust, scalable, and maintainable software systems that meet the unique challenges of autonomous maritime operations</li>
<li>Implement algorithms for perception, navigation, path planning, and control to enable autonomous behavior in ASVs</li>
<li>Optimize software performance and reliability to meet stringent DoD requirements and operational standards</li>
<li>Conduct thorough testing and validation of software components to ensure functionality, accuracy, and safety</li>
<li>Stay current with emerging technologies and industry trends in autonomous systems, robotics, and maritime technology</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, or a related field</li>
<li>Proven experience in software development, with a focus on autonomous systems, robotics, or related fields</li>
<li>Proficiency in programming languages such as C++, Python, or Java, with a strong emphasis on object-oriented design and development</li>
<li>Experience with software development tools and frameworks commonly used in robotics and autonomous systems (e.g., ROS, OpenCV, TensorFlow, etc.)</li>
<li>Familiarity with sensor fusion techniques, SLAM algorithms, and other technologies relevant to autonomous navigation and perception</li>
<li>Strong problem-solving skills and the ability to work effectively in a fast-paced environment</li>
<li>Excellent communication skills and the ability to clearly articulate technical concepts</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Medical Insurance: Comprehensive health insurance plans covering a range of services</li>
<li>Dental and Vision Insurance: Coverage for routine dental check-ups, orthodontics, and vision care</li>
<li>Saronic pays 100% of the premium for employees and 80% for dependents</li>
<li>Time Off: Generous PTO and Holidays</li>
<li>Parental Leave: Paid maternity and paternity leave to support new parents</li>
<li>Competitive Salary: Industry-standard salaries with opportunities for performance-based bonuses</li>
<li>Retirement Plan: 401(k) plan</li>
<li>Stock Options: Equity options to give employees a stake in the company’s success</li>
<li>Life and Disability Insurance: Basic life insurance and short- and long-term disability coverage</li>
<li>Additional Perks: Free lunch benefit and unlimited free drinks and snacks in the office</li>
</ul>
<p>Physical Demands:</p>
<ul>
<li>Prolonged periods of sitting at a desk and working on a computer.</li>
<li>Occasional standing and walking within the office.</li>
<li>Manual dexterity to operate a computer keyboard, mouse, and other office equipment.</li>
<li>Visual acuity to read screens, documents, and reports.</li>
<li>Occasional reaching, bending, or stooping to access file drawers, cabinets, or office supplies.</li>
<li>Lifting and carrying items up to 20 pounds occasionally (e.g., office supplies, packages).</li>
</ul>
<p>Additional Information:</p>
<p>This role requires access to export-controlled information or items that require “U.S. Person” status. As defined by U.S. law, individuals who are any one of the following are considered to be a “U.S. Person”: (1) U.S. citizens, (2) legal permanent residents (a.k.a. green card holders), and (3) certain protected classes of asylees and refugees, as defined in 8 U.S.C. 1324b(a)(3).</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Python, Java, ROS, OpenCV, TensorFlow, sensor fusion techniques, SLAM algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies develops state-of-the-art solutions for autonomous maritime operations.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/77ef1f0f-5ba5-46b5-aca6-d38730790e97</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>690339e7-e86</externalid>
      <Title>Senior Software Engineer, Autonomy - Calibration, Mapping &amp; Localization</Title>
      <Description><![CDATA[<p>About Cyngn</p>
<p>Based in Mountain View, CA, Cyngn is a publicly-traded autonomous technology company. We deploy self-driving industrial vehicles like forklifts and tuggers to factories, warehouses, and other facilities throughout North America.</p>
<p>To build this emergent technology, we are looking for innovative, motivated, and experienced leaders to join us and move this field forward. If you like to build, tinker, and create with a team of trusted and passionate colleagues, then Cyngn is the place for you.</p>
<p>Key reasons to join Cyngn:</p>
<p>We are small and big. With under 100 employees, Cyngn operates with the energy of a startup. On the other hand, we’re publicly traded. This means our employees not only work in close-knit teams with mentorship from company leaders,they also get access to the liquidity of our publicly-traded equity.</p>
<p>We build today and deploy tomorrow. Our autonomous vehicles aren’t just test concepts,they’re deployed to real clients right now. That means your work will have a tangible, visible impact.</p>
<p>We aren’t robots. We just develop them. We’re a welcoming, diverse team of sharp thinkers and kind humans. Collaboration and trust drive our creative environment. At Cyngn, everyone’s perspective matters,and that’s what powers our innovation.</p>
<p>About this role:</p>
<p>As a Staff/Senior Software Engineer on our Calibration, Localization, &amp; Mapping (CLAM) team, you will be responsible for delivering mission-critical improvements and new features to our calibration, localization, and mapping subsystems. You will work on a small, highly focused team developing production-quality software that enables efficient and accurate creation of HD maps at Cyngn deployment-sites and robust localization for Cyngn’s autonomous vehicle fleets.</p>
<p>Responsibilities</p>
<ul>
<li><p>Design, implement, tune, and test mapping, localization, and sensor calibration algorithms for our autonomous vehicle platforms using C++ and Python.</p>
</li>
<li><p>Develop tooling and metrics for performance validation and continuous testing frameworks.</p>
</li>
<li><p>Balance project tasks, code reviews, and research to meet product-driven milestones in a fast-paced startup environment.</p>
</li>
</ul>
<p>Qualifications</p>
<ul>
<li><p>MS/PhD with a focus in robotics or a similar technical field of study</p>
</li>
<li><p>Solid foundation in probability theory, linear algebra, 3D geometry, and spatial coordinate transformations.</p>
</li>
<li><p>In-depth understanding of matrix factorization algorithms and Lie algebra/groups.</p>
</li>
<li><p>Solid theoretical knowledge of state-of-the-art techniques in 3D Lidar-based mapping and localization for autonomous vehicles (LOAM series, GICP, FastLIO, bundle-adjustment)</p>
</li>
<li><p>Familiarity with state estimation frameworks such as EKFs, as well as modern nonlinear optimization libraries (GTSAM, G2O, Ceres-Solver, GNC-Solver, etc.)</p>
</li>
<li><p>6+ years of industry experience as an autonomous vehicle or robotics software engineering professional, including hands-on implementation and tuning on production hardware.</p>
</li>
<li><p>6+ years industry experience writing C++ software in a production environment - architecture design, unit testing, code review, algorithm performance trade-offs, etc.</p>
</li>
<li><p>Proficiency in Python.</p>
</li>
<li><p>Excellent written &amp; verbal communication skills.</p>
</li>
</ul>
<p>Bonus Qualifications</p>
<ul>
<li><p>Proven record of top-tier publications or patents.</p>
</li>
<li><p>Experience with GPU programming, CUDA.</p>
</li>
<li><p>Experience in implementing automated map change detection and updating techniques.</p>
</li>
<li><p>Experience implementing modern multi-sensor calibration and sensor mis-alignment detection algorithms.</p>
</li>
<li><p>Experience with camera-based SLAM and 3D multi-view geometry.</p>
</li>
<li><p>Experience working with ROS2 to design, build, and operate robotic systems.</p>
</li>
<li><p>Exposure to modern software development version control and project management tools - Git, Jira, etc.</p>
</li>
</ul>
<p>Benefits &amp; Perks</p>
<ul>
<li><p>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</p>
</li>
<li><p>Life, Short-term and long-term disability insurance (Cyngn funds 100% of premiums)</p>
</li>
<li><p>Company 401(k)</p>
</li>
<li><p>Commuter Benefits</p>
</li>
<li><p>Flexible vacation policy</p>
</li>
<li><p>Sabbatical leave opportunity after 5 years with the company</p>
</li>
<li><p>Paid Parental Leave</p>
</li>
<li><p>Daily lunches for in-office employees and fully-stocked kitchen with snacks and beverages</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000-198,000 per year</Salaryrange>
      <Skills>C++, Python, Probability theory, Linear algebra, 3D geometry, Spatial coordinate transformations, Matrix factorization algorithms, Lie algebra/groups, State estimation frameworks, Nonlinear optimization libraries, GPU programming, CUDA, Automated map change detection and updating techniques, Modern multi-sensor calibration and sensor mis-alignment detection algorithms, Camera-based SLAM and 3D multi-view geometry, ROS2, Git, Jira</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cyngn</Employername>
      <Employerlogo>https://logos.yubhub.co/cyngn.com.png</Employerlogo>
      <Employerdescription>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</Employerdescription>
      <Employerwebsite>https://www.cyngn.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/cyngn/716dbe41-cac5-4d23-9ec3-cc05b32322b4</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8b82d370-9f7</externalid>
      <Title>Open Application</Title>
      <Description><![CDATA[<p>At Varjo, we are pioneers in the immersive computing revolution. Our mixed reality solutions redefine realism, creating virtual experiences that match the authenticity of the real world. We are not just a company; we are a team of talents from around the globe, where diversity fuels innovation and drives results.</p>
<p>Join a multicultural team where English is our daily working language, providing an inclusive and collaborative atmosphere. At Varjo, we believe in the power of different experiences, backgrounds, and ideas coming together to shape the future of immersive computing.</p>
<p>We are seeking the best and brightest to join us on this exhilarating journey. As we continue to set new standards in technology, we invite you to be a part of our vision for the future. When we are done, computers will look nothing like what they do right now.</p>
<p>Areas and Technologies We Work With:</p>
<ul>
<li>C++/C Programming</li>
<li>Embedded C/C++</li>
<li>SLAM and Computer Vision (image processing, object recognition and detection)</li>
<li>Algorithm Design and Optimization</li>
<li>GPU/CPU Programming</li>
<li>Unity and Unreal Development</li>
<li>Sensor Fusion</li>
<li>ROS (Robot Operating System)</li>
<li>ADAS (Advanced Driver Assistance Systems)</li>
<li>3D Reconstruction</li>
<li>CUDA</li>
<li>Optics and Cameras</li>
<li>Audio and Video Streaming</li>
</ul>
<p>Apply Now</p>
<p>Submit an open application, including your CV, a link to your LinkedIn profile, and details of projects that make you particularly proud. If you have connections at Varjo, feel free to drop some names. Join Varjo and play a crucial role in shaping the future of immersive computing.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry|mid|senior|staff|executive</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++/C Programming, Embedded C/C++, SLAM and Computer Vision, Algorithm Design and Optimization, GPU/CPU Programming, Unity and Unreal Development, Sensor Fusion, ROS (Robot Operating System), ADAS (Advanced Driver Assistance Systems), 3D Reconstruction, CUDA, Optics and Cameras, Audio and Video Streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Varjo</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Varjo is a Finnish mixed and virtual reality technology company founded in 2016. It has rapidly expanded its global footprint and currently operates in several locations.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/B64F9C1C64</Applyto>
      <Location>Helsinki</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>