<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>eccf1031-6f3</externalid>
      <Title>Senior Computer Vision Engineer, Space</Title>
      <Description><![CDATA[<p>We are seeking a Senior Computer Vision Engineer to join our rapidly growing team in Washington DC. The ideal candidate will have a strong background in computer vision and machine learning, with experience in developing and implementing computer vision algorithms for various spacecraft efforts in all orbital regimes.</p>
<p>The Senior Computer Vision Engineer will be responsible for proposing and prototyping innovative solutions to real-world problems, developing and maintaining core libraries and runtime applications, integrating classical and geometric computer vision methods with ML methods, and working with space vehicle CV software and hardware subsystems.</p>
<p>The successful candidate will have a Master&#39;s or Ph.D. in Machine Learning, Robotics, or Computer Science, with a strong background in computer vision and machine learning. They will also have experience in one or more of the following: object detection, object tracking, instance segmentation, semantic segmentation, semantic change detection, natural feature tracking (NFT), visual odometry, SLAM, multi-view geometry, structure from motion, 3D geometry, discriminative correlation filters, stereo, neural 3D reconstruction, multi-band sensor processing, RGB-D and LIDAR sensor fusion.</p>
<p>The Senior Computer Vision Engineer will work closely with related teams, including Sensors, GNC, Avionics, Systems, Flight Software, Mission Operations, and Ground Software, to develop and implement these computer vision algorithms.</p>
<p>The ideal candidate will have excellent communication and organizational skills, including documentation and training material, and will be able to work effectively in a fast-paced environment with tight deadlines.</p>
<p>The salary range for this role is $191,000-$253,000 USD, and highly competitive equity grants are included in the majority of full-time offers.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>Machine Learning, Robotics, Computer Science, Computer Vision, Object Detection, Object Tracking, Instance Segmentation, Semantic Segmentation, Semantic Change Detection, Natural Feature Tracking (NFT), Visual Odometry, SLAM, Multi-view Geometry, Structure from Motion, 3D Geometry, Discriminative Correlation Filters, Stereo, Neural 3D Reconstruction, Multi-band Sensor Processing, RGB-D and LIDAR Sensor Fusion, MATLAB, Simulink, Python, Go, C++, Linux systems, OpenCV</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/andurilindustries.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.andurilindustries.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>191000</Compensationmin>
      <Compensationmax>253000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5016343007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1044b51e-cc6</externalid>
      <Title>Senior Manager, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs. The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead teams across autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution.</li>
<li>Develop advanced perception algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks by integrating data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques.</li>
<li>Develop state estimation capabilities by designing and refining algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs.</li>
<li>Analyze and utilize sensor ICDs to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance by tuning and evaluating perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration by working closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings by leveraging synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams to ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing by contributing novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel Requirement – Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience.</li>
<li>Typically requires a minimum of 10 years of related experience with a Bachelor’s degree; or 9 years and a Master’s degree; or 7 years with a PhD; or equivalent work experience.</li>
<li>7+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D.</li>
<li>2+ years of people leadership experience.</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models.</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches.</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications.</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs.</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams.</li>
<li>Ability to obtain a SECRET clearance.</li>
</ul>
<p><strong>Preferences:</strong></p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems.</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks.</li>
<li>Experience deploying perception software on SWaP-constrained platforms.</li>
<li>Familiarity with validating perception systems during flight test events or operational environments.</li>
<li>Understanding of sensing challenges in denied or degraded conditions.</li>
<li>Exposure to perception applications across air, maritime, and ground platforms.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$229,233 - $343,849 a year</Salaryrange>
      <Skills>Kalman filters, multi-target tracking, deep learning-based detection models, sensor fusion (radar, EO/IR, probabilistic and rule-based approaches), SLAM, visual-inertial odometry, sensor-fused localization, Interface Control Documents (ICDs), version control, debugging, test-driven development, PyTorch, TensorFlow, vision-based object detection and classification, SWaP-constrained deployment, flight test validation, people leadership, Unmanned Systems programs, SECRET clearance eligibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>229233</Compensationmin>
      <Compensationmax>343849</Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/cebc0dd3-ffbf-4013-a2ad-ae32732cabd3</Applyto>
      <Location>Washington, DC / San Diego, CA / Boston, MA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3f0b0cce-7be</externalid>
      <Title>Manager, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs.
The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>We are seeking a skilled and motivated manager to lead technical teams and support direct projects integrating perception solutions for defense platforms.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land.
Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p>Responsibilities:</p>
<ul>
<li>Multidisciplinary Team Leadership – Lead teams across autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution.</li>
<li>Develop advanced perception algorithms – Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks – Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities – Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs – Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance – Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration – Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings – Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams – Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing – Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel Requirement – Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
<li>Typically requires a minimum of 7 years of related experience with a Bachelor’s degree; or 5 years and a Master’s degree; or 4 years with a PhD; or equivalent work experience</li>
<li>5+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D</li>
<li>2+ years of people leadership experience</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models.</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches.</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications.</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs.</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams.</li>
<li>Ability to obtain a SECRET clearance.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems.</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks.</li>
<li>Experience deploying perception software on SWaP-constrained platforms.</li>
<li>Familiarity with validating perception systems during flight test events or operational environments.</li>
<li>Understanding of sensing challenges in denied or degraded conditions.</li>
<li>Exposure to perception applications across air, maritime, and ground platforms.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,441 - $330,661 a year</Salaryrange>
      <Skills>Kalman filters, multi-target tracking, deep learning-based detection models, sensor fusion (radar, EO/IR, probabilistic and rule-based approaches), SLAM, visual-inertial odometry, sensor-fused localization, Interface Control Documents (ICDs), version control, debugging, test-driven development, PyTorch, TensorFlow, SWaP-constrained deployment, flight test validation, people leadership, Unmanned Systems programs, SECRET clearance eligibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>220441</Compensationmin>
      <Compensationmax>330661</Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/1120529c-2f7d-4b27-a29b-50976c49c433</Applyto>
      <Location>Washington, DC</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>841c78ea-841</externalid>
      <Title>Senior Engineer, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs.
The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<ul>
<li>Develop advanced perception algorithms – Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks – Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities – Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs – Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance – Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration – Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings – Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams – Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing – Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 - $240,000 a year</Salaryrange>
      <Skills>Kalman filters, multi-target tracking, deep learning-based detection models, sensor fusion (radar, EO/IR, probabilistic and rule-based approaches), SLAM, visual-inertial odometry, sensor-fused localization, Interface Control Documents (ICDs), version control, debugging, test-driven development, PyTorch, TensorFlow, SWaP-constrained deployment, flight test validation, SECRET clearance eligibility</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company that develops intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>160000</Compensationmin>
      <Compensationmax>240000</Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/d6f1d906-5c1e-4640-87f3-3e31e1b45fa6</Applyto>
      <Location>San Diego, CA / Washington, DC / Boston, MA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bed4759c-578</externalid>
      <Title>Staff Engineer, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs. The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop advanced perception algorithms – Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks – Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities – Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs – Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance – Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration – Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings – Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams – Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing – Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel Requirement – Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Required Qualifications:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
<li>Typically requires a minimum of 7 years of related experience with a Bachelor’s degree; or 5 years and a Master’s degree; or 4 years with a PhD; or equivalent work experience</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams</li>
<li>Ability to obtain a SECRET clearance</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks</li>
<li>Experience deploying perception software on SWaP-constrained platforms</li>
<li>Familiarity with validating perception systems during flight test events or operational environments</li>
<li>Understanding of sensing challenges in denied or degraded conditions</li>
<li>Exposure to perception applications across air, maritime, and ground platforms</li>
</ul>
<p>$182,720 - $274,080 a year</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,720 - $274,080 a year</Salaryrange>
      <Skills>real-time object detection, sensor fusion, state estimation algorithms, EO/IR cameras, radars, IMUs, Kalman Filters, multi-target tracking, deep learning-based detection models, probabilistic or rule-based approaches, SLAM, visual-inertial odometry, sensor-fused localization, Interface Control Documents, hardware integration specs, version control, debugging, test-driven development, hands-on integration or algorithm development with airborne sensing systems, ML frameworks such as PyTorch or Tensorflow, vision-based object detection or classification tasks, SWaP-constrained platforms, validating perception systems during flight test events or operational environments, sensing challenges in denied or degraded conditions, perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>182720</Compensationmin>
      <Compensationmax>274080</Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/8739c509-b6ea-4640-bcc1-c8b5b1de31b2</Applyto>
      <Location>San Diego, CA / Washington, DC / Boston, MA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8f6cb9bd-a3f</externalid>
      <Title>Computer Vision Engineer (C++)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Computer Vision Engineer (C++) to join our team in Port Melbourne, contributing to the development of innovative, real-time perception solutions for next-gen autonomous platforms.</p>
<p>As a member of our team, you&#39;ll design and implement novel computer vision algorithms from scratch, optimised for real-time performance. You&#39;ll develop and maintain C++-based CV pipelines as part of autonomous mission systems, collaborate with a multidisciplinary team of AI, robotics, and optical engineers to deliver reliable edge solutions, and support the integration of deep learning models into broader CV systems.</p>
<p>In this role, you&#39;ll have the opportunity to stay across current academic research and emerging techniques in computer vision and ML, and contribute to the development of custom algorithms, not just apply libraries.</p>
<p>Why Shield AI?</p>
<ul>
<li>Build mission-critical vision and autonomy systems that make a real-world impact.</li>
<li>Collaborate with some of the best minds in AI, autonomy, and defence technology.</li>
<li>Hybrid role based in our Port Melbourne office.</li>
<li>Salary + equity for permanent roles, with a strong career development pathway.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid|senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, computer vision, image processing, machine learning, real-time performance, object detection, target tracking, 3D reconstruction, SLAM, camera calibration, behaviour analysis, OpenCV, deep learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, with a mission of protecting service members and civilians with intelligent systems.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/2cfe6692-a266-4d27-8832-ef652fa57ee4</Applyto>
      <Location>Melbourne</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>49653163-8a7</externalid>
      <Title>Senior Open-Source Machine Learning Engineer, Computer Vision</Title>
      <Description><![CDATA[<p>At Hugging Face, we&#39;re on a journey to democratize good AI. We are building the fastest growing platform for AI builders.</p>
<p>As an Open-Source ML Engineer in Computer Vision, you will work mainly with existing open-source libraries, such as Transformers and Datasets, to boost support for vision and multi-modal models and datasets. You will bring your computer vision expertise to deliver the best computer-vision tool stack in the machine learning ecosystem, working with us to build the best, simplest, and most intuitive computer-vision library in the industry.</p>
<p>Responsibilities:</p>
<ul>
<li>Work with existing open-source libraries to boost support for vision or multi-modal models and datasets.</li>
<li>Bring computer vision expertise to provide the best computer-vision tool stack in the machine learning ecosystem.</li>
<li>Collaborate with researchers, ML practitioners, and data scientists on a daily basis.</li>
<li>Foster one of the most active machine learning communities, helping users contribute to and use the tools that you build.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Deep expertise in computer vision: object detection, segmentation, generative models, or multimodal systems.</li>
<li>Strong open-source presence: You’ve contributed significantly to CV libraries (e.g., OpenCV, Detectron2, MMDetection, or Hugging Face’s own transformers/diffusers) as a core contributor or maintainer.</li>
<li>Scalability mindset: Experience optimizing models for production, deploying at scale, or improving inference efficiency.</li>
<li>Collaboration &amp; mentorship: You enjoy working with cross-functional teams, reviewing PRs, and guiding junior contributors.</li>
<li>Alignment with our mission: You believe in democratizing AI and want to empower millions of builders with state-of-the-art tools.</li>
</ul>
<p>If you love open-source, are passionate about the new development of Transformers models in computer vision, have experience building, optimizing, and training such models in PyTorch and/or TensorFlow, serving them in production, and want to contribute to one of the fastest-growing ML libraries, then we can&#39;t wait to see your application!</p>
<p>If you&#39;re interested in joining us, but don&#39;t tick every box above, we still encourage you to apply! We&#39;re building a diverse team whose skills, experiences, and backgrounds complement one another. We&#39;re happy to consider where you might be able to make the biggest impact.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>computer vision, object detection, segmentation, generative models, multimodal systems, open-source libraries, Transformers, Datasets, PyTorch, TensorFlow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hugging Face</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Hugging Face is a platform for AI builders with over 11 million users who collectively shared over 2M models, 700k datasets &amp; 600k apps.</Employerdescription>
      <Employerwebsite>https://huggingface.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/ED25C4FEA1</Applyto>
      <Location>New York, New York</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>af311231-ebb</externalid>
      <Title>Senior Open-Source Machine Learning Engineer, Computer Vision</Title>
      <Description><![CDATA[<p>At Hugging Face, we&#39;re on a journey to democratize good AI.</p>
<p>We are building the fastest growing platform for AI builders with over 11 million users who collectively shared over 2M models, 700k datasets &amp; 600k apps.</p>
<p>As an Open-Source ML Engineer in Computer Vision, you will work mainly with existing open-source libraries, such as Transformers and Datasets, to boost support for vision and multi-modal models and datasets.</p>
<p>You will bring your computer vision expertise to deliver the best computer-vision tool stack in the machine learning ecosystem, working with us to build the best, simplest, and most intuitive computer-vision library in the industry.</p>
<p>You&#39;ll get to foster one of the most active machine learning communities, helping users contribute to and use the tools that you build.</p>
<p>You&#39;ll interact with researchers, ML practitioners, and data scientists daily through GitHub, our forums, and Slack.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Deep expertise in computer vision: object detection, segmentation, generative models, or multimodal systems.</li>
<li>Strong open-source presence: You’ve contributed significantly to CV libraries (e.g., OpenCV, Detectron2, MMDetection, or Hugging Face’s own transformers/diffusers) as a core contributor or maintainer.</li>
<li>Scalability mindset: Experience optimizing models for production, deploying at scale, or improving inference efficiency.</li>
<li>Collaboration &amp; mentorship: You enjoy working with cross-functional teams, reviewing PRs, and guiding junior contributors.</li>
<li>Alignment with our mission: You believe in democratizing AI and want to empower millions of builders with state-of-the-art tools.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Flexible working hours and remote options.</li>
<li>Health, dental, and vision benefits for employees and their dependents.</li>
<li>Parental leave and flexible paid time off.</li>
<li>Reimbursement for relevant conferences, training, and education.</li>
<li>Company equity as part of your compensation package.</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Work with some of the smartest people in our industry.</li>
<li>A bias for impact and a continuous growth mindset.</li>
<li>Support for your well-being and career development.</li>
<li>Opportunities to visit our offices in NYC and Paris.</li>
<li>A fully outfitted workstation to set you up for success.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>computer vision, object detection, segmentation, generative models, multimodal systems, open-source libraries, Transformers, Datasets, PyTorch, TensorFlow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Hugging Face</Employername>
      <Employerlogo></Employerlogo>
      <Employerdescription>Hugging Face is a platform for AI builders with over 11 million users who collectively shared over 2M models, 700k datasets &amp; 600k apps.</Employerdescription>
      <Employerwebsite>https://huggingface.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/0F3FFE6E77</Applyto>
      <Location>Paris</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>a1d19949-bfc</externalid>
      <Title>Senior Data Scientist (Computer Vision Engineer)</Title>
      <Description><![CDATA[<p>Joining Razer will place you on a global mission to revolutionize the way the world games. Razer is a place to do great work, offering you the opportunity to make a global impact while working with a team spread across five continents. Razer is also a great place to work, providing the unique, gamer-centric #LifeAtRazer experience that will accelerate your growth, both personally and professionally.</p>
<p><strong>What you&#39;ll do</strong></p>
<p>Develop and implement computer vision algorithms for tasks such as object detection, recognition, tracking, segmentation, and image classification. Design and architect computer vision systems to meet specific requirements and objectives.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>computer vision algorithms, object detection, image classification, Python, C++, OpenCV</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global gaming company that creates cutting-edge products and experiences that define the ultimate gameplay. They are guided by their mission &quot;For Gamers. By Gamers.&quot; and are relentlessly pushing boundaries and leading the charge in AI for gaming, shaping the future of the industry.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Singapore/Senior-Software-Engineer--Computer-Vision-Engineer-_JR2025005486</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-01-01</Postedate>
    </job>
  </jobs>
</source>