<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>152babe3-e76</externalid>
      <Title>Deployment Engineer</Title>
      <Description><![CDATA[<p>About Cyngn</p>
<p>Based in Mountain View, CA, Cyngn is a publicly-traded autonomous technology company. We deploy self-driving industrial vehicles - specifically autonomous tuggers - to factories, warehouses, and other facilities throughout North America.</p>
<p>To build this emergent technology, we are looking for innovative, motivated, and experienced leaders to join us and move this field forward. If you like to build, tinker, and create with a team of trusted and passionate colleagues, then Cyngn is the place for you.</p>
<p>Key reasons to join Cyngn:</p>
<p>We are small and big. With under 100 employees, Cyngn operates with the energy of a startup. On the other hand, we’re publicly traded. This means our employees not only work in close-knit teams with mentorship from company leaders; they also get access to the liquidity of our publicly-traded equity.</p>
<p>We build today and deploy tomorrow. Our autonomous vehicles aren’t just test concepts; they’re deployed to real clients right now. That means your work will have a tangible, visible impact.</p>
<p>We aren’t robots. We just develop them. We’re a welcoming, diverse team of sharp thinkers and kind humans. Collaboration and trust drive our creative environment. At Cyngn, everyone’s perspective matters, and that’s what powers our innovation.</p>
<p>About this role:</p>
<p>As a Deployment Engineer, you are the technical architect of a live customer site. You move beyond just &quot;fixing&quot; hardware to &quot;optimizing&quot; autonomy. You will lead the deployment lifecycle from the moment the hardware arrives until the customer signs off on a fully autonomous workflow. You will also be in charge of maintaining the fleet through tier 1 support and map maintenance. You are the customer&#39;s primary on-site technical point of contact, resolving issues and implementing their requirements.</p>
<p>Responsibilities</p>
<p>Deployment &amp; Field Operations</p>
<p>Lead end-to-end deployment of autonomous robotic systems at customer facilities.</p>
<p>Conduct site surveys and assess automation readiness, infrastructure constraints, and ODD requirements.</p>
<p>Generate 3D and semantic maps and validate localization and navigation performance.</p>
<p>Install, configure, calibrate, and commission robotic systems in live production environments.</p>
<p>Train customer operators, maintenance teams, and site stakeholders; manage handoff to support.</p>
<p>Work directly with the Customer Solutions Engineering team to take handoff of their designs, then fully implement and improve those designs on site.</p>
<p>User Acceptance Testing (UAT): Define and lead the final testing phase during the implementation, proving to the customer that the system meets all safety and throughput Key Performance Indicators (KPIs).</p>
<p>Conduct vehicle orientation with the customer as part of the handoff.</p>
<p>Customer &amp; Stakeholder Engagement</p>
<p>Act as the primary technical point of contact during deployments and early operations.</p>
<p>Work directly with customer IT teams to configure networks, resolve firewall/port issues, and ensure reliable connectivity.</p>
<p>Run regular performance reviews with customers, using data to drive operational improvements and adoption.</p>
<p>Head the deployment of the routes on site, and coordinate with the customer to make sure these routes remain optimized over time.</p>
<p>Troubleshooting &amp; Support</p>
<p>Analyze robot logs, sensor data, and system metrics using tools such as Foxglove, RViz, RQT_Bag, and PlotJuggler.</p>
<p>Diagnose hardware, software, perception, and infrastructure issues in the field.</p>
<p>Own incident resolution across Tier 1–3 support, ensuring fast MTTR and clear root-cause documentation.</p>
<p>Monitor fleet health and KPIs using Grafana and internal dashboards.</p>
<p>Cross-Functional Collaboration</p>
<p>Provide structured feedback from the field to Product, Engineering, QA, and Perception teams.</p>
<p>Support validation activities including FoV studies, data collection, and perception audits.</p>
<p>Collaborate with OEM partners to integrate robotics hardware and software into new vehicle platforms.</p>
<p>Build and maintain scalable deployment playbooks, checklists, and Jira-based workflows.</p>
<p>Qualifications</p>
<p>Education: Bachelor’s or Master’s degree in Industrial Engineering, Robotics, Mechanical Engineering, Electrical Engineering, or a related field.</p>
<p>Experience: 3-5+ years in robotics, autonomous systems, or industrial automation. Experience as a Field Engineer, Customer Success Engineer, Integration Engineer, or similar is highly preferred.</p>
<p>Experience with autonomous vehicles, mobile robots, or drones is a requirement.</p>
<p>Advanced Software Skills: Linux and ROS proficiency, with deep comfort using ROS tools (rviz, rosbag, tf) to visualize and debug vehicle behavior.</p>
<p>Professionalism: Exceptional communication skills. You must be able to explain &quot;why the robot stopped&quot; to a floor manager in a way that builds trust rather than confusion.</p>
<p>Physical Requirements: Ability to work in industrial environments (warehouses, yards) and travel up to 50-70% of the time.</p>
<p>Deployment Skills: Robotics deployment, calibration, and troubleshooting</p>
<p>Network configuration and customer IT environments</p>
<p>Experience working directly at customer sites in manufacturing, logistics, or industrial environments</p>
<p>Ability to analyze logs, metrics, and sensor data to diagnose complex system issues</p>
<p>Comfortable with travel and working independently in the field</p>
<p>Bonus Qualifications</p>
<p>Familiarity with 3D LiDAR systems (e.g., Hesai, Ouster)</p>
<p>Experience with mapping, localization, and perception validation</p>
<p>Exposure to Grafana, Jira, TestRail, Docker, Git</p>
<p>Strong documentation and process-building mindset</p>
<p>Preferably based in the Eastern Time Zone</p>
<p>Physical Requirements</p>
<p>[Hybrid/Field Based], 50%-75% travel</p>
<p>Prolonged periods of sitting, standing, and walking, including time at a desk and in industrial or customer environments.</p>
<p>Frequent bending, stooping, kneeling, climbing, and reaching to install, maintain, and repair robots.</p>
<p>Ability to lift and maneuver objects up to 80 lbs.</p>
<p>Ability to operate hand tools, power tools, and testing equipment.</p>
<p>Specific vision abilities required include close vision, color vision, peripheral vision, depth perception, and ability to adjust focus.</p>
<p>Ability to communicate clearly, hear, and respond effectively in noisy environments.</p>
<p>Benefits &amp; Perks</p>
<p>Health benefits (Medical, Dental, Vision, HSA and FSA (Health &amp; Dependent Daycare), Employee Assistance Program, 1:1 Health Concierge)</p>
<p>Life, short-term, and long-term disability insurance (Cyngn funds 100% of premiums)</p>
<p>Company 401(k)</p>
<p>Commuter Benefits</p>
<p>Flexible vacation policy</p>
<p>Remote or hybrid work opportunities</p>
<p>Sabbatical leave opportunity after 5 years with the company</p>
<p>Paid Parental Leave</p>
<p>Daily lunches for in-office employees and fully-stocked kitchen with snacks and beverages</p>
<p>Monthly meal and tech allowances for remote employees</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD 100,000-125,000 per year</Salaryrange>
      <Skills>Linux, ROS, Foxglove, RViz, RQT_Bag, PlotJuggler, Grafana, Jira, Robotics deployment, Calibration, Troubleshooting, Network configuration, Customer IT environments, Autonomous vehicles, Mobile robots, Drones, 3D LiDAR systems, Mapping, Localization, Perception validation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cyngn</Employername>
      <Employerlogo>https://logos.yubhub.co/cyngn.com.png</Employerlogo>
      <Employerdescription>Cyngn is a publicly-traded autonomous technology company that deploys self-driving industrial vehicles to factories, warehouses, and other facilities throughout North America.</Employerdescription>
      <Employerwebsite>https://www.cyngn.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/cyngn/0d5a0008-85d6-4fdc-a6de-e29501528c55</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>39758a32-f8e</externalid>
      <Title>Senior Computer Vision Engineer</Title>
      <Description><![CDATA[<p>We are seeking an experienced Computer Vision Engineer with a strong background in robotics to join our rapidly growing team in Costa Mesa, CA. In this role, you will be at the forefront of developing advanced perception systems for complex autonomous aerial platforms.</p>
<p>Your expertise in computer vision algorithms, combined with your understanding of robotics principles, will be crucial in solving a wide variety of challenges involving visual perception, SLAM, motion planning, controls, and state estimation. This role requires not only technical expertise in computer vision and robotics but also the ability to make pragmatic engineering tradeoffs, considering the unique constraints of aerial platforms.</p>
<p>You will work at the intersection of 3D perception and computer vision, developing robust algorithms that power real-time decision-making for autonomous aerial systems. You will design experiments, data collection efforts, and curate training/evaluation sets to develop insights for both internal purposes and customers.</p>
<p>As a member of our team, you will collaborate closely with robotics, software, and hardware teams to integrate perception algorithms into autonomous aerial systems. You will work with vendors and government stakeholders to advance the state-of-the-art in perception and world modeling for autonomous aerial systems.</p>
<p>Required qualifications include a BS in Robotics, Computer Science, Mechatronics, Electrical Engineering, Mechanical Engineering, or related field, with strong knowledge of 3D computer vision concepts, including multi-view geometry, camera models, photogrammetry, and 3D reconstruction techniques. You should have fluency in standard domain libraries (numpy, opencv, pytorch, etc), proven understanding of data structures, algorithms, concurrency, and code optimization, and 6+ years of professional industry experience working with C++ or Rust programming languages.</p>
<p>Preferred qualifications include an MS or PhD in Robotics, Computer Science, Mechatronics, Electrical Engineering, Mechanical Engineering, or related field, experience with perception systems for aerial robotics or other highly dynamic platforms, and knowledge of path planning algorithms and their integration with perception systems in dynamic environments.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000-$292,000 USD</Salaryrange>
      <Skills>3D computer vision, multi-view geometry, camera models, photogrammetry, 3D reconstruction techniques, numpy, opencv, pytorch, data structures, algorithms, concurrency, code optimization, C++, Rust, perception systems for aerial robotics, path planning algorithms, integration with perception systems in dynamic environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology to transform U.S. and allied military capabilities.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5114446007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>fc9a9b7c-a45</externalid>
      <Title>Analog/IO Design, Sr Staff Engineer</Title>
      <Description><![CDATA[<p>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products.</p>
<p>We Are: At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>
<p>You Are: You are a passionate and inventive analog circuit design engineer with a deep-rooted curiosity for emerging technologies and industry-leading semiconductor processes. You thrive in dynamic, collaborative environments and are recognized for your ability to balance technical depth with practical implementation.</p>
<p>Responsibilities:</p>
<ul>
<li>Designing and developing best-in-class ESD and Latch-Up robust solutions for advanced interface IPs using cutting-edge FinFET, FDSOI, and BCD processes.</li>
<li>Owning the full lifecycle of ESD structures, from schematic design, simulation, and layout to silicon qualification and production release.</li>
<li>Leading and executing I/O development, including I/O ring design, review, and optimization for performance and robustness.</li>
<li>Developing and qualifying Interface Testchips, ensuring comprehensive ESD and Latch-Up validation to meet global customer requirements.</li>
<li>Running ESD simulations by building detailed ESD networks and performing advanced analyses to ensure design integrity.</li>
<li>Applying foundry-provided PERC (Programmable Electrical Rule Check) rules and using PERC check tools to validate compliance and enhance design quality.</li>
<li>Collaborating closely with foundry partners, design, and layout teams to ensure timely and effective integration of ESD and LU solutions.</li>
</ul>
<p>The Impact You Will Have:</p>
<ul>
<li>Elevating the reliability and performance of Synopsys&#39; interface IPs, directly influencing the success of global semiconductor customers.</li>
<li>Driving innovation in analog circuit design for next-generation silicon technologies, helping Synopsys maintain its leadership in the industry.</li>
<li>Reducing field failures and increasing product longevity by delivering robust ESD and Latch-Up protection solutions.</li>
<li>Accelerating time-to-market for customer products through efficient and high-quality design practices.</li>
<li>Fostering a culture of technical excellence and continuous improvement within the analog design team.</li>
<li>Building strong partnerships with foundries and cross-functional teams, enhancing collaboration and knowledge sharing across projects.</li>
</ul>
<p>Benefits: We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Analog circuit design, ESD and Latch-Up robustness, FinFet, FDSOI, and BCD process technologies, PERC rules and PERC check tools, Foundry-provided PERC rules</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a technology company that drives innovations in the semiconductor industry, providing solutions for chip design, verification, and IP integration.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/noida/analog-io-design-sr-staff-engineer/44408/93647959696</Applyto>
      <Location>Noida</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>26310f57-d67</externalid>
      <Title>Layout Design, Staff Engineer</Title>
      <Description><![CDATA[<p>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products. These engineers play a crucial role in advancing technology and enabling innovations in various industries.</p>
<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.</p>
<p><strong>You Are:</strong> You are an experienced layout design engineer with a passion for technological advancement and an eye for detail. You thrive in collaborative, fast-paced environments and are motivated by the challenge of developing next-generation DDR and HBM PHY IPs. With over five years of hands-on experience in layout development, you are adept at navigating complex process technologies such as CMOS, FinFET, and GAA at 7nm and below. You are a natural leader, capable of mentoring junior engineers, driving project execution, and ensuring the highest standards of product quality. Your expertise spans floorplanning, layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, and IO frame requirements. You understand the importance of customer requirements at the PHY level and are committed to delivering differentiated solutions that help customers meet their unique performance, power, and size targets. Your communication skills, both written and verbal, are exceptional, enabling you to foster accountability and ownership within cross-functional teams. Above all, you value inclusion, diversity, and continuous learning, and are eager to contribute to a workplace that celebrates innovative thinking and collaboration.</p>
<p><strong>What You’ll Be Doing:</strong></p>
<ul>
<li>Leading the development of cutting-edge DDR and HBM layout IPs, setting technical direction and standards.</li>
<li>Providing hands-on expertise in layout creation, problem-solving, and technical troubleshooting.</li>
<li>Mentoring and guiding junior engineers, fostering growth and technical excellence within the team.</li>
<li>Estimating project efforts, planning schedules, and executing projects in cross-functional settings.</li>
<li>Collaborating with teams to support critical layout requirements, floorplanning, and quality assurance processes.</li>
<li>Conducting layout reviews, ensuring compliance with release processes, and meeting stringent customer requirements.</li>
</ul>
<p><strong>The Impact You Will Have:</strong></p>
<ul>
<li>Accelerate the integration of advanced silicon IP in SoCs, driving innovation in smart devices and systems.</li>
<li>Enhance product differentiation and performance, enabling customers to meet demanding market requirements.</li>
<li>Reduce time-to-market and risk for customers through robust layout design and technical leadership.</li>
<li>Support Synopsys’ reputation as a leader in DDR &amp; HBM PHY IP development, contributing to industry benchmarks.</li>
<li>Foster an inclusive and collaborative engineering culture that values accountability and technical excellence.</li>
<li>Mentor and develop the next generation of layout engineers, ensuring sustained innovation and talent growth.</li>
</ul>
<p><strong>What You’ll Need:</strong></p>
<ul>
<li>BTech/MTech in Electronics, Electrical Engineering, or related field.</li>
<li>5+ years of relevant experience in layout design, preferably in DDR &amp; HBM PHY IP development.</li>
<li>Deep understanding of submicron effects, floorplan techniques in CMOS, FinFET, GAA technologies (7nm and below).</li>
<li>Expertise in layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, bond-pad layout, IO frame and pitch requirements.</li>
<li>Strong ability to lead projects, manage schedules, and ensure product quality within tight timelines.</li>
<li>Excellent written, verbal communication, and interpersonal skills.</li>
</ul>
<p><strong>Who You Are:</strong></p>
<ul>
<li>Innovative thinker with a proactive approach to problem-solving.</li>
<li>Effective communicator and collaborator across diverse teams.</li>
<li>Detail-oriented, accountable, and committed to high standards of quality.</li>
<li>Mentor and leader, fostering growth and technical excellence.</li>
<li>Adaptable, eager to learn, and open to new ideas and technologies.</li>
<li>Champion for inclusion, diversity, and teamwork.</li>
</ul>
<p><strong>The Team You’ll Be A Part Of:</strong> You will join a dynamic Silicon IP team focused on developing high-performance DDR and HBM PHY IPs. Our team values technical innovation, collaborative problem-solving, and continuous improvement. We work closely with cross-functional groups including design, verification, and customer support to deliver industry-leading solutions that shape the future of smart technology.</p>
<p><strong>Rewards and Benefits:</strong> We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>layout design, DDR and HBM PHY IPs, CMOS, FinFET, and GAA technologies, floorplanning, layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, IO frame and pitch requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and services for the semiconductor and electronics industries.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/layout-design-staff-engineer/44408/93917039712</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>1a49fd5b-a39</externalid>
      <Title>Layout Design, Staff Engineer</Title>
      <Description><![CDATA[<p>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products. These engineers play a crucial role in advancing technology and enabling innovations in various industries.</p>
<p>You are an experienced layout design engineer with a passion for technological advancement and an eye for detail. You thrive in collaborative, fast-paced environments and are motivated by the challenge of developing next-generation DDR and HBM PHY IPs. With over five years of hands-on experience in layout development, you are adept at navigating complex process technologies such as CMOS, FinFET, and GAA at 7nm and below. You are a natural leader, capable of mentoring junior engineers, driving project execution, and ensuring the highest standards of product quality. Your expertise spans floorplanning, layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, and IO frame requirements. You understand the importance of customer requirements at the PHY level and are committed to delivering differentiated solutions that help customers meet their unique performance, power, and size targets. Your communication skills, both written and verbal, are exceptional, enabling you to foster accountability and ownership within cross-functional teams. Above all, you value inclusion, diversity, and continuous learning, and are eager to contribute to a workplace that celebrates innovative thinking and collaboration.</p>
<ul>
<li>Leading the development of cutting-edge DDR and HBM layout IPs, setting technical direction and standards.</li>
<li>Providing hands-on expertise in layout creation, problem-solving, and technical troubleshooting.</li>
<li>Mentoring and guiding junior engineers, fostering growth and technical excellence within the team.</li>
<li>Estimating project efforts, planning schedules, and executing projects in cross-functional settings.</li>
<li>Collaborating with teams to support critical layout requirements, floorplanning, and quality assurance processes.</li>
<li>Conducting layout reviews, ensuring compliance with release processes, and meeting stringent customer requirements.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>layout design, DDR and HBM PHY IPs, CMOS, FinFET, and GAA at 7nm and below, floorplanning, layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, IO frame requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and services. It was founded in 1986 and has grown to become a global company with over 10,000 employees.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/layout-design-staff-engineer/44408/93917039728</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7181ae65-d2c</externalid>
      <Title>Layout Design, Staff Engineer</Title>
      <Description><![CDATA[<p>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products. These engineers play a crucial role in advancing technology and enabling innovations in various industries.</p>
<p>We are seeking a passionate and innovative Staff Engineer who thrives on turning complex technical challenges into industry-leading solutions. You will lead the design and development of next-generation DDR and HBM PHY IP layout, driving technical innovation and best practices. You will also provide technical mentorship and guidance to junior engineers, fostering skill development and knowledge sharing across the team.</p>
<p>As a Staff Engineer, you will take ownership of layout planning, execution, and quality review processes to ensure on-time delivery of high-quality silicon IP. You will collaborate with cross-functional teams, including circuit design, verification, and product engineering, to meet project goals and customer requirements. You will also manage effort estimation, project scheduling, and execution in multi-disciplinary team settings.</p>
<p>The successful candidate will have a strong command of deep submicron effects, advanced floorplan techniques, and process technologies such as CMOS, FinFET, and GAA. You will also have expertise in layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, and bond-pad/IO frame design.</p>
<p>If you are a proactive problem solver, ready to lead, mentor, and make a tangible impact in a dynamic, fast-paced environment, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>physical layout design, advanced process nodes, deep submicron effects, advanced floorplan techniques, process technologies, layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, bond-pad/IO frame design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and intellectual property (IP) used in the design and development of semiconductor products.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/layout-design-staff-engineer/44408/93942161264</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8ec6d1f4-b98</externalid>
      <Title>Layout Design, Staff Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Layout Design, Staff Engineer to join our team in Bengaluru. As a Staff Engineer, you will be responsible for leading the design and development of next-generation DDR and HBM PHY IP layout, driving technical innovation and best practices.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead the design and development of next-generation DDR and HBM PHY IP layout</li>
<li>Provide technical mentorship and guidance to junior engineers</li>
<li>Take ownership of layout planning, execution, and quality review processes</li>
<li>Collaborate with cross-functional teams to meet project goals and customer requirements</li>
<li>Manage effort estimation, project scheduling, and execution in multi-disciplinary team settings</li>
</ul>
<p>Requirements:</p>
<ul>
<li>BTech/MTech degree in Electronics, Electrical Engineering, or a related field</li>
<li>Minimum 5 years of relevant experience in physical layout design, particularly in advanced nodes (7nm and below)</li>
<li>Strong command of deep submicron effects, advanced floorplan techniques, and process technologies such as CMOS, FinFET, and GAA</li>
<li>Expertise in layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, and bond-pad/IO frame design</li>
<li>Demonstrated ability to lead projects, manage schedules, and deliver high-quality results within tight timelines</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience with industry-standard EDA tools for layout and verification</li>
<li>Strong problem-solving skills and ability to work in a fast-paced environment</li>
<li>Excellent communication and collaboration skills</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive medical and healthcare plans</li>
<li>Time away from work programs</li>
<li>Family support programs</li>
<li>ESPP</li>
</ul>
<p>At Synopsys, we value diversity and inclusion and are committed to creating a workplace where everyone feels valued and supported. We are an equal opportunity employer and welcome applications from qualified candidates of all backgrounds.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>physical layout design, deep submicron effects, advanced floorplan techniques, process technologies, layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, bond-pad/IO frame design, industry-standard EDA tools, problem-solving skills, communication skills, collaboration skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and intellectual property (IP) for the semiconductor industry.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/layout-design-staff-engineer/44408/93942161216</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f5f009bc-2f2</externalid>
      <Title>SAP Business Cutover Project Manager</Title>
      <Description><![CDATA[<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. As a Senior SAP Business Cutover Project Manager, you will lead the end-to-end business cutover process for an SAP S/4 global programme, ensuring smooth transition from legacy operations to new systems.</p>
<p>This role focuses on business readiness, operational ramp down, and ramp up activities across supply chain, manufacturing, distribution, commercial and finance, minimizing disruption and safeguarding customer business activities.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing and owning the business cutover strategy and execution roadmap, integrating technical and business activities.</li>
<li>Working with the business teams to develop detailed ramp down/ramp up plans for critical business processes for all sites (e.g., production scheduling, inventory, order fulfilment, finance).</li>
<li>Ensuring compliance with governance, methodologies, and change control processes.</li>
<li>Coordinating readiness across the 5 key regions, including all associated plants, warehouses, and distribution centers.</li>
<li>Aligning cutover activities with seasonal demand cycles and logistics constraints.</li>
<li>Engaging business leaders and operational teams to validate readiness and dependencies.</li>
<li>Facilitating go/no-go readiness reviews with leadership and PMO.</li>
<li>Identifying and mitigating risks related to downtime, data migration, and operational continuity.</li>
<li>Defining rollback scenarios and contingency plans.</li>
<li>Driving cutover execution during trial runs, dress rehearsals, and cutover for go-live.</li>
<li>Providing real-time dashboards and executive updates on readiness and progress.</li>
<li>Leading hypercare activities and ensuring smooth handover to operations/support teams.</li>
<li>Capturing lessons learned for continuous improvement.</li>
</ul>
<p>As a successful candidate, you will have:</p>
<ul>
<li>10+ years in SAP program delivery, with proven experience in business cutover management for consumer goods and logistics.</li>
<li>Expertise in ramp down/ramp up planning for large-scale ERP transformations (ECC and S/4HANA).</li>
<li>Strong understanding of supply chain, manufacturing, distribution, commercial, and finance processes.</li>
<li>Familiarity with SAP modules.</li>
<li>Experience in global rollouts and multi-country deployments.</li>
<li>SAP or PMP certification preferred.</li>
<li>Ability to lead teams in preparing large proposals and program plans, and to leverage differentiators (e.g., specific consulting frameworks).</li>
<li>Outstanding communication skills (verbal and written) and presentation skills, with the ability to influence C-level stakeholders within client organizations.</li>
<li>Strategic thinker with a strong business orientation.</li>
<li>Ability to manage complex dependencies and drive decisions.</li>
<li>Skilled in balancing technical and operational priorities.</li>
<li>Willingness to work shifts during cutover activities.</li>
<li>Project-related mobility and willingness to travel.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SAP, ERP, Supply Chain, Manufacturing, Distribution, Commercial, Finance, Ramp Down/Ramp Up Planning, Change Control Processes, Governance Methodologies, Risk Management, Contingency Planning, Hypercare Activities, Lessons Learned</Skills>
      <Category>Consulting</Category>
      <Industry>Management Consulting</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/infosys.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting is a globally renowned management consulting firm that works with market leading brands across sectors. It has a presence in Europe and is recognized as one of the UK&apos;s top firms by the Financial Times and Forbes.</Employerdescription>
      <Employerwebsite>https://www.infosys.com/consulting/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/w6VnRV7YDnWPWYokeLPf6s/hybrid-sap-business-cutover-project-manager---digital-platforms---germany-in-munich-at-infosys-consulting---europe</Applyto>
      <Location>Munich</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>297a7703-794</externalid>
      <Title>Skilled Trade - Inspector Tooling and Layout - Kansas City Assembly</Title>
      <Description><![CDATA[<p>Tooling and Layout Inspectors are paid a top hourly base rate of $44.315, plus applicable shift, overtime, and holiday premiums. The starting rate for this position is $44.115 with pay increases to the top rate upon completion of three consecutive months of employment.</p>
<p>Immediate access to Best-in-Class Company provided healthcare!</p>
<p>Responsibilities:</p>
<ul>
<li>Willingness and ability to work on any assigned schedule, change shifts, work more than 8 hours per day and/or work overtime, all while maintaining good attendance.</li>
<li>Ability and willingness to learn and follow safety rules and procedures, to work in a team environment, treat co-workers with dignity and respect regardless of personal differences, and accept and follow instructions and requests from leadership.</li>
<li>Ability and willingness to understand and follow instructions, both oral and written.</li>
</ul>
<p>Qualifications:</p>
<p>Troubleshoot &amp; Repair:</p>
<ul>
<li>Troubleshoot and repair Perceptron/Zeiss systems</li>
<li>Build sort gauges from scratch using tools and parts in the lab</li>
<li>Rebuild any and all types of gauges in the production departments</li>
<li>Set up new gauges in the production departments</li>
<li>Understand the mechanics of all gauges and masters, and be able to re-set and master any gauge in the plant</li>
<li>Operational knowledge of all measuring devices in the IQ Lab, such as height gages, gage blocks, hardness testers, optical comparators, thread gages, calipers, micrometers, and pin gages</li>
<li>Ability to access and read blueprints to determine measurement characteristics</li>
</ul>
<p>Gauge Surveillance and Calibration:</p>
<ul>
<li>Perform yearly surveillance (visual inspection) &amp; calibration (measurement verification) on all gauges and fixtures</li>
<li>Enter and retrieve gauge information in the Gage Track record database</li>
</ul>
<p>Layout:</p>
<ul>
<li>Write programs for ATOS Scanbox, PC Demis CMM, and Polyworks measurement equipment</li>
<li>Perform part measurements per blueprint and Engineering direction</li>
<li>Work with Engineers to conduct part evaluations</li>
<li>Run scanning systems such as Handyscan and Metrascan</li>
<li>Set up and use the Leica Laser Tracker</li>
<li>Use a 3D printer to create holding fixtures and prototype tools/gages</li>
</ul>
<p>Computer, Software, PC requirements:</p>
<ul>
<li>Upload Datamyte and LMI data for Margin/Flush/Seal Gap readings</li>
<li>Use and upload Blue Light Laser Gage data</li>
<li>Modify CMM programs in PCDemis or ATOS for engineering changes</li>
<li>Ability to use Ford Corp CAD software Teamcenter and Catia</li>
</ul>
<p>Preventive Maintenance:</p>
<ul>
<li>Perform department PMs on department equipment as required</li>
</ul>
<p>Administrative / Computer tasks:</p>
<ul>
<li>Knowledge of Maximo &amp; Drawing Management System</li>
<li>GD&amp;T (geometric dimensioning and tolerancing)</li>
<li>Enter gauge repair diagnosis &amp; solutions into every GR Maximo ticket</li>
<li>Access and read blueprints to determine gauge replacement parts</li>
<li>Knowledge of Citrix and WBDM for all data storage</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$44.115 - $44.315 per hour</Salaryrange>
      <Skills>Perceptron/Zeiss Systems, Gauges, Blueprints, Measurement characteristics, ATOS Scanbox, PC Demis CMM, Polyworks, Leica Laser Tracker, 3D Printer, Datamyte, LMI, Blue Light Laser Gage, Teamcenter, Catia, Maximo, Drawing Management System, GD&amp;T, Geometric dimensioning and tolerancing, Citrix, WBDM</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/corporate.ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is a multinational automaker that designs, manufactures, and markets automobiles and commercial vehicles.</Employerdescription>
      <Employerwebsite>https://corporate.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/40031</Applyto>
      <Location>Kansas City</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>0f00522c-1ea</externalid>
      <Title>Inference Technical Lead, On-Device Transformers</Title>
      <Description><![CDATA[<p>Job Title: Inference Technical Lead, On-Device Transformers</p>
<p>Location: San Francisco</p>
<p>Department: Consumer Products</p>
<p>Job Type: Full time</p>
<p>Workplace Type: Hybrid</p>
<p><strong>Compensation</strong></p>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary listed below, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the Team</strong></p>
<p>The Future of Computing Research team is an applied research team in the Consumer Devices group focused on developing new methods and models that support our vision as we advance our mission of building AGI that benefits all of humanity.</p>
<p><strong>About the Role</strong></p>
<p>As a Technical Lead on the Future of Computing Research team, you will work together with both the best ML researchers in the world and the greatest design talent of our generation to push the frontier of model capabilities.</p>
<p><strong>This role is based in San Francisco, CA. We follow a hybrid model with 4 days a week in the office and offer relocation assistance to new employees.</strong></p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Evaluate and select silicon platforms (GPUs, NPUs, and specialized accelerators) for on-device and edge deployment of OpenAI models.</li>
<li>Work closely with research teams to co-design model architectures that meet real-world deployment constraints such as latency, memory, power, and bandwidth.</li>
<li>Analyze and model system performance, identifying tradeoffs between model design, memory hierarchy, compute throughput, and hardware capabilities.</li>
<li>Partner with hardware vendors and internal infrastructure teams to bring up new accelerators and ensure efficient execution of transformer workloads.</li>
<li>Build and lead a team of engineers responsible for implementing the low-level inference stack, including kernel development and runtime systems.</li>
<li>Run through the necessary walls to take nascent research capabilities and turn them into capabilities we can build on top of.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have experience evaluating or deploying workloads on GPUs, NPUs, or other specialized accelerators.</li>
<li>Understand the performance characteristics of transformer models, including attention, KV-cache behavior, and memory bandwidth requirements.</li>
<li>Have designed or optimized high-performance compute systems, such as inference engines, distributed runtimes, or hardware-aware ML pipelines.</li>
<li>Have experience building or leading teams working on low-level performance-critical software such as CUDA kernels, compilers, or ML runtimes.</li>
<li>Have already spent time in the weeds teaching models to speak and perceive.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
<p><strong>Salary</strong></p>
<p>Compensation Range: $445K</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$445K</Salaryrange>
      <Skills>Experience evaluating or deploying workloads on GPUs, NPUs, or other specialized accelerators, Understanding the performance characteristics of transformer models, including attention, KV-cache behavior, and memory bandwidth requirements, Designing or optimizing high-performance compute systems, such as inference engines, distributed runtimes, or hardware-aware ML pipelines, Building or leading teams working on low-level performance-critical software such as CUDA kernels, compilers, or ML runtimes, Teaching models to speak and perceive</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company. It pushes the boundaries of the capabilities of AI systems and seeks to safely deploy them to the world through its products.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/a653b035-a866-4a5c-9c2a-fda3c2950eee</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>e2865a22-2ab</externalid>
      <Title>Process Consultant</Title>
      <Description><![CDATA[<p>As a Process Consultant, you will play a key role in enabling our customers to shape a better tomorrow. Your primary responsibility will be to conduct discovery workshops based on a master process list, identify gaps and requirements, and build a roll-out template. You will also enhance and automate key processes such as export control, customs documentation, and trade compliance.</p>
<p>Your duties will include managing configuration and customization of SAP GTS solutions to meet specific needs, designing process flows, using SAP best practices, and providing customized solutions as per business needs. Additionally, you will be responsible for fit to standard workshops, project preparation and planning, requirement gathering and business blueprint, solution design and configuration, custom development, testing and quality assurance, end-user training and change management, cutover planning and data migration, go-live and hypercare support, and post-implementation review and continuous improvement.</p>
<p>To succeed in this role, you will need to have end-to-end project experience in a similar role/stream, good communication skills, great attention to detail, and adaptability and flexibility to manage deadline pressure and changes. Excellent presentation skills are also essential, as you will be required to deliver workshops and trainings for customers.</p>
<p>We cultivate a strong team spirit in which every win, big or small, belongs to all of us. We welcome curiosity, creativity, and unconventional thinking. We recognize the importance of healthy, tight-knit communities and sustainable environmental change, and we strive to enact positive change in any form within our reach.</p>
<p>At MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. That makes us the perfect sparring partner for your career, fueling your growth as an expert in your field while expanding your business network.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SAP GTS, process consulting, project management, requirement gathering, solution design, custom development, testing and quality assurance, end-user training and change management, cutover planning and data migration, go-live and hypercare support, post-implementation review and continuous improvement</Skills>
      <Category>Consulting</Category>
      <Industry>Technology</Industry>
      <Employername>MHP</Employername>
      <Employerlogo>https://logos.yubhub.co/mhp.com.png</Employerlogo>
      <Employerdescription>MHP is a technology and business partner that digitizes its customers&apos; processes and products, supporting them in their IT transformations along the entire value chain. It serves over 300 customers worldwide.</Employerdescription>
      <Employerwebsite>http://www.mhp.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=17658</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>ec31a052-c0e</externalid>
      <Title>AI Tutor - Urdu</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Urdu with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge of or experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, or podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Urdu, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Comfort providing high-quality voice recordings, Exceptional attention to linguistic nuance, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, Experience working with speech/audio datasets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090273007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>089cbbc0-81b</externalid>
      <Title>AI Tutor - Hindi</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Hindi with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstrated exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary widely based on project scope and contractor availability, with no fixed commitment required. Most projects require at least 10 hours per week to deliver effectively, though contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Hindi, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding of and taste for good/useful audio data, Strong command of advanced transcription and annotation practices, Background in linguistics or speech sciences, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, Portfolio (voice samples, annotated transcripts, or audio-related work)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090207007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>268f6e4b-e51</externalid>
      <Title>AI Tutor - Portuguese</Title>
      <Description><![CDATA[<p>As an AI Tutor specialising in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Portuguese with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organisational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstrated exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyse accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary widely based on project scope and contractor availability, with no fixed commitment required. Most projects require at least 10 hours per week to deliver effectively, though contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Portuguese, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organisational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding of and taste for good/useful audio data, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Portfolio (voice samples, annotated transcripts, or audio-related work)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.ai.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://x.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090221007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>71de1b49-ad6</externalid>
      <Title>AI Tutor - Tamil</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Tamil with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstrated exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary widely based on project scope and contractor availability, with no fixed commitment required. Most projects require at least 10 hours per week to deliver effectively, though contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Tamil, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding of and taste for good/useful audio data, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Portfolio (voice samples, annotated transcripts, or audio-related work)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.io.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.io/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090269007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>31bc0f5d-b2a</externalid>
      <Title>AI Tutor - Polish</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Polish with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstrated exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary widely based on project scope and contractor availability, with no fixed commitment required. Most projects require at least 10 hours per week to deliver effectively, though contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Polish, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, Deep understanding of and taste for good/useful audio data, Strong command of advanced transcription and annotation practices, Background in linguistics, Experience working with speech/audio datasets, Professional experience in voice work, Portfolio (voice samples, annotated transcripts, or audio-related work), Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI&apos;s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The company is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090218007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>158d72e0-139</externalid>
      <Title>Robotics Software Engineer, Air Vehicle Autonomy</Title>
      <Description><![CDATA[<p>We are looking for software engineers and roboticists excited about creating a powerful autonomy software stack that includes computer vision, motion planning, SLAM, controls, estimation, and secure communications.</p>
<p>As a Robotics Software Engineer, you will write and maintain core libraries and services that perform critical functions for collaborative teams of robots. You will own major feature development and rollout of large, complex features for our products. You will work closely with Anduril and 3rd party vehicle hardware teams, as well as operational subject matter experts (fighter pilots, UAV operators, etc.) to align on requirements during product development and iterate towards a final design.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Writing and maintaining core libraries and services that perform critical functions for collaborative teams of robots</li>
<li>Owning major feature development and rollout of large, complex features for our products</li>
<li>Working closely with Anduril and 3rd party vehicle hardware teams, as well as operational subject matter experts to align on requirements during product development and iterate towards a final design</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
<li>BS in Robotics, Computer Science, Mechatronics, Electrical Engineering, Mechanical Engineering, or related field</li>
<li>Proven understanding of data structures, algorithms, concurrency, and code optimization</li>
<li>Experience troubleshooting and analyzing remotely deployed software systems</li>
<li>Experience working with and testing electrical and mechanical systems</li>
<li>3+ years of experience with C++ or Rust in a Linux development environment</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>MS or PhD</li>
<li>Experience in one or more of the following: motion planning, perception, localization, mapping, controls, and related system performance metrics</li>
<li>Python, Rust, and/or Go experience</li>
<li>Experience programming for embedded and physical devices</li>
<li>Multi-agent coordination of UAVs</li>
<li>Complex frame transformation problems, such as target localization or multi degree of freedom robotic arms</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>C++, Rust, Linux development environment, Data structures, Algorithms, Concurrency, Code optimization, Troubleshooting, Analysis, Electrical engineering, Mechanical engineering, Motion planning, Perception, Localization, Mapping, Controls, Python, Go, Embedded systems, Physical devices, Multi-agent coordination, Complex frame transformation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril develops aerial and multi-domain robotic systems. The company is responsible for taking products like Fury (unmanned fighter jet) and Barracuda (air-breathing cruise missile) from concept to product.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4674090007</Applyto>
      <Location>Seattle, Washington, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b589fa63-68a</externalid>
      <Title>AI Tutor - Indonesian</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Indonesian with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what good and useful audio data is.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract|temporary|internship</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Native proficiency in Indonesian, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics (e.g., phonetics, phonology, sociolinguistics), Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, Portfolio (voice samples, annotated transcripts, or audio-related work)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5095657007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d8d85a5b-566</externalid>
      <Title>AI Tutor - English</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts. Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in English with exposure to diverse accents, dialects, or regional variations.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what good and useful audio data is.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits: US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process. Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract|temporary|internship</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Exceptional attention to linguistic nuance and auditory detail, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics or speech sciences, Experience working with speech/audio datasets or AI training data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The company has a small, highly motivated team focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090198007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>984a177d-b58</externalid>
      <Title>AI Tutor - Thai</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Thai with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what good and useful audio data is.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract|temporary|internship</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Thai, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090272007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2dd31d8a-430</externalid>
      <Title>Robotics Software Engineer, Vehicle Software</Title>
      <Description><![CDATA[<p>About the Team</p>
<p>Air Dominance &amp; Strike (AD&amp;S) is responsible for autonomous robotics systems like the Fury unmanned fighter jet and the Barracuda family of advanced effects. The AD&amp;S Vehicle Software team is responsible for the software running on these systems. Our software engineers collaborate with other engineering disciplines to develop software for vehicle control, networking, sensor integration, and telemetry.</p>
<p>We are looking for engineers excited to build the foundational vehicle software stack that supports the wide range of AD&amp;S initiatives, from early concept simulation to first flight to live operations to large scale fleet management.</p>
<p>Responsibilities</p>
<ul>
<li>Write and maintain core libraries and services that perform critical functions for collaborative teams of robots - for example, motion deconfliction and contingency management of fast mover air vehicles.</li>
<li>Own major feature development and rollout of large, complex features for our products - recent examples include developing terminal-phase autonomy for various air vehicles and developing a test plan on live surrogates.</li>
<li>Work closely with Anduril and 3rd party vehicle hardware teams, as well as operational subject matter experts (fighter pilots, UAV operators, etc.) to align on requirements during product development and iterate towards a final design.</li>
</ul>
<p>Required Qualifications</p>
<ul>
<li>Eligible to obtain and maintain an active U.S. Top Secret security clearance</li>
<li>BS in Robotics, Computer Science, Mechatronics, Electrical Engineering, Mechanical Engineering, or related field</li>
<li>Proven understanding of data structures, algorithms, concurrency, and code optimization</li>
<li>Experience troubleshooting and analyzing remotely deployed software systems</li>
<li>Experience working with and testing electrical and mechanical systems</li>
<li>3+ years of experience with C++ or Rust in a Linux development environment</li>
</ul>
<p>Preferred Qualifications</p>
<ul>
<li>MS or PhD</li>
<li>Experience in one or more of the following: motion planning, perception, localization, mapping, controls, and related system performance metrics.</li>
<li>Python, Rust, and/or Go experience</li>
<li>Experience programming for embedded and physical devices</li>
<li>Multi-agent coordination of UAVs</li>
<li>Complex frame transformation problems, such as target localization or multi degree of freedom robotic arms</li>
</ul>
<p>US Salary Range $191,000-$253,000 USD</p>
<p>The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. Actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. Highly competitive equity grants are included in the majority of full time offers; and are considered part of Anduril&#39;s total compensation package. Additionally, Anduril offers top-tier benefits for full-time employees, including:</p>
<p>Healthcare Benefits</p>
<ul>
<li>US Roles: Comprehensive medical, dental, and vision plans at little to no cost to you.</li>
<li>UK &amp; AUS Roles: We cover full cost of medical insurance premiums for you and your dependents.</li>
<li>IE Roles: We offer an annual contribution toward your private health insurance for you and your dependents.</li>
</ul>
<p>Additional Benefits</p>
<ul>
<li>Income Protection: Anduril covers life and disability insurance for all employees.</li>
<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December. Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs.</li>
<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along with resources to support you and your partner from planning to parenting.</li>
<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching. Additional work-life services, such as legal and financial support, are also available.</li>
<li>Professional Development: Annual reimbursement for professional development</li>
<li>Commuter Benefits: Company-funded commuter benefits based on your region.</li>
<li>Relocation Assistance: Available depending on role eligibility.</li>
</ul>
<p>Retirement Savings Plan</p>
<ul>
<li>US Roles: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options.</li>
<li>UK &amp; IE Roles: Pension plan with employer match.</li>
<li>AUS Roles: Superannuation plan.</li>
</ul>
<p>Protecting Yourself from Recruitment Scams</p>
<p>Anduril is committed to maintaining the integrity of our talent acquisition process and the security of our candidates. We&#39;ve observed a rise in sophisticated phishing and fraudulent schemes where individuals impersonate Anduril representatives, luring job seekers with false interviews or job offers. These scammers often attempt to extract payment or sensitive personal information.</p>
<p>To ensure your safety and help you navigate your job search with confidence, please keep the following critical points in mind:</p>
<ul>
<li>No Financial Requests: Anduril will never solicit payment or demand personal financial details (such as banking information, credit card numbers, or social security numbers) at any stage of our hiring process. Our legitimate recruitment is entirely free for candidates.</li>
<li>Please always verify communications:
<ul>
<li>Direct from Anduril: If you receive an email from one of our recruiters, it will only come from an @anduril.com address.</li>
<li>Via Agency Partner: If contacted by a recruiting agency for an Anduril role, their email will clearly identify their agency. If you suspect any suspicious activity, please verify the agency&#39;s authenticity by reaching out to contact@anduril.com.</li>
</ul>
</li>
<li>Exercise Caution with Unsolicited Outreach: If you receive any communication that appears suspicious, contains grammatical errors, or makes unusual requests, do not engage. Always confirm the sender&#39;s email domain is @anduril.com before providing any personal information or clicking on links.</li>
<li>What to Do If You Suspect Fraud: Should you encounter any questionable or fraudulent outreach claiming to be from Anduril, please report it immediately to contact@anduril.com. Your proactive caution is invaluable in protecting your personal information and upholding the security and trustworthiness of our recruitment efforts.</li>
</ul>
<p>Data Privacy</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>C++, Rust, Linux development environment, Data structures, Algorithms, Concurrency, Code optimization, Troubleshooting, Analyzing remotely deployed software systems, Electrical and mechanical systems, Motion planning, Perception, Localization, Mapping, Controls, System performance metrics, Python, Go, Embedded and physical devices, Multi-agent coordination of UAVs, Complex frame transformation problems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops autonomous robotics systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4672892007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>08a5f496-732</externalid>
      <Title>Robotics Software Engineer</Title>
      <Description><![CDATA[<p>As a Robotics Software Engineer on our Tactical Recon &amp; Strike team, you&#39;ll be at the forefront of cutting-edge autonomous systems development. You&#39;ll tackle diverse challenges in autonomy, systems integration, robotics, and networking, making critical engineering decisions that directly impact mission success.</p>
<p>Your role will be pivotal in ensuring Anduril&#39;s products work seamlessly together to achieve a variety of crucial outcomes. You&#39;ll develop innovative solutions for complex robotics problems, balance pragmatic engineering trade-offs with mission-critical requirements, and collaborate across teams to integrate software with hardware systems.</p>
<p>Contributing to the entire product lifecycle, from concept to deployment, you&#39;ll rapidly prototype and iterate on software solutions. We&#39;re looking for someone who thrives in a fast-paced environment and isn&#39;t afraid to tackle ambiguous problems. Your &#39;Whatever It Takes&#39; mindset will be key in executing tasks efficiently, scalably, and pragmatically, always keeping the mission at the forefront of your work.</p>
<p>This role offers the opportunity to make a significant impact on next-generation defence technology, working with state-of-the-art robotics and autonomous systems. You&#39;ll be part of a team that values innovation, quick iteration, and delivering high-quality solutions that meet real-world needs.</p>
<p>Must be eligible to obtain and maintain an active U.S. Secret security clearance. This position will be located at our office in Atlanta, GA (relocation benefits provided).</p>
<p>Key Responsibilities:</p>
<ul>
<li>Develop and maintain core robotics libraries, including frame transformations, targeting, and guidance systems, that will be utilized across all Anduril robotics platforms</li>
<li>Lead the development and implementation of major features for our products, such as designing and building Software-in-the-Loop simulators for advanced systems like Altius</li>
<li>Optimise performance of existing products, primarily focused on our Altius Drone product line</li>
<li>Collaborate closely with hardware and manufacturing teams throughout the product development lifecycle, providing timely feedback to influence and enhance final hardware designs</li>
<li>Troubleshoot and resolve complex issues in deployed systems, ensuring optimal performance in the field</li>
<li>Contribute to the design and implementation of multi-agent coordination systems for UAVs</li>
<li>Participate in the full software development lifecycle, from concept and design through testing and deployment</li>
<li>Stay current with emerging technologies and industry trends, recommending and implementing innovations to improve our products and processes</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Robotics, Computer Science, or related field</li>
<li>3+ years of professional software development experience</li>
<li>Strong proficiency in C++ or Rust, with experience in Linux development environments</li>
<li>Demonstrated expertise in data structures, algorithms, concurrency, and code optimisation</li>
<li>Proven experience troubleshooting and analysing remotely deployed software systems</li>
<li>Hands-on experience working with and testing electrical and mechanical systems</li>
<li>Ability to collaborate effectively with cross-functional teams, including hardware and manufacturing</li>
<li>Strong problem-solving skills and a &#39;Whatever It Takes&#39; mindset</li>
<li>Excellent communication skills, both written and verbal</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance</li>
<li>Willingness to relocate to Atlanta, GA</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s or Ph.D. in a relevant field (e.g., Robotics, Computer Science, Electrical Engineering)</li>
<li>Expertise in one or more advanced robotics areas: motion planning, perception, localisation, mapping, or controls</li>
<li>Experience with performance optimisation and metrics for complex robotic systems</li>
<li>Proficiency in Python, Rust, and/or Go, in addition to C++</li>
<li>Hands-on experience programming for embedded systems and physical devices</li>
<li>Background in multi-agent coordination, particularly with UAVs</li>
<li>Demonstrated ability to solve complex frame transformation problems (e.g., target localisation, multi-degree-of-freedom robotic arms)</li>
<li>Experience with real-time operating systems and distributed computing</li>
<li>Familiarity with machine learning and AI applications in robotics</li>
<li>Knowledge of sensor fusion techniques and implementation</li>
<li>Understanding of aerodynamics and flight dynamics as applied to UAV systems</li>
<li>Experience with simulation environments for robotics testing and development</li>
<li>Track record of contributions to open-source robotics projects or relevant publications</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$165,000-$218,000 USD</Salaryrange>
      <Skills>C++, Rust, Linux development environments, Data structures, Algorithms, Concurrency, Code optimisation, Troubleshooting, Analysis, Electrical and mechanical systems, Collaboration, Problem-solving, Communication, Python, Go, Embedded systems, Physical devices, Multi-agent coordination, Motion planning, Perception, Localisation, Mapping, Controls, Performance optimisation, Real-time operating systems, Distributed computing, Machine learning, AI applications, Sensor fusion, Aerodynamics, Flight dynamics, Simulation environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5078772007</Applyto>
      <Location>Atlanta, Georgia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>126e36d8-668</externalid>
      <Title>Perception Engineering Intern</Title>
      <Description><![CDATA[<p>We are seeking a perception engineer with a strong background in computer vision to join our rapidly growing team in Costa Mesa, CA. In this role, you will be at the forefront of developing advanced perception systems for complex autonomous aerial platforms.</p>
<p>Your expertise in computer vision algorithms, combined with your understanding of robotics principles, will be crucial in solving a wide variety of challenges involving visual perception, SLAM, motion planning, controls, and state estimation. This role requires not only technical expertise in computer vision and robotics but also the ability to make pragmatic engineering tradeoffs, considering the unique constraints of aerial platforms.</p>
<p>Your work will directly contribute to the seamless integration of Anduril&#39;s products, achieving critical outcomes in autonomous operations. This position demands strong systems-level knowledge and experience, as you&#39;ll be working on the intersection of computer vision, robotics, and autonomous systems.</p>
<p>If you are passionate about pushing the boundaries of computer vision in robotics, possess a &#39;Whatever It Takes&#39; mindset, and can execute in an expedient, scalable, and pragmatic way while keeping the mission top-of-mind and making sound engineering decisions, then this role is for you.</p>
<p>Responsibilities:</p>
<ul>
<li>Work at the intersection of 3D perception and computer vision, developing robust algorithms that power real-time decision-making for autonomous aerial systems.</li>
<li>Develop and implement advanced structure from motion and SLAM algorithms to create accurate 3D models from multiple camera inputs in real-time.</li>
<li>Integrate perception outputs with path planning algorithms to enable autonomous navigation in complex, unstructured environments.</li>
<li>Design experiments, data collection efforts, and curate training/evaluation sets to develop insights for both internal purposes and customers.</li>
<li>Collaborate closely with robotics, software, and hardware teams to integrate perception algorithms into autonomous aerial systems.</li>
<li>Work with vendors and government stakeholders to advance the state-of-the-art in perception and world modeling for autonomous aerial systems.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>BS in Robotics, Computer Science, Mechatronics, Electrical Engineering, Mechanical Engineering, or related field.</li>
<li>Strong knowledge of 3D computer vision concepts, including multi-view geometry, camera models, photogrammetry, depth estimation, and 3D reconstruction techniques.</li>
<li>Fluency in standard domain libraries (numpy, opencv, pytorch, etc.).</li>
<li>Proven understanding of data structures, algorithms, concurrency, and code optimization.</li>
<li>Experience working with Python, PyTorch, or C++ programming languages.</li>
<li>Experience deploying software to end customers, internal or external.</li>
<li>Must be willing to travel 25%.</li>
<li>Eligible to obtain an active U.S. Secret security clearance.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>MS or PhD in Robotics, Computer Science, Engineering, or related field.</li>
<li>Experience with perception systems for aerial robotics or other highly dynamic platforms.</li>
<li>Experience with real-world sensor integrations, including LiDAR, RGB-D cameras, IR cameras, stereo cameras, or TOF cameras.</li>
<li>Experience with GPU / CUDA programming for accelerated computer vision processing.</li>
<li>Knowledge of path planning algorithms and their integration with perception systems in dynamic environments.</li>
</ul>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>computer vision, robotics, Python, PyTorch, C++, numpy, opencv, data structures, algorithms, concurrency, code optimization, perception systems, aerial robotics, LiDAR, RGB-D cameras, IR cameras, stereo cameras, TOF cameras, GPU, CUDA</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defence technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4830032007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>19912495-792</externalid>
      <Title>Robotics Engineer, Sensor Integration</Title>
      <Description><![CDATA[<p>We are seeking a Robotics Software Engineer with expertise in C++ and Rust to join our team. In this role, you will design, develop, and optimize software solutions for autonomous robotic systems, focusing on sensor integration, networking, and multi-agent coordination.</p>
<p>You will work on interdisciplinary challenges, collaborate across teams, and deploy critical software in real-world environments.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Developing mission-critical software for networking, sensor integration, and autonomy across robotic platforms</li>
<li>Working with various sensors (e.g., cameras, LiDAR, IMUs) to enable perception, localization, and navigation</li>
<li>Designing and optimizing distributed communication networks and message-passing frameworks for multi-robot coordination</li>
<li>Collaborating with hardware, systems, and manufacturing teams to seamlessly integrate software into physical systems</li>
<li>Traveling up to 25% to test, debug, and deploy systems in operational environments</li>
<li>Contributing to the entire software lifecycle, including prototyping, implementation, testing, and deployment</li>
<li>Enhancing system efficiency, such as reducing latency and battery consumption and improving resource utilization</li>
<li>Analyzing and resolving issues in deployed systems, ensuring reliability and operational success</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Robotics, Computer Science, Software Engineering, Mathematics, or Physics</li>
<li>2+ years of hands-on experience developing production-grade software in C++ and/or Rust</li>
<li>Experience with distributed communication networks, protocols, and message standards</li>
<li>Proven ability to work with and integrate sensors (e.g., LiDAR, cameras, IMUs) into robotics systems</li>
<li>Ability to navigate and contribute to complex systems and established codebases</li>
<li>Passion for building software that directly influences mission-critical outcomes</li>
<li>Willingness to travel up to 25%</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>Experience with motion planning, perception, localization, and multi-agent coordination</li>
<li>Proficiency in designing Software-in-the-Loop (SIL) simulation environments</li>
<li>Experience working with embedded systems and physical devices</li>
<li>Familiarity with metrics and optimization techniques for robotics systems</li>
<li>Knowledge of AI/ML applications in robotics</li>
<li>Active or prior U.S. Secret clearance is a plus</li>
</ul>
<p>The salary range for this role is $191,000-$253,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$191,000-$253,000 USD</Salaryrange>
      <Skills>C++, Rust, distributed communication networks, protocols, message standards, sensor integration, networking, multi-agent coordination, motion planning, perception, localization, Software-in-the-Loop (SIL) simulation environments, embedded systems, physical devices, metrics and optimization techniques, AI/ML applications in robotics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops advanced robotics and artificial intelligence systems for various industries.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5096506007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ac5a9e64-0de</externalid>
      <Title>Lead Software Engineer, Mission System, Maritime</Title>
      <Description><![CDATA[<p>We are looking for a Lead Mission Systems Software Engineer to join our rapidly growing Maritime team in Boston or Quincy, MA. In this role, you will design and develop the core decision-making and autonomy software that powers our autonomous underwater vehicles.</p>
<p>You will lead the architecture, implementation, and deployment of the logic that enables these systems to understand their environment, navigate safely, respond to obstacles or threats, and execute complex missions with limited human involvement.</p>
<p>As a technical leader, you will provide hands-on contributions across software development, establishing a long-term software roadmap, and leading the team through execution of that plan.</p>
<p>Collaborating closely with program leadership and cross-functional teams from other Anduril products, you&#39;ll ensure that designs align with customer requirements while maintaining traceability to program office technical decisions.</p>
<p>You will act as a subject matter expert in mission systems development and integration, presenting to customers and demonstrating overall system performance.</p>
<p>If you enjoy tackling complex technical challenges and owning the development of high-impact products, this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Act as a technical leader on a small team owning software for Copperhead</li>
<li>Draw on both your technical expertise and leadership skills to set objectives, build cross-functional teams, and rapidly drive projects to completion</li>
<li>Drive the design and implementation of development processes for the initial delivery and subsequent iteration of payloads and mission systems</li>
<li>Act as the technical owner for an entire mission system, including stakeholder and customer engagement, requirements definition, roadmap management, team coordination, design, implementation, sustainment, and evolution</li>
<li>Organize integration events, scoping key deliverables and impact to the broader program</li>
<li>Own customer success through the design and delivery of a fully operational mission system</li>
<li>Leverage internal product and program-specific engineering teams to rapidly deliver capability beyond the scope of current platforms, with a clear path for both architecture and capability evolution over time</li>
<li>Generate system solutions to improve reliability, ease-of-use, and capability across a variety of customer missions</li>
<li>Write and maintain core libraries (frame transformations, targeting and guidance, communications, etc.) that all robotics platforms at Anduril will use</li>
<li>Own major feature development for Copperhead and manage rollout to the fleet</li>
<li>Author documents to fulfill specific customer requests, including white papers and reports</li>
<li>Travel up to 15% of time to build, test, and deploy capabilities in the real world, and collaborate with other teams or end-users</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>Strong engineering background from industry or school, ideally in areas/fields such as Robotics, Computer Science, Software Engineering, Mechatronics, Electrical Engineering, Mathematics, or Physics</li>
<li>Experience in a leadership position within a high-performing technology organization</li>
<li>Proven understanding of data structures, algorithms, concurrency, and code optimization</li>
<li>6+ years of professional C++ or Rust programming experience in a Linux development environment</li>
<li>Experience troubleshooting and analyzing remotely deployed software systems</li>
<li>Experience with the development and sustainment of distributed software platform and application architectures, running under dynamic network topologies</li>
<li>Capacity to work holistically on software-enabled capabilities up and down the software stack and through the full lifecycle of design, implementation, operation, and sustainment</li>
<li>Demonstrated curiosity and ability to learn outside of core discipline, and a desire to work on critical software that has a real-world impact</li>
<li>Experience engaging with customers to represent the technical aspects of a product portfolio regarding missions and payloads</li>
<li>Strong communication skills and ability to collaborate across technical teams</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>MS or PhD in Computer Science, Robotics, or a related field</li>
<li>Experience with modeling and simulation of complex systems</li>
<li>Proficiency in Python, Rust, and/or Go</li>
<li>Experience in one or more of the following: high-performance computing, network programming, fault tolerance, fault handling, DevSecOps</li>
<li>Experience solving complex frame transformation problems, such as target localization or multi-degree-of-freedom robotic arms</li>
<li>Experience in one or more of the following: sensor integration, tracking and estimation, motion planning, perception, localization, mapping, guidance, navigation, and control, and related system performance metrics</li>
<li>Hands-on experience developing software for embedded and physical devices</li>
<li>Solid understanding of robotics systems, common interfaces, and protocols (gRPC, TCP/IP, RS485, RS232)</li>
<li>Demonstrated ability to learn and grow individually, while effectively mentoring senior team members, building team cohesion, and increasing team capability</li>
</ul>
<p>Experience Level: Senior</p>
<p>Employment Type: Full-time</p>
<p>Workplace Type: Onsite</p>
<p>Category: Engineering</p>
<p>Industry: Technology</p>
<p>Salary Range: $220,000-$292,000 USD</p>
<p>Required Skills:</p>
<ul>
<li>C++</li>
<li>Rust</li>
<li>Linux development environment</li>
<li>Data structures</li>
<li>Algorithms</li>
<li>Concurrency</li>
<li>Code optimization</li>
<li>Distributed software platform and application architectures</li>
<li>Dynamic network topologies</li>
<li>Software-enabled capabilities</li>
<li>Critical software</li>
<li>Complex systems</li>
<li>Modeling and simulation</li>
<li>High-performance computing</li>
<li>Network programming</li>
<li>Fault tolerance</li>
<li>Fault handling</li>
<li>DevSecOps</li>
<li>Sensor integration</li>
<li>Tracking and estimation</li>
<li>Motion planning</li>
<li>Perception</li>
<li>Localization</li>
<li>Mapping</li>
<li>Guidance</li>
<li>Navigation</li>
<li>Control</li>
</ul>
<p>Preferred Skills:</p>
<ul>
<li>Python</li>
<li>Go</li>
<li>High-performance computing</li>
<li>Network programming</li>
<li>Fault tolerance</li>
<li>Fault handling</li>
<li>DevSecOps</li>
</ul>
]]></Description>
      <Jobtype>Full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000-$292,000 USD</Salaryrange>
      <Skills>C++, Rust, Linux development environment, Data structures, Algorithms, Concurrency, Code optimization, Distributed software platform and application architectures, Dynamic network topologies, Software-enabled capabilities, Critical software, Complex systems, Modeling and simulation, High-performance computing, Network programming, Fault tolerance, Fault handling, DevSecOps, Sensor integration, Tracking and estimation, Motion planning, Perception, Localization, Mapping, Guidance, Navigation, Control, Python, Go</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that transforms U.S. and allied military capabilities with advanced technology.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5033769007</Applyto>
      <Location>Quincy, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e00b7052-70b</externalid>
      <Title>Senior Business Systems Analyst, Finance Systems</Title>
      <Description><![CDATA[<p>We are seeking an experienced Senior Business Systems Analyst to join our Finance Systems team at Anthropic. In this role, you will serve as the internal functional lead for our Workday Financials implementation, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting.</p>
<p>You will develop Prism Analytics and Accounting Center solutions, gather requirements and build reporting capabilities, and collaborate closely with cross-functional teams to drive the successful adoption of our new ERP platform.</p>
<p>This is a critical role that will directly shape how Anthropic&#39;s finance organisation operates as we scale toward public company readiness. You will work at the intersection of finance domain expertise and technical implementation, partnering with the implementation partner, engineering teams, and finance stakeholders to build a world-class financial systems foundation.</p>
<p>Responsibilities:</p>
<ul>
<li>ERP Core Financials Implementation: Serve as internal functional lead for Workday Financials implementation, partnering with consultants to drive configuration decisions, validate designs, and ensure business requirements are met</li>
<li>Financial Data Model (FDM) Design: Own the design and configuration of Chart of Accounts, Worktags, dimensional hierarchies, and Accounting Books that will serve as the source of truth for all financial reporting, ensuring support for both GAAP and Management reporting requirements</li>
<li>Prism Analytics Development: Develop and maintain Prism/Accounting Center solutions from source analysis and ingestion design through build, testing, cutover, and hypercare, including integration with external data sources like BigQuery and Pigment</li>
<li>Requirements Gathering &amp; Reporting: Gather business requirements from Finance, Accounting, and FP&amp;A stakeholders, translating them into hands-on development of executive reporting, dashboards, and analytics solutions</li>
<li>Workshop Participation &amp; Solution Design: Participate in implementation workshops, challenge requirements, and translate business needs into buildable designs and testable acceptance criteria; manage defects and data quality issues throughout the project lifecycle</li>
<li>Cross-Functional Collaboration: Collaborate with Integrations, Security, and Financials configuration teams to align master data, journals, controls, and performance service level agreements; partner with Data Infrastructure and BizTech teams on system integrations</li>
<li>Cutover &amp; Hypercare Planning: Prepare cutover plans, data migration strategies, reconciliation frameworks, and hypercare plans; document data lineage, controls, and audit artifacts to support SOX compliance requirements</li>
<li>Platform Expansion &amp; Adoption: Work closely with engineering teams and business stakeholders to drive ongoing expansion and adoption of the Workday platform, identifying opportunities for process improvement and automation</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 8+ years of experience in finance systems, ERP implementation, or business systems analysis roles, with at least 5 years of hands-on Workday Financials experience</li>
<li>Possess deep expertise in Workday Financial Data Model (FDM), including Chart of Accounts design, Worktags configuration, dimensional hierarchies, and Accounting Books setup</li>
<li>Have strong experience with Workday Prism Analytics, including data modeling, source integration, calculated fields, and report development</li>
<li>Are skilled at translating complex business requirements into technical solutions, bridging the gap between finance stakeholders and technical implementation teams</li>
<li>Have experience with full ERP implementation lifecycles, including requirements gathering, configuration, testing, data migration, cutover planning, and hypercare</li>
<li>Possess strong understanding of financial accounting processes including General Ledger, multi-entity consolidation, intercompany accounting, and management reporting</li>
<li>Have excellent stakeholder management and communication skills, with ability to work effectively with finance leadership, accounting teams, and technical partners</li>
<li>Demonstrate strong analytical and problem-solving skills with attention to detail and commitment to data accuracy and integrity</li>
<li>Are comfortable working in fast-paced, high-growth environments with evolving requirements and tight timelines</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in accounting, finance, or CPA certification with understanding of GAAP/IFRS reporting requirements</li>
<li>Experience with Workday Accounting Center for complex journal automation and subledger accounting</li>
<li>Technical proficiency with SQL, Python, or scripting languages for data analysis and integration support</li>
<li>Experience integrating Workday with external data platforms such as BigQuery or cloud data warehouses</li>
<li>Knowledge of SOX compliance requirements and internal controls for financial systems</li>
<li>Experience with EPM/FP&amp;A systems such as Pigment, Anaplan, or Adaptive Planning and their integration with ERP</li>
<li>Prior experience at high-growth technology companies scaling toward IPO readiness</li>
<li>Familiarity with Workday HCM and understanding of HCM-Financials integration points</li>
<li>Experience with data migration tools, ETL processes, and reconciliation frameworks for ERP implementations</li>
</ul>
<p>The annual compensation range for this role is $205,000-$265,000 USD.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$205,000-$265,000 USD</Salaryrange>
      <Skills>Workday Financials, Workday Financial Data Model (FDM), Chart of Accounts design, Worktags configuration, Dimensional hierarchies, Accounting Books setup, Prism Analytics, Data modeling, Source integration, Calculated fields, Report development, ERP implementation lifecycles, Requirements gathering, Configuration, Testing, Data migration, Cutover planning, Hypercare, Financial accounting processes, General Ledger, Multi-entity consolidation, Intercompany accounting, Management reporting, Stakeholder management, Communication skills, Analytical skills, Problem-solving skills, Data accuracy and integrity, SQL, Python, Scripting languages, BigQuery, Cloud data warehouses, SOX compliance requirements, Internal controls, EPM/FP&amp;A systems, Pigment, Anaplan, Adaptive Planning, ERP integration, High-growth technology companies, IPO readiness, Workday HCM, HCM-Financials integration points, Data migration tools, ETL processes, Reconciliation frameworks</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company focused on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4991194008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2e1d4fee-d7d</externalid>
      <Title>Lead Software Engineer, Mission System, Maritime</Title>
      <Description><![CDATA[<p>We are looking for a Lead Mission Systems Software Engineer to join our rapidly growing Maritime team in Boston or Quincy, MA. In this role, you will design and develop the core decision-making and autonomy software that powers our autonomous underwater vehicles.</p>
<p>You will lead the architecture, implementation, and deployment of the logic that enables these systems to understand their environment, navigate safely, respond to obstacles or threats, and execute complex missions with limited human involvement.</p>
<p>As a technical leader, you will provide hands-on contributions across software development, establish a long-term software roadmap, and lead the team through execution of that plan.</p>
<p>Collaborating closely with program leadership and cross-functional teams from other Anduril products, you&#39;ll ensure that designs align with customer requirements while maintaining traceability to program office technical decisions.</p>
<p>You will act as a subject matter expert in mission systems development and integration, presenting to customers and demonstrating overall system performance.</p>
<p>If you enjoy tackling complex technical challenges and owning the development of high-impact products, this role is for you.</p>
<p>In this role, you will:</p>
<ul>
<li>Act as a technical leader on a small team owning software for Copperhead</li>
<li>Draw on both your technical expertise and leadership skills to set objectives, build cross-functional teams, and rapidly drive projects to completion</li>
<li>Drive the design and implementation of development processes for the initial delivery and subsequent iteration of payloads and mission systems</li>
<li>Act as the technical owner for an entire mission system, including stakeholder and customer engagement, requirements definition, roadmap management, team coordination, design, implementation, sustainment, and evolution</li>
<li>Organize integration events, scoping key deliverables and impact to broader program</li>
<li>Own customer success through the design and delivery of a fully operational mission system</li>
<li>Leverage internal product and program-specific engineering teams to rapidly deliver capability beyond the scope of current platforms, with a clear path for both architecture and capability evolution over time</li>
<li>Generate system solutions to improve reliability, ease-of-use, and capability across a variety of customer missions</li>
<li>Write and maintain core libraries (frame transformations, targeting and guidance, communications, etc.) that all robotics platforms at Anduril will use</li>
<li>Own major feature development for Copperhead and manage rollout to the fleet</li>
<li>Author documents to fulfill specific customer requests, including white papers and reports</li>
<li>Travel up to 15% of time to build, test, and deploy capabilities in the real world, and collaborate with other teams or end-users</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Strong engineering background from industry or school, ideally in areas/fields such as Robotics, Computer Science, Software Engineering, Mechatronics, Electrical Engineering, Mathematics, or Physics</li>
<li>Experience in a leadership position within a high-performing technology organization</li>
<li>Proven understanding of data structures, algorithms, concurrency, and code optimization</li>
<li>6+ years of professional C++ or Rust programming experience in a Linux development environment</li>
<li>Experience troubleshooting and analyzing remotely deployed software systems</li>
<li>Experience with the development and sustainment of distributed software platform and application architectures, running under dynamic network topologies</li>
<li>Capacity to work holistically on software-enabled capabilities up and down the software stack and across the full lifecycle: design, implementation, operation, and sustainment</li>
<li>Demonstrated curiosity and ability to learn outside of core discipline, and a desire to work on critical software that has a real-world impact</li>
<li>Experience engaging with customers to represent the technical aspects of a product portfolio regarding missions and payloads</li>
<li>Strong communication skills and ability to collaborate across technical teams</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance</li>
</ul>
<p>Preferred qualifications include:</p>
<ul>
<li>MS or PhD in Computer Science, Robotics, or a related field</li>
<li>Experience with modeling and simulation of complex systems</li>
<li>Proficiency in Python, Rust, and/or Go</li>
<li>Experience in one or more of the following: high-performance computing, network programming, fault tolerance, fault handling, DevSecOps</li>
<li>Experience solving complex frame transformation problems, such as target localization or multi-degree-of-freedom robotic arms</li>
<li>Experience in one or more of the following: sensor integration, tracking and estimation, motion planning, perception, localization, mapping, guidance, navigation, and control, and related system performance metrics</li>
<li>Hands-on experience developing software for embedded and physical devices</li>
<li>Solid understanding of robotics systems, common interfaces, and protocols (gRPC, TCP/IP, RS485, RS232)</li>
<li>Demonstrated ability to learn and grow individually, while effectively mentoring senior team members, building team cohesion, and increasing team capability</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000-$292,000 USD</Salaryrange>
      <Skills>C++, Rust, Linux development environment, Data structures, Algorithms, Concurrency, Code optimization, Troubleshooting, Analysis, Distributed software platform, Application architectures, Dynamic network topologies, Communication skills, Collaboration, U.S. Secret security clearance, Python, Go, High-performance computing, Network programming, Fault tolerance, Fault handling, DevSecOps, Sensor integration, Tracking and estimation, Motion planning, Perception, Localization, Mapping, Guidance, Navigation, Control, Robotics systems, Common interfaces, Protocols</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that designs, builds, and sells military systems.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5033767007</Applyto>
      <Location>Boston, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>993beba7-87d</externalid>
      <Title>AI Tutor - Vietnamese</Title>
      <Description><![CDATA[<p>As an AI Tutor specializing in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts. Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Vietnamese with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Multilingual audio capabilities, Proprietary software, Audio data curation, Speech recognition, Auditory experiences, Accent variation, Noise in real-world recordings, Multilingual audio processing, Annotation tools, Efficient audio workflows, Native proficiency in Vietnamese, Proficiency in English, Strong auditory perception, Multilingual audio content, Speech accuracy, Cultural vocal expressions, Contextual interpretation, Transcription, High-quality voice recordings, Feedback on audio samples, Independent judgments, Ambiguous audio scenarios, Defensible annotation decisions, Portfolio, Voice samples, Annotated transcripts, Audio-related work, Quality, Methodology, Attention to detail, Exceptional attention to linguistic nuance, Auditory detail, Data quality, Advanced transcription and annotation practices, Disfluencies, Accents, Prosodic features, Linguistics, Phonetics, Phonology, Sociolinguistics, Speech sciences, Cognitive science, Pronunciation differences, Multilingual speech patterns, Speech/audio datasets, Annotation workflows, AI training data, Training voice models, Data quality impacts model performance, Professional experience in voice work, Voice acting, Voice recording, Podcasting, Measurable audience, Similar audio production, Clarity and recording quality</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090274007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>80234cf0-6c3</externalid>
      <Title>AI Tutor - Urdu</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Urdu with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Urdu, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090273007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c592ab39-06d</externalid>
      <Title>AI Tutor - Norwegian</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Norwegian with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Norwegian, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090215007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>793fd488-522</externalid>
      <Title>AI Tutor - Turkish</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Turkish with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Turkish, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. It has a small team focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5095662007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>56511179-58e</externalid>
      <Title>AI Tutor - Marathi</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Marathi with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Marathi, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Portfolio (voice samples, annotated transcripts, or audio-related work)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090213007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0b4750c4-02c</externalid>
      <Title>AI Tutor - Thai</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Thai with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Thai, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090272007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>82470fe3-22f</externalid>
      <Title>AI Tutor - Korean</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Native proficiency in Korean with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p><strong>Location and Other Expectations</strong></p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Korean, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Portfolio: Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090210007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2386c05e-0ba</externalid>
      <Title>AI Tutor - Telugu</Title>
      <Description><![CDATA[<p>As an AI Tutor specialised in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Native proficiency in Telugu with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organisational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyse accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p><strong>Location and Other Expectations</strong></p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Telugu, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organisational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090270007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7321e6d5-9bb</externalid>
      <Title>AI Tutor - Japanese</Title>
      <Description><![CDATA[<p>As an AI Tutor specializing in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Native proficiency in Japanese with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p><strong>Location and Other Expectations</strong></p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Japanese, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback, Strong comprehension skills and ability to make independent judgments, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Demonstration of exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5095658007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5fd664dc-310</externalid>
      <Title>AI Tutor - Tamil</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Native proficiency in Tamil with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p><strong>Location and Other Expectations</strong></p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Tamil, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Portfolio (voice samples, annotated transcripts, or audio-related work)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090269007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>738a6055-653</externalid>
      <Title>AI Tutor - Italian</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
</ul>
<ul>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
</ul>
<ul>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
</ul>
<ul>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Italian with exposure to diverse accents, dialects, or regional variations.</li>
</ul>
<ul>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
</ul>
<ul>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
</ul>
<ul>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
</ul>
<ul>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
</ul>
<ul>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
</ul>
<ul>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
</ul>
<ul>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
</ul>
<ul>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
</ul>
<ul>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
</ul>
<ul>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
</ul>
<ul>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
</ul>
<ul>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge of or experience with training voice models, and an understanding of how data quality impacts model performance.</li>
</ul>
<ul>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
</ul>
<ul>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
</ul>
<ul>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
</ul>
<ul>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
</ul>
<ul>
<li>For contractor positions, hours vary with project scope and contractor availability, and there is no fixed commitment. Most projects average at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
</ul>
<ul>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
</ul>
<ul>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
</ul>
<ul>
<li>We are unable to provide visa sponsorship.</li>
</ul>
<ul>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>multilingual audio capabilities, speech recognition, auditory experiences, linguistic and prosodic details, professional audio standards, accent variation, noise in real-world recordings, multilingual audio processing, annotation tools, efficient audio workflows, native proficiency in Italian, English (minimum B2 level), strong auditory perception, nuances in speech, accents, pronunciation, audio quality, speech accuracy, cultural vocal expressions, contextual interpretation, transcription, voice recordings, feedback on audio samples, independent judgments, ambiguous audio material, communication, interpersonal, analytical, detail-oriented, organizational, audio-related feedback, commitment to developing AI, exceptional attention to linguistic nuance, auditory detail, data quality, deep understanding of good/useful Audio data, advanced transcription and annotation practices, handling disfluencies, prosodic features, background in linguistics, phonetics, phonology, sociolinguistics, speech sciences, cognitive science, equivalent practical experience, analysis of accent variation, pronunciation differences, multilingual speech patterns, experience working with speech/audio datasets, annotation workflows, AI training data, training voice models, understanding of data quality impacts model performance, professional experience in voice work, voice acting, voice recording, podcasting, similar audio production, exercise independent judgment, defensible annotation decisions, portfolio, voice samples, annotated transcripts, audio-related work</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090209007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>76163bba-2eb</externalid>
      <Title>AI Tutor - Swedish</Title>
      <Description><![CDATA[<p>As an AI Tutor specialised in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
</ul>
<ul>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
</ul>
<ul>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
</ul>
<ul>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Swedish with exposure to diverse accents, dialects, or regional variations.</li>
</ul>
<ul>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
</ul>
<ul>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
</ul>
<ul>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
</ul>
<ul>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
</ul>
<ul>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
</ul>
<ul>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
</ul>
<ul>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organisational skills, with the ability to articulate audio-related feedback effectively.</li>
</ul>
<ul>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
</ul>
<ul>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
</ul>
<ul>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
</ul>
<ul>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyse accent variation, pronunciation differences, and multilingual speech patterns.</li>
</ul>
<ul>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge of or experience with training voice models, and an understanding of how data quality impacts model performance.</li>
</ul>
<ul>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
</ul>
<ul>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
</ul>
<ul>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
</ul>
<ul>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
</ul>
<ul>
<li>For contractor positions, hours vary with project scope and contractor availability, and there is no fixed commitment. Most projects average at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
</ul>
<ul>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
</ul>
<ul>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
</ul>
<ul>
<li>We are unable to provide visa sponsorship.</li>
</ul>
<ul>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Swedish, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organisational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The organisation is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090265007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>047a7c93-c55</externalid>
      <Title>AI Tutor - Hindi</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Hindi with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge of or experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary with project scope and contractor availability, and there is no fixed commitment. Most projects average at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Multilingual audio capabilities, Proprietary software, Audio data curation, Annotation tools, Speech recognition, Auditory experiences, Diverse languages, Accents, Cultural contexts, High-quality audio data, Clear spoken output, Linguistic and prosodic details, Professional audio standards, Speech modulation, Accent variation, Noise in real-world recordings, Multilingual audio processing, Efficient audio workflows, Native proficiency in Hindi, English (minimum B2 level), Strong auditory perception, Multilingual audio content, Speech accuracy, Cultural vocal expressions, Contextual interpretation, Transcription, Audio quality, Voice recordings, Feedback on audio samples, Independent judgments, Ambiguous audio material, Noisy or accented speech, Communication, Interpersonal, Analytical, Detail-oriented, Organizational, Independent judgment, Defensible annotation decisions, Voice samples, Annotated transcripts, Audio-related work, Quality, Methodology, Attention to detail, Exceptional attention to linguistic nuance, Auditory detail, Data quality, Advanced transcription and annotation practices, Disfluencies, Prosodic features, Intonation, Stress, Rhythm, Emotion, Linguistics, Phonetics, Phonology, Sociolinguistics, Speech sciences, Cognitive science, Pronunciation differences, Multilingual speech patterns, Speech/audio datasets, Annotation workflows, AI training data, Training voice models, Data quality impacts model performance, Voice work, Voice acting, Voice recording, Podcasting, Measurable audience, Clarity and recording quality</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090207007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>db57818b-2d4</externalid>
      <Title>AI Tutor - Punjabi</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
</ul>
<ul>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
</ul>
<ul>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
</ul>
<ul>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Punjabi with exposure to diverse accents, dialects, or regional variations.</li>
</ul>
<ul>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
</ul>
<ul>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
</ul>
<ul>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
</ul>
<ul>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
</ul>
<ul>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
</ul>
<ul>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
</ul>
<ul>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
</ul>
<ul>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
</ul>
<ul>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
</ul>
<ul>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
</ul>
<ul>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
</ul>
<ul>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge of or experience with training voice models, and an understanding of how data quality impacts model performance.</li>
</ul>
<ul>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
</ul>
<ul>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
</ul>
<ul>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
</ul>
<ul>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary widely with project scope and contractor availability, and no fixed commitment is required. Most projects take at least 10 hours per week to deliver effectively, though contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming or Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those working from a personal device, your computer must be a Chromebook, a Mac running macOS 11.0 or later, or a PC running Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Punjabi, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090246007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a15f16b0-f9b</externalid>
      <Title>AI Tutor - Hebrew</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Hebrew with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstrated exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including experience training voice models and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., an X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary widely with project scope and contractor availability, and no fixed commitment is required. Most projects take at least 10 hours per week to deliver effectively, though contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming or Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those working from a personal device, your computer must be a Chromebook, a Mac running macOS 11.0 or later, or a PC running Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Multilingual audio capabilities, Proprietary software, High-quality audio data, Speech recognition, Auditory experiences, Diverse languages, Accents, Cultural contexts, Audio clips, Voice recordings, Speech samples, Auditory elements, Professional audio standards, Speech modulation, Accent variation, Noise in real-world recordings, Multilingual audio processing, Annotation tools, Efficient audio workflows, Native proficiency in Hebrew, Proficiency in English, Strong auditory perception, Multilingual audio content, Speech accuracy, Cultural vocal expressions, Contextual interpretation, Transcription, Audio quality, Comfort providing high-quality voice recordings, Feedback on audio samples, Strong comprehension skills, Independent judgments, Ambiguous audio material, Noisy or accented speech, Communication skills, Interpersonal skills, Analytical skills, Detail-oriented skills, Organizational skills, Commitment to developing AI, Sophisticated multilingual audio capabilities, Exceptional attention to linguistic nuance, Auditory detail, Data quality, Advanced transcription and annotation practices, Handling disfluencies, Prosodic features, Intonation, Stress, Rhythm, Emotion, Background in linguistics, Speech sciences, Cognitive science, Linguistics, Phonetics, Phonology, Sociolinguistics, Pronunciation differences, Multilingual speech patterns, Experience working with speech/audio datasets, Annotation workflows, AI training data, Training voice models, Data quality impacts model performance, Professional experience in voice work, Voice acting, Voice recording, Podcasting, Audio production, Attention to clarity and recording quality, Independent judgment in ambiguous audio scenarios, Defensible annotation decisions, Portfolio, Voice samples, Annotated transcripts, Audio-related work, Quality, Methodology, Attention to detail</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090206007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>670aee10-fc8</externalid>
      <Title>AI Tutor - Portuguese</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Portuguese with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstrated exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including experience training voice models and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., an X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary widely with project scope and contractor availability, and no fixed commitment is required. Most projects take at least 10 hours per week to deliver effectively, though contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming or Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those working from a personal device, your computer must be a Chromebook, a Mac running macOS 11.0 or later, or a PC running Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Portuguese, English, Auditory perception, Multilingual audio content, Transcription, Voice recordings, Audio analysis, Linguistics, Speech sciences, Cognitive science, Exceptional attention to linguistic nuance, Advanced transcription and annotation practices, Disfluencies, accents, and prosodic features, Accent variation, pronunciation differences, and multilingual speech patterns, Speech/audio datasets, annotation workflows, or AI training data, Training voice models, Data quality impacts model performance, Voice work, including voice acting, voice recording, podcasting, Independent judgment in ambiguous audio scenarios, Defensible annotation decisions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090221007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d7ad6078-168</externalid>
      <Title>AI Tutor - Gujarati</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Gujarati with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Demonstrated exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including experience training voice models and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., an X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary widely with project scope and contractor availability, and no fixed commitment is required. Most projects take at least 10 hours per week to deliver effectively, though contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming or Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those working from a personal device, your computer must be a Chromebook, a Mac running macOS 11.0 or later, or a PC running Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Gujarati, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience, Portfolio (voice samples, annotated transcripts, or audio-related work), Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090203007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3931fe10-1fd</externalid>
      <Title>AI Tutor - Polish</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Polish with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary based on project scope and contractor availability, with no fixed commitment. Most projects may require at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Polish, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio, Exceptional attention to linguistic nuance, Deep understanding of what makes audio data good and useful, Strong command of advanced transcription and annotation practices, Background in linguistics, Experience working with speech/audio datasets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090218007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f64f8085-f84</externalid>
      <Title>AI Tutor - German</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in German with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary based on project scope and contractor availability, with no fixed commitment. Most projects may require at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in German, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding of what makes audio data good and useful, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090120007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9c65d655-aff</externalid>
      <Title>AI Tutor - Finnish</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Finnish with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary based on project scope and contractor availability, with no fixed commitment. Most projects may require at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Finnish, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong command of advanced transcription and annotation practices, Background in linguistics, Experience working with speech/audio datasets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090199007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>c190b4d0-c63</externalid>
      <Title>AI Tutor - English</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in English with exposure to diverse accents, dialects, or regional variations.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>Deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including knowledge/experience with training voice models, and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): Voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects may require at least 10 hours per week to deliver effectively, though this is not a fixed commitment and depends on the scope of work. Contractors have full flexibility to set their own hours and determine the exact amount of time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, a Mac with macOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits: US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process. Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Exceptional attention to linguistic nuance, Deep understanding of what makes audio data good and useful, Strong command of advanced transcription and annotation practices, Background in linguistics or speech sciences, Experience working with speech/audio datasets</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090198007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>02d3881d-a73</externalid>
      <Title>AI Tutor - Dutch</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Dutch with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including experience with training voice models and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., an X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice work, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary with project scope and contractor availability, with no fixed commitment required. Most projects typically require at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those working from a personal device, your computer must be a Chromebook, a Mac running macOS 11.0 or later, or a PC running Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Dutch, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. It has a small team of highly motivated engineers.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090197007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5021e5ca-0af</externalid>
      <Title>AI Tutor - Danish</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Danish with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including experience with training voice models and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., an X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice work, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary with project scope and contractor availability, with no fixed commitment required. Most projects typically require at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those working from a personal device, your computer must be a Chromebook, a Mac running macOS 11.0 or later, or a PC running Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Danish, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small and highly motivated.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090189007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d766bce6-527</externalid>
      <Title>AI Tutor - Chinese</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Chinese with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including experience with training voice models and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., an X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice work, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary with project scope and contractor availability, with no fixed commitment required. Most projects typically require at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those working from a personal device, your computer must be a Chromebook, a Mac running macOS 11.0 or later, or a PC running Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Chinese, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, auditory detail, and data quality, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics, speech sciences, cognitive science, or a related field, Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, including voice acting, voice recording, podcasting, Demonstrated ability to exercise independent judgment in ambiguous audio scenarios</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The company has a small, highly motivated team focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090180007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6255d8e7-64b</externalid>
      <Title>AI Tutor - Bengali</Title>
      <Description><![CDATA[<p>As an AI Tutor specialized in multilingual audio capabilities, you will contribute to xAI&#39;s mission by training and refining Grok to excel in voice interactions, speech recognition, and auditory experiences across diverse languages, accents, and cultural contexts.</p>
<p>Your work will focus on curating and annotating high-quality audio data to enhance Grok&#39;s global accessibility, enabling natural spoken interactions for users worldwide, bridging language barriers through accurate speech processing, and improving the AI&#39;s handling of multilingual audio nuances.</p>
<p>Responsibilities:</p>
<ul>
<li>Use proprietary software to provide labels, annotations, recordings, and inputs on projects involving multilingual audio clips, voice recordings, speech samples, and auditory elements in various languages.</li>
<li>Support the delivery of high-quality curated audio data that ensures clear, natural spoken output, accurate representation of linguistic and prosodic details (such as intonation, rhythm, and accent), and professional audio standards.</li>
<li>Collaborate with technical staff to develop tasks that improve AI&#39;s ability to handle speech modulation, accent variation, noise in real-world recordings, and multilingual audio processing.</li>
<li>Work with technical staff to improve annotation tools for efficient audio workflows.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Native proficiency in Bengali with exposure to diverse accents, dialects, or regional variations.</li>
<li>Proficiency in English (minimum B2 level) with clear, natural vocal delivery and pronunciation suitable for audio recording purposes.</li>
<li>Strong auditory perception to identify nuances in speech, accents, pronunciation, intonation, and audio quality across languages.</li>
<li>Demonstrated ability to handle multilingual audio content, including evaluating speech accuracy, cultural vocal expressions, and contextual interpretation in spoken form.</li>
<li>Demonstrated ability to transcribe audio with high accuracy across accents and varying audio quality.</li>
<li>Comfort providing high-quality voice recordings and feedback on audio samples in multiple languages.</li>
<li>Strong comprehension skills and the ability to make independent judgments on ambiguous or varied audio material, including noisy or accented speech.</li>
<li>Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, with the ability to articulate audio-related feedback effectively.</li>
<li>Commitment to developing AI that masters sophisticated multilingual audio capabilities.</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Exceptional attention to linguistic nuance, auditory detail, and data quality beyond standard transcription work.</li>
<li>A deep understanding of, and taste for, what makes audio data good and useful.</li>
<li>Strong command of advanced transcription and annotation practices, including handling disfluencies, accents, and prosodic features (intonation, stress, rhythm, emotion, etc.) with high consistency and accuracy.</li>
<li>Background in linguistics (e.g., phonetics, phonology, sociolinguistics), speech sciences, cognitive science, or a related field, or equivalent practical experience, with demonstrated ability to analyze accent variation, pronunciation differences, and multilingual speech patterns.</li>
<li>Experience working with speech/audio datasets, annotation workflows, or AI training data, including experience with training voice models and an understanding of how data quality impacts model performance.</li>
<li>Professional experience in voice work, including voice acting, voice recording, podcasting with a measurable audience (e.g., an X following), or similar audio production demonstrating attention to clarity and recording quality.</li>
<li>Demonstrated ability to exercise independent judgment in ambiguous audio scenarios and make consistent, defensible annotation decisions.</li>
<li>Portfolio (strongly preferred for advanced candidates): voice samples, annotated transcripts, or audio-related work demonstrating quality, methodology, and attention to detail.</li>
<li>Candidates with professional experience in voice work, linguistics, speech data, or speech evaluation and research are especially encouraged to apply.</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours vary with project scope and contractor availability, with no fixed commitment required. Most projects typically require at least 10 hours per week to deliver effectively, but contractors have full flexibility to set their own hours and determine the time needed to complete deliverables.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note that we are unable to hire in Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those working from a personal device, your computer must be a Chromebook, a Mac running macOS 11.0 or later, or a PC running Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US-based candidates: $35/hour - $45/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: Information will be provided to you during the recruitment process.</p>
<p>Benefits vary based on employment type, location, and jurisdiction. Benefits for eligible U.S.-based positions include health insurance, 401(k) plan, and paid sick leave. Specific details and role-specific information will be provided to you during the interview process.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time|part-time|contract</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$35/hour - $45/hour</Salaryrange>
      <Skills>Native proficiency in Bengali, Proficiency in English, Strong auditory perception, Demonstrated ability to handle multilingual audio content, Demonstrated ability to transcribe audio with high accuracy, Comfort providing high-quality voice recordings and feedback on audio samples, Strong comprehension skills, Strong communication, interpersonal, analytical, detail-oriented, and organizational skills, Exceptional attention to linguistic nuance, Deep understanding and taste of what good/useful Audio data is, Strong command of advanced transcription and annotation practices, Background in linguistics (e.g., phonetics, phonology, sociolinguistics), Experience working with speech/audio datasets, annotation workflows, or AI training data, Professional experience in voice work, Portfolio (voice samples, annotated transcripts, or audio-related work)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090176007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>19b66bec-a6b</externalid>
      <Title>Research Engineer / Scientist (SLAM)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Research Engineer/Scientist to design, implement, and advance state-of-the-art simultaneous localization and mapping systems. This role is focused on modern SLAM techniques, both classical and learning-based, with an emphasis on scalable state estimation, sensor fusion, and long-term mapping in complex, dynamic environments.</p>
<p>As a Research Engineer/Scientist, you will:</p>
<ul>
<li>Design and implement modern SLAM systems for real-world environments, including visual, visual-inertial, lidar, or multi-sensor configurations.</li>
<li>Develop robust localization and mapping pipelines, including pose estimation, map management, loop closure, and global optimization.</li>
<li>Research and prototype learning-based or hybrid SLAM approaches that combine classical geometry with modern machine learning methods.</li>
<li>Build and maintain scalable state estimation frameworks, including factor graph optimization, filtering, and smoothing techniques.</li>
<li>Develop sensor fusion strategies that integrate cameras, IMUs, depth sensors, lidar, or other modalities to improve robustness and accuracy.</li>
<li>Analyze failure modes in real-world SLAM deployments (e.g., perceptual aliasing, dynamic scenes, drift) and design principled solutions.</li>
<li>Create evaluation frameworks, benchmarks, and metrics to measure SLAM accuracy, robustness, and performance across large datasets.</li>
<li>Optimize performance across the stack, including real-time constraints, memory usage, and compute efficiency, for large-scale and production systems.</li>
<li>Collaborate with reconstruction, simulation, and infrastructure teams to ensure SLAM outputs integrate cleanly with downstream world modeling and rendering pipelines.</li>
<li>Contribute to technical direction by proposing new research ideas, mentoring teammates, and helping define best practices for localization and mapping across the organization.</li>
</ul>
<p>We&#39;re looking for someone with 6+ years of experience working on SLAM, state estimation, robotics perception, or related areas. A strong foundation in probabilistic estimation, optimization, and geometric vision is required, as well as proficiency in Python and/or C++.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$250,000-$350,000 base salary (good-faith estimate for San Francisco Bay Area upon hire; actual offer based on experience, skills, and qualifications)</Salaryrange>
      <Skills>SLAM, state estimation, robotics perception, probabilistic estimation, optimization, geometric vision, Python, C++</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>World Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/worldlabs.ai.png</Employerlogo>
      <Employerdescription>World Labs builds foundational world models that can perceive, generate, reason, and interact with the 3D world.</Employerdescription>
      <Employerwebsite>https://worldlabs.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/worldlabs/jobs/4135311009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>baf3d89c-84b</externalid>
      <Title>Senior Manager, Perception</Title>
      <Description><![CDATA[<p>As a member of the HMS Perception team, you will conduct software development at the intersection of classical state estimation techniques, sensor fusion, artificial intelligence, machine learning, and machine perception. You will deploy cutting-edge technology onto real hardware, providing robust and accurate estimates of vehicle pose and surroundings for real missions.</p>
<p>Shield AI is pushing the envelope by applying advanced AI solutions to real hardware systems. An ideal candidate should aspire to be a part of this industry-changing team developing and deploying advanced technology that can truly make an impact.</p>
<p>We are seeking a skilled and motivated leader with 10+ years of experience to manage a technical team supporting the development, integration, and testing of perception algorithms for advanced aerospace, defense, and robotics systems. In this role, you will contribute to implementing and integrating innovative perception solutions while collaborating with a multidisciplinary team of engineers to meet challenging operational requirements.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead teams across autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution. Balance hands-on technical oversight with performance optimization, innovation, and clear stakeholder communication.</li>
<li>Write production-quality software in C++</li>
<li>Produce an Assured Position, Navigation, and Timing (A-PNT) system to enable reliable autonomy in GNSS-degraded or denied environments</li>
<li>Extend and specialize Shield AI’s state-of-the-art state estimation framework for new sensors, platforms, and missions</li>
<li>Write test code to validate your software with simulated and real-world data</li>
<li>Collaborate with hardware and test teams to validate algorithms/code on aerial platforms</li>
<li>Write analyzers to ingest data and produce statistics to validate code quality</li>
<li>Enhance sensor models within a high-fidelity simulation environment</li>
<li>Work in a fast-paced, collaborative, continuous development environment, enhancing analysis and benchmarking capabilities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$229,233 - $343,849 a year</Salaryrange>
      <Skills>C++, Sensor fusion, Artificial intelligence, Machine learning, Machine perception, Kalman Filter, Factor Graphs, Computer Vision, OpenCV, Unix environments, Robotics technologies, Unmanned system technologies, High-fidelity simulation, Sensor modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015 with a mission to protect service members and civilians with intelligent systems.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/501e3703-1a63-4773-b961-6029e5fb71d6</Applyto>
      <Location>San Diego, California / Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ceba9e5b-250</externalid>
      <Title>Senior Backend Engineer, Product and Infra</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior Backend Engineer to build the systems and services that power our product experience. You&#39;ll own the backend infrastructure that makes our content discoverable, our features responsive, and our platform reliable at scale.</p>
<p>Your work will directly shape what users experience: designing APIs that serve rich content, building services that handle real-time interactions, implementing content-matching systems for rights and safety, and ensuring our platform performs under load. You&#39;ll architect systems that are fast, correct, and maintainable.</p>
<p>You&#39;ll collaborate closely with Product, ML Research, and Mobile/Web teams to ship features that matter. We use Python, Go, BigQuery, Pub/Sub, and a microservices architecture, but we care more about good judgment than specific tool experience.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and maintain application-level data models that organize rich content into canonical structures optimized for product features, search, and retrieval.</li>
<li>Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.</li>
<li>Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.</li>
<li>Implement and refine fingerprinting pipelines used for deduplication, rights attribution, safety checks, and provenance validation.</li>
<li>Own data consistency between ingestion systems, application surfaces, metadata storage, and downstream reporting environments.</li>
<li>Define and track key operational metrics, including latency, completeness, accuracy, and event health.</li>
<li>Collaborate with Product teams to ensure content structures and APIs support evolving features and high-quality user experiences.</li>
<li>Partner with Analytics and Research teams to deliver clean usage datasets for experimentation, model evaluation, reporting, and internal insights.</li>
<li>Operate large analytical workloads in BigQuery and build reusable Dataflow/Beam components for structured processing.</li>
<li>Improve reliability and scale by designing robust schema evolution strategies, idempotent pipelines, and well-instrumented operational flows.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building production backend services and APIs at scale</li>
<li>Experience building ETL/ELT pipelines, event processing systems, and structured data models for applications or analytics</li>
<li>Strong background in data modeling, metadata systems, indexing, or building canonical representations for heterogeneous content</li>
<li>Proficiency in Python, Go, SQL, and scalable data-processing frameworks (Dataflow/Beam, Spark, or similar)</li>
<li>Familiarity with BigQuery or other analytical data warehouses and strong comfort optimizing large queries and schemas</li>
<li>Experience with event-driven architectures, Pub/Sub, or Kafka-like systems</li>
<li>Strong understanding of data quality, schema evolution, lineage, and operational reliability</li>
<li>Ability to design pipelines that balance cost, latency, correctness, and scale</li>
<li>Clear communication skills and an ability to collaborate closely with Product, Research, and Analytics stakeholders</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience building application-facing APIs or microservices that expose structured content</li>
<li>Background in information retrieval, indexing systems, or search infrastructure</li>
<li>Experience with fingerprinting, perceptual hashing, audio similarity metrics, or content-matching algorithms</li>
<li>Familiarity with ML workflows and how downstream analytics and usage data feed back into research pipelines</li>
<li>Understanding of batch + streaming architectures and how to blend them effectively</li>
<li>Experience with Go, Next.js, or React Native for occasional full-stack contributions</li>
</ul>
<p><strong>Why Join Us</strong></p>
<p>You will design the core data services and pipelines that power our product experience, analytics, and business operations. You’ll work on high-impact data challenges involving real-time signals, large-scale metadata systems, and cross-platform consistency. You’ll join a small, fast-moving team where you’ll shape the structure, reliability, and intelligence of our downstream data ecosystem.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Highly competitive salary and equity</li>
<li>Quarterly productivity budget</li>
<li>Flexible time off</li>
<li>Fantastic office location in Manhattan</li>
<li>Productivity package, including ChatGPT Plus, Claude Code, and Copilot</li>
<li>Top-notch private health, dental, and vision insurance for you and your dependents</li>
<li>401(k) plan options with employer matching</li>
<li>Concierge medical/primary care through One Medical and Rightway</li>
<li>Mental health support from Spring Health</li>
<li>Personalized life insurance, travel assistance, and many other perks</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $220,000</Salaryrange>
      <Skills>Python, Go, BigQuery, Pub/Sub, Data modeling, Metadata systems, Indexing, Canonical representations, ETL/ELT pipelines, Event processing systems, Structured data models, Scalable data-processing frameworks, Analytical data warehouses, Event-driven architectures, Kafka-like systems, Data quality, Schema evolution, Lineage, Operational reliability, Application-facing APIs, Microservices, Information retrieval, Indexing systems, Search infrastructure, Fingerprinting, Perceptual hashing, Audio similarity metrics, Content-matching algorithms, ML workflows, Batch + streaming architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Udio</Employername>
      <Employerlogo>https://logos.yubhub.co/udio.com.png</Employerlogo>
      <Employerdescription>Udio is a technology company that powers product experiences.</Employerdescription>
      <Employerwebsite>https://www.udio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/udio/jobs/4987729008</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>1044b51e-cc6</externalid>
      <Title>Senior Manager, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs. The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead teams across autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution.</li>
<li>Develop advanced perception algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks by integrating data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques.</li>
<li>Develop state estimation capabilities by designing and refining algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs.</li>
<li>Analyze and utilize sensor ICDs to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance by tuning and evaluating perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration by working closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings by leveraging synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams to ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing by contributing novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel Requirement – Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience.</li>
<li>Typically requires a minimum of 10 years of related experience with a Bachelor’s degree; or 9 years and a Master’s degree; or 7 years with a PhD; or equivalent work experience.</li>
<li>7+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D.</li>
<li>2+ years of people leadership experience.</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models.</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches.</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications.</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs.</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams.</li>
<li>Ability to obtain a SECRET clearance.</li>
</ul>
<p><strong>Preferences:</strong></p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems.</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks.</li>
<li>Experience deploying perception software on SWaP-constrained platforms.</li>
<li>Familiarity with validating perception systems during flight test events or operational environments.</li>
<li>Understanding of sensing challenges in denied or degraded conditions.</li>
<li>Exposure to perception applications across air, maritime, and ground platforms.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$229,233 - $343,849 a year</Salaryrange>
      <Skills>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, 10+ years of related experience, 7+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D, 2+ years of people leadership experience, Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models, Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches, Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications, Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs, Proficiency with version control, debugging, and test-driven development in cross-functional teams, Ability to obtain a SECRET clearance, Hands-on integration or algorithm development with airborne sensing systems, Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks, Experience deploying perception software on SWaP-constrained platforms, Familiarity with validating perception systems during flight test events or operational environments, Understanding of sensing challenges in denied or degraded conditions, Exposure to perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/cebc0dd3-ffbf-4013-a2ad-ae32732cabd3</Applyto>
      <Location>Washington, DC / San Diego, California / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3f0b0cce-7be</externalid>
      <Title>Manager, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs.
The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>We are seeking a skilled and motivated manager to lead technical teams and support direct projects integrating perception solutions for defense platforms.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land.
Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p>Responsibilities:</p>
<ul>
<li>Multidisciplinary Team Leadership – Lead teams across autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution.</li>
<li>Develop advanced perception algorithms – Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks – Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities – Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs – Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance – Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration – Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings – Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams – Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing – Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel Requirement – Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
<li>Typically requires a minimum of 7 years of related experience with a Bachelor’s degree; or 5 years and a Master’s degree; or 4 years with a PhD; or equivalent work experience</li>
<li>5+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D</li>
<li>2+ years of people leadership experience</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models.</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches.</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications.</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs.</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams.</li>
<li>Ability to obtain a SECRET clearance.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems.</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks.</li>
<li>Experience deploying perception software on SWaP-constrained platforms.</li>
<li>Familiarity with validating perception systems during flight test events or operational environments.</li>
<li>Understanding of sensing challenges in denied or degraded conditions.</li>
<li>Exposure to perception applications across air, maritime, and ground platforms.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,441 - $330,661 a year</Salaryrange>
      <Skills>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience, Typically requires a minimum of 7 years of related experience with a Bachelor’s degree; or 5 years and a Master’s degree; or 4 years with a PhD; or equivalent work experience, 5+ years of experience in Unmanned Systems programs in the DoD or applied R&amp;D, 2+ years of people leadership experience, Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models, Hands-on integration or algorithm development with airborne sensing systems, Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks, Experience deploying perception software on SWaP-constrained platforms, Familiarity with validating perception systems during flight test events or operational environments, Understanding of sensing challenges in denied or degraded conditions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/1120529c-2f7d-4b27-a29b-50976c49c433</Applyto>
      <Location>Washington, DC</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>841c78ea-841</externalid>
      <Title>Senior Engineer, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs.
The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop advanced perception algorithms: Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks: Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities: Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs: Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance: Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration: Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings: Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams: Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing: Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$160,000 - $240,000 a year</Salaryrange>
      <Skills>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience, Typically requires a minimum of 5 years of related experience with a Bachelor’s degree; or 4 years and a Master’s degree; or 2 years with a PhD; or equivalent work experience, Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models, Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches, Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications, Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs, Proficiency with version control, debugging, and test-driven development in cross-functional teams, Ability to obtain a SECRET clearance, Hands-on integration or algorithm development with airborne sensing systems, Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks, Experience deploying perception software on SWaP-constrained platforms, Familiarity with validating perception systems during flight test events or operational environments, Understanding of sensing challenges in denied or degraded conditions, Exposure to perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company that develops intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/d6f1d906-5c1e-4640-87f3-3e31e1b45fa6</Applyto>
      <Location>San Diego, California / Washington, DC / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5f911dd8-860</externalid>
      <Title>Senior Staff Engineer, Software - Perception</Title>
      <Description><![CDATA[<p>This role is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs. The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop advanced perception algorithms: Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks: Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities: Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs: Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance: Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration: Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings: Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams: Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing: Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel requirement: Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Requirements:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
<li>Typically requires a minimum of 10 years of related experience with a Bachelor’s degree; or 9 years and a Master’s degree; or 7 years with a PhD; or equivalent work experience</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams</li>
<li>Ability to obtain a SECRET clearance</li>
</ul>
<p><strong>Preferences:</strong></p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks</li>
<li>Experience deploying perception software on SWaP-constrained platforms</li>
<li>Familiarity with validating perception systems during flight test events or operational environments</li>
<li>Understanding of sensing challenges in denied or degraded conditions</li>
<li>Exposure to perception applications across air, maritime, and ground platforms</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,800 - $331,200 a year</Salaryrange>
      <Skills>algorithm development, sensor fusion, state estimation, Kalman Filters, multi-target tracking, deep learning-based detection models, probabilistic or rule-based approaches, SLAM, visual-inertial odometry, sensor-fused localization, version control, debugging, test-driven development, hands-on integration with airborne sensing systems, ML frameworks such as PyTorch or TensorFlow, perception software deployment on SWaP-constrained platforms, validating perception systems during flight test events or operational environments, sensing challenges in denied or degraded conditions, perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/5cf8609e-ce9a-47e9-8956-00dae756e406</Applyto>
      <Location>San Diego, California / Washington, DC / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>3dc40911-47e</externalid>
      <Title>Engineering Lead, Autonomy Software</Title>
      <Description><![CDATA[<p>Lead the development of cutting-edge autonomy software that enables unmanned systems to operate intelligently in complex, real-world environments.</p>
<p>In this role, you will guide multidisciplinary teams to design, build, and deploy high-performance autonomy solutions, from algorithm development to system integration and field testing. Working at the intersection of robotics, aerospace, and software engineering, you’ll drive mission-critical capabilities from concept to flight, delivering resilient, scalable systems that perform in dynamic and contested conditions.</p>
<p>Responsibilities:</p>
<ul>
<li><p>Lead teams across autonomy, integration, and testing by aligning technical efforts, resolving cross-functional challenges, and driving mission-focused execution. Balance hands-on technical oversight with performance optimization, innovation, and clear stakeholder communication.</p>
</li>
<li><p>Design tactical autonomy algorithms to enable unmanned aircraft to perform complex missions across air, land, and sea domains with minimal human supervision.</p>
</li>
<li><p>Develop high-performance software modules that incorporate planning, decision-making, and behavior execution strategies for dynamic and adversarial environments.</p>
</li>
<li><p>Implement and test behavior architectures that enable multi-agent coordination, target engagement, reconnaissance, and survivability in contested scenarios.</p>
</li>
<li><p>Collaborate with cross-functional teams including perception, planning, simulation, hardware, and flight test to ensure seamless integration of autonomy solutions on real-world platforms.</p>
</li>
<li><p>Deploy autonomy capabilities to real platforms and participate in field tests and flight demos, validating performance in operationally relevant conditions.</p>
</li>
</ul>
<p>Required qualifications:</p>
<ul>
<li><p>A tertiary level qualification in Computer Science, Mechatronics, Software Engineering, Robotics or a related field</p>
</li>
<li><p>Significant professional experience in robotics, autonomy, perception or aerospace systems</p>
</li>
<li><p>Strong experience in modern C++</p>
</li>
<li><p>Experience leading teams to deliver engineering projects</p>
</li>
<li><p>Significant experience in building and delivering reliable software systems, ideally in fast-paced environments</p>
</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li><p>Prior experience with uncrewed systems, especially in the air domain</p>
</li>
<li><p>Defence industry experience</p>
</li>
<li><p>Significant experience in one or more of the following domains:</p>
<ul>
<li><p>State Estimation</p>
</li>
<li><p>Real-Time Systems</p>
</li>
<li><p>Guidance, Navigation and Control</p>
</li>
<li><p>Path Planning</p>
</li>
</ul>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C++, Robotics, Autonomy, Perception, Aerospace Systems, Software Engineering, Team Leadership, Project Management, Uncrewed Systems, Defence Industry, State Estimation, Real-Time Systems, Guidance, Navigation and Control, Path Planning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015. It develops intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/28ebb068-b00d-4a34-a4d7-a471c84e09ff</Applyto>
      <Location>Melbourne</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>bed4759c-578</externalid>
      <Title>Staff Engineer, Software - Perception</Title>
      <Description><![CDATA[<p>This position is ideal for an individual who thrives on building advanced perception systems that enable autonomous aircraft to operate effectively in complex and contested environments.</p>
<p>A successful candidate will be skilled in developing real-time object detection, sensor fusion, and state estimation algorithms using data from diverse mission sensors such as EO/IR cameras, radars, and IMUs. The role requires strong algorithmic thinking, deep familiarity with airborne sensing systems, and the ability to deliver performant software in simulation and real-world conditions.</p>
<p>Shield AI is committed to developing cutting-edge autonomy for unmanned aircraft operating across all Department of Defense (DoD) domains, including air, sea, and land. Our Perception Engineers are instrumental in creating the situational awareness that underpins autonomy, ensuring our systems understand and respond to the operational environment with speed, precision, and resilience.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop advanced perception algorithms: Design and implement robust algorithms for object detection, classification, and multi-target tracking across diverse sensor modalities.</li>
<li>Implement sensor fusion frameworks: Integrate data from vision systems, radars, and other mission sensors using probabilistic and deterministic fusion techniques to generate accurate situational awareness.</li>
<li>Develop state estimation capabilities: Design and refine algorithms for localization and pose estimation using IMU, GPS, vision, and other onboard sensing inputs to enable stable and accurate navigation.</li>
<li>Analyze and utilize sensor ICDs: Interpret interface control documents (ICDs) and technical specifications for aircraft-mounted sensors to ensure correct data handling, interpretation, and synchronization.</li>
<li>Optimize perception performance: Tune and evaluate perception pipelines for performance, robustness, and real-time efficiency in both simulation and real-world environments.</li>
<li>Support autonomy integration: Work closely with autonomy, systems, and integration teams to interface perception outputs with planning, behaviors, and decision-making modules.</li>
<li>Validate in simulated and operational settings: Leverage synthetic data, simulation environments, and field testing to validate algorithm accuracy and mission readiness.</li>
<li>Collaborate with hardware and sensor teams: Ensure seamless integration of perception algorithms with onboard compute platforms and diverse sensor payloads.</li>
<li>Drive innovation in airborne sensing: Contribute novel ideas and state-of-the-art techniques to advance real-time perception capabilities for unmanned aircraft operating in complex, GPS-denied, or contested environments.</li>
<li>Travel requirement: Members of this team typically travel around 10-15% of the year (to different office locations, customer sites, and flight integration events).</li>
</ul>
<p><strong>Required Qualifications:</strong></p>
<ul>
<li>BS/MS in Computer Science, Electrical Engineering, Mechanical Engineering, Aerospace Engineering, and/or similar degree, or equivalent practical experience</li>
<li>Typically requires a minimum of 7 years of related experience with a Bachelor’s degree; or 5 years and a Master’s degree; or 4 years with a PhD; or equivalent work experience</li>
<li>Background in implementing algorithms such as Kalman Filters, multi-target tracking, or deep learning-based detection models</li>
<li>Familiarity with fusing data from radar, EO/IR cameras, or other sensors using probabilistic or rule-based approaches</li>
<li>Familiarity with SLAM, visual-inertial odometry, or sensor-fused localization approaches in real-time applications</li>
<li>Ability to interpret and work with Interface Control Documents (ICDs) and hardware integration specs</li>
<li>Proficiency with version control, debugging, and test-driven development in cross-functional teams</li>
<li>Ability to obtain a SECRET clearance</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Hands-on integration or algorithm development with airborne sensing systems</li>
<li>Experience with ML frameworks such as PyTorch or TensorFlow, particularly for vision-based object detection or classification tasks</li>
<li>Experience deploying perception software on SWaP-constrained platforms</li>
<li>Familiarity with validating perception systems during flight test events or operational environments</li>
<li>Understanding of sensing challenges in denied or degraded conditions</li>
<li>Exposure to perception applications across air, maritime, and ground platforms</li>
</ul>
<p>$182,720 - $274,080 a year</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$182,720 - $274,080 a year</Salaryrange>
      <Skills>real-time object detection, sensor fusion, state estimation algorithms, EO/IR cameras, radars, IMUs, Kalman Filters, multi-target tracking, deep learning-based detection models, probabilistic or rule-based approaches, SLAM, visual-inertial odometry, sensor-fused localization, Interface Control Documents, hardware integration specs, version control, debugging, test-driven development, hands-on integration or algorithm development with airborne sensing systems, ML frameworks such as PyTorch or TensorFlow, vision-based object detection or classification tasks, SWaP-constrained platforms, validating perception systems during flight test events or operational environments, sensing challenges in denied or degraded conditions, perception applications across air, maritime, and ground platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, developing intelligent systems to protect service members and civilians.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/8739c509-b6ea-4640-bcc1-c8b5b1de31b2</Applyto>
      <Location>San Diego, California / Washington, DC / Boston, MA</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>f99e744e-649</externalid>
      <Title>Senior Quantum Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior Quantum Engineer with expertise in superconducting qubits to join our hardware R&amp;D team.</p>
<p>This role is for someone who wants to work on technically difficult problems that directly shape a quantum hardware platform. You will not be optimising within a fixed roadmap; you will help define it.</p>
<p>You will design, execute, and interpret experiments that push the performance and scalability of superconducting qubit systems. You will develop new gate schemes, explore advanced control protocols, and test architectural ideas that can influence platform-level decisions.</p>
<p>We operate in direct competition with the best-funded and most established teams in the world. We are looking for someone who finds that motivating.</p>
<p>You will have a high degree of autonomy and ownership while working in a collaborative environment. If you have a strong technical hypothesis, you will be expected to test it rigorously and defend it with data. Strong ideas move quickly through experimental validation.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop experimental programs on superconducting qubit devices.</li>
<li>Develop and optimise high-fidelity quantum gates.</li>
<li>Design and test novel control and coupling strategies.</li>
<li>Identify fundamental performance bottlenecks and isolate their physical origin.</li>
<li>Analyse data with scientific rigour to extract insight, not just metrics.</li>
<li>Collaborate across device, fabrication, and control teams to translate ideas into hardware progress.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>PhD in Physics, Applied Physics, Electrical Engineering, or a related field.</li>
<li>Extensive hands-on experience with superconducting qubits.</li>
<li>Strong background in gate design, pulse engineering, and decoherence mechanisms.</li>
<li>Demonstrated ability to independently lead complex experimental efforts from concept to validated result.</li>
<li>Intellectual independence, technical courage, and the ability to defend ideas with evidence.</li>
<li>Motivation to compete at the highest technical level in the field.</li>
</ul>
<p>This position is best suited for someone who wants visible impact, real ownership, and the opportunity to help shape the direction of a quantum hardware platform, not just contribute to a small piece of it.</p>
<p><strong>Additional Information</strong></p>
<p>As engineering leaders, we value diversity and are committed to building a culture of inclusion to attract and engage innovative thinkers. Our technology, meant to serve all of humanity, cannot succeed if those who build it do not mirror the diversity of the communities we serve. Applications from women, minorities, and other under-represented groups are encouraged.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>superconducting qubits, gate design, pulse engineering, decoherence mechanisms, experimental programming, high-fidelity quantum gates, novel control and coupling strategies, data analysis</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Rigetti Computing</Employername>
      <Employerlogo>https://logos.yubhub.co/rigetti.com.png</Employerlogo>
      <Employerdescription>Rigetti Computing is a pioneer in full-stack quantum computing, operating quantum computers over the cloud since 2017 and serving global enterprise, government, and research clients.</Employerdescription>
      <Employerwebsite>https://www.rigetti.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/rigetti/288d5644-744a-4989-b129-d742b0c10e1d</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4d844e9d-5d6</externalid>
      <Title>Senior Quantum Engineer, Readout</Title>
      <Description><![CDATA[<p>As a Senior Quantum Engineer, Readout, you will optimise superconducting qubit readout and reset on Rigetti&#39;s quantum processors and fault-tolerant prototypes. You will lead a range of experimental efforts to characterise and improve measurement protocols, signal chains, amplifiers, and device layouts.</p>
<p>In collaboration with an interdisciplinary group of engineers in theory, hardware, and chip design, you will advance the speed and fidelity of readout and reset in Rigetti Computing&#39;s quantum computing systems to fulfil its ambitious roadmap toward fault tolerance.</p>
<p>Key responsibilities include leading superconducting qubit measurements to characterise dispersive readout of quantum circuits, developing novel measurement techniques and pulse optimisations for readout speed or fidelity, and collaborating with theory and simulation teams to build or validate models that describe readout performance.</p>
<p>You will also specify improvements to the quantum chip layout, signal lines, and electronics, work with the hardware and device design teams to build them, and develop protocols to accelerate circuit execution and enable error-correcting codes, including reset, mid-circuit measurement, and ancilla readout.</p>
<p>A Ph.D. in Physics, Applied Physics, Electrical Engineering or a related field, plus 2+ years of industry and/or postdoctoral experience, is required. Deep expertise in circuit quantum electrodynamics and dispersive readout in particular is essential, as is demonstrated ability to perform measurements on quantum systems and explain the results through theory and/or simulation.</p>
<p>Experience with software development in an industry setting, in languages such as Python, C, C++, Rust, etc., is also necessary, as is experience with low-level pulse optimisation of quantum gates or readout. Ability to excel in a collaborative environment and excellent communication skills are also required.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>circuit quantum electrodynamics, dispersive readout, superconducting qubits, software development, Python, C, C++, Rust, low-level pulse optimisation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Rigetti Computing</Employername>
      <Employerlogo>https://logos.yubhub.co/rigetti.com.png</Employerlogo>
      <Employerdescription>Rigetti Computing is a pioneer in full-stack quantum computing, operating quantum computers over the cloud since 2017 and serving global enterprise, government, and research clients.</Employerdescription>
      <Employerwebsite>https://www.rigetti.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/rigetti/2a4c64d5-92c7-426c-89d0-7197781ef086</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>6f62ae0c-4f7</externalid>
      <Title>Nutrition Assistant</Title>
      <Description><![CDATA[<p>Become part of an inclusive organisation with a mission to improve the health and well-being of unique communities. As a Nutrition Assistant, you will maintain patient safety by ensuring correct menu item selection in Computrition in compliance with physician prescribed Medical Nutrition Therapy (MNT).</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Assisting patients with menu selections, when needed, as part of AYS, assuring compliance with physician-prescribed MNT according to the individualised needs of the patient with full consideration of patient safety.</li>
<li>Maintaining the patient database in Epic and Computrition by correctly inputting physician MNT orders, patient allergy information, and patient special services, such as appropriateness to order through the AYS system.</li>
<li>Answering the telephone, taking messages, and relaying information as needed, including answering the patient menu line.</li>
<li>Troubleshooting patient meal problems, including referring patients in need of assistance to a registered dietitian.</li>
<li>Maintaining accurate reports such as patients not appropriate to order (NAPS), patients who have missed two meals, and required meal counts.</li>
<li>Ensuring correct menu item selection for MNT on AYS orders according to the individualised needs of the patient.</li>
<li>Conducting Performance Improvement monitoring, measuring, and tracking results using specified indicators.</li>
<li>Properly preparing patient nourishments and enteral nutrition products according to HACCP guidelines.</li>
<li>Stocking office supplies as needed and maintaining organisation of the work area.</li>
</ul>
]]></Description>
      <Jobtype>part-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$18.84 - $26.77 per hour (Hiring Range)</Salaryrange>
      <Skills>basic computer knowledge and skills, customer service experience, language skills: ability to read and comprehend simple instructions, short correspondence, and memos, mathematical skills: ability to work with and apply mathematical concepts such as fractions, percentages, ratios, and proportions to practical situations, reasoning ability: ability to apply common sense understanding to carry out instructions furnished in written, oral, or diagram form</Skills>
      <Category>Healthcare</Category>
      <Industry>Healthcare</Industry>
      <Employername>UNC Health</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.unchealthcare.org.png</Employerlogo>
      <Employerdescription>A healthcare organisation with over 40,000 teammates, improving the health and well-being of unique communities.</Employerdescription>
      <Employerwebsite>https://jobs.unchealthcare.org</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.unchealthcare.org/jobs/17621809-nutrition-assistant</Applyto>
      <Location>Raleigh</Location>
      <Country></Country>
      <Postedate>2026-04-16</Postedate>
    </job>
    <job>
      <externalid>471316cf-932</externalid>
      <Title>Analog Layout, Staff Engineer</Title>
      <Description><![CDATA[<p>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products.</p>
<p>These engineers play a crucial role in advancing technology and enabling innovations in various industries.</p>
<p>We are hiring a Staff Engineer to lead the design and development of cutting-edge DDR/HBM PHY layout IPs for next-generation technologies.</p>
<p>Key Responsibilities:</p>
<ul>
<li><p>Lead the design and development of cutting-edge DDR/HBM PHY layout IPs for next-generation technologies.</p>
</li>
<li><p>Hands-on execution of layout development, ensuring precision and adherence to industry standards.</p>
</li>
<li><p>Mentor and support junior engineers, fostering technical growth and knowledge sharing within the team.</p>
</li>
<li><p>Estimating project efforts, planning schedules, and executing projects in cross-functional settings.</p>
</li>
<li><p>Collaborating with teams to support critical layout, floorplanning requirements, layout reviews, and quality checks.</p>
</li>
<li><p>Managing the release process, ensuring timely delivery and consistent quality of layout deliverables.</p>
</li>
</ul>
<p>Requirements:</p>
<ul>
<li><p>BTech/MTech degree in Electrical Engineering, Electronics, or related field.</p>
</li>
<li><p>5+ years of relevant experience in layout design for CMOS, FinFET, GAA process technologies (7nm and below).</p>
</li>
<li><p>Expertise in layout matching techniques, ESD, latch-up, PERC, EMIR, DFM, LEF generation, bond-pad layout, IO frame and pitch requirements.</p>
</li>
<li><p>Strong understanding of floorplan techniques and deep submicron effects.</p>
</li>
<li><p>Proven ability to lead projects and deliver best product quality within tight timelines.</p>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li><p>Collaborative and team-oriented, with a commitment to inclusion and diversity.</p>
</li>
<li><p>Detail-oriented, with strong problem-solving and analytical skills.</p>
</li>
<li><p>Effective communicator, both written and verbal, with excellent interpersonal abilities.</p>
</li>
<li><p>Adaptable and eager to learn, embracing new technologies and methodologies.</p>
</li>
<li><p>Empathetic mentor, fostering accountability, ownership, and technical growth in others.</p>
</li>
</ul>
<p>Benefits:</p>
<ul>
<li><p>Comprehensive medical and healthcare plans that work for you and your family.</p>
</li>
<li><p>In addition to company holidays, we have ETO and FTO Programs.</p>
</li>
<li><p>Maternity and paternity leave, parenting resources, adoption and surrogacy assistance, and more.</p>
</li>
<li><p>Purchase Synopsys common stock at a 15% discount, with a 24-month look-back.</p>
</li>
<li><p>Save for your future with our retirement plans that vary by region and country.</p>
</li>
<li><p>Competitive salaries.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>layout design, CMOS, FinFET, GAA process technologies, layout matching techniques, ESD, latch-up, PERC, EMIR, DFM, LEF generation, bond-pad layout, IO frame and pitch requirements, collaborative and team-oriented, detail-oriented, effective communicator, adaptable and eager to learn, empathetic mentor</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and intellectual property (IP) used in the design and manufacturing of semiconductors.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/analog-layout-staff-engineer/44408/92693931728</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-05</Postedate>
    </job>
    <job>
      <externalid>41cabece-785</externalid>
      <Title>Layout Design, Sr Supervisor</Title>
      <Description><![CDATA[<p>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products.</p>
<p>These engineers play a crucial role in advancing technology and enabling innovations in various industries.</p>
<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>
<p>You are a visionary leader and seasoned layout design professional, passionate about advancing the frontiers of semiconductor technology. With over eight years of hands-on experience, you thrive in dynamic environments where innovation and technical excellence are paramount.</p>
<p>You possess a deep understanding of deep submicron effects, advanced floorplanning techniques, and process technologies like CMOS, FinFET, and GAA at 7nm and below. Your expertise extends to layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, bond-pad layout, and IO frame and pitch requirements.</p>
<p>You are adept at leading multi-disciplinary teams, creating an environment of accountability, ownership, and growth, while mentoring junior engineers and empowering senior team members to excel.</p>
<p>You value diversity and inclusion, fostering a culture where every voice is heard and respected. Your collaborative approach ensures seamless cross-functional coordination, and you have a knack for translating complex technical requirements into actionable project plans.</p>
<p>Your communication skills, both written and verbal, enable you to engage effectively with stakeholders at all levels. You are motivated by the opportunity to contribute to high-impact projects, drive innovation in DDR/HBM PHY IP layout, and deliver differentiated products that shape the industry.</p>
<p>If you are ready to lead, inspire, and make a lasting impact, Synopsys is the place for you.</p>
<p>In this role, you will be:</p>
<ul>
<li>Leading the development of next-generation DDR/HBM IP layouts, driving technical innovation and quality excellence.</li>
<li>Mentoring and managing a team of layout engineers, fostering growth and maximizing individual and team potential.</li>
<li>Developing and maintaining project schedules, ensuring timely delivery while balancing technical and resource constraints.</li>
<li>Collaborating cross-functionally with design, verification, and IP teams to align on project requirements and execution.</li>
<li>Providing subject matter expertise in high-speed DDR/HBM IP layout, including floorplanning, layout reviews, and quality checks.</li>
<li>Executing layout matching techniques, ESD, latch-up, PERC, EMIR, DFM, LEF generation, and IO requirement analysis.</li>
<li>Supporting layout automation through scripting and tool enhancement, optimizing efficiency and productivity.</li>
<li>Acting as an advisor to resolve project challenges and guide teams towards innovative solutions.</li>
<li>Accelerating the integration of advanced capabilities into SoCs, helping customers achieve unique performance, power, and size targets.</li>
<li>Reducing time-to-market and risk for differentiated products through robust layout design and technical leadership.</li>
<li>Driving continuous improvement in layout methodologies and quality standards across cross-functional teams.</li>
<li>Empowering your team to deliver high-performance DDR/HBM PHY IPs that set industry benchmarks.</li>
<li>Fostering a collaborative, inclusive work environment that values innovation, accountability, and diversity.</li>
<li>Contributing to Synopsys’ reputation as the provider of the world’s broadest portfolio of silicon IP.</li>
<li>Shaping the future of chip design and verification technologies through your expertise and leadership.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>deep submicron effects, advanced floorplanning techniques, CMOS, FinFET, GAA, layout matching, ESD, latch-up, PERC, EMIR, DFM, LEF generation, bond-pad layout, IO frame and pitch requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and services used to design, verify, and manufacture electronic systems and semiconductor devices.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/bengaluru/layout-design-sr-supervisor/44408/93269033008</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-05</Postedate>
    </job>
    <job>
      <externalid>4d4ff9ad-77c</externalid>
      <Title>AI Quality Lead</Title>
      <Description><![CDATA[<p>At Electronic Arts, we&#39;re looking for an AI Quality Lead to join our Self-Service Fan Care team. As a key member of our team, you will be responsible for guiding the quality of AI-powered support for millions of players. You will report to the Director, Self Service Product and work with us to shape the future of fan care by building measurable, responsible, and impactful AI experiences.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Establish and lead the AI Quality program from inception, defining business-aligned quality standards, user experience and target outcome metrics across AI-powered self-service capabilities.</li>
<li>Accountable for ongoing quality performance across fan-facing AI experiences, ensuring measurable improvements in accuracy, containment, risk mitigation, and fan-perceived effort and satisfaction.</li>
<li>Define and enforce AI launch readiness criteria, working with partner teams on launch criteria, validating risk scenarios, and certifying capabilities before release.</li>
<li>Lead post-launch AI agent governance, including performance monitoring, compliance with security and privacy standards and policies, and continuous quality improvement cycles tied to measurable outcomes.</li>
<li>Partner with implementation and stakeholder teams to provide input on product direction to prevent downstream quality risks.</li>
<li>Evaluate cost-to-quality tradeoffs in containment strategies, ensuring impacts of optimizations are evaluated and tracked.</li>
</ul>
<p><strong>Required Qualifications</strong></p>
<ul>
<li>5+ years of program management experience in a related industry, with direct responsibility for quality programs or AI/ML-powered products.</li>
<li>Experience defining and operationalizing measurable AI quality KPIs for large-scale, production environments.</li>
<li>Experience influencing launch decisions and enforcing governance standards with technical and business partners.</li>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or equivalent practical experience.</li>
</ul>
<p><strong>Benefits</strong></p>
<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI Quality, Program Management, Quality Standards, User Experience, Target Outcome Metrics, AI Launch Readiness Criteria, Risk Mitigation, Fan-Perceived Effort and Satisfaction, Cost-to-Quality Tradeoffs, Containment Strategies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of iconic franchises and a global presence.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/AI-Quality-Lead/212054</Applyto>
      <Location>Galway</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>2f942bce-976</externalid>
      <Title>Analog Design, Sr Engineer</Title>
      <Description><![CDATA[<p>Our Hardware Engineers at Synopsys are responsible for designing and developing cutting-edge semiconductor solutions. They work on intricate tasks such as chip architecture, circuit design, and verification to ensure the efficiency and reliability of semiconductor products. These engineers play a crucial role in advancing technology and enabling innovations in various industries.</p>
<p>At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content.</p>
<p>You Are:</p>
<p>You are a passionate and inventive analog circuit design engineer with a deep-rooted curiosity for emerging technologies and industry-leading semiconductor processes. You thrive in dynamic, collaborative environments and are recognised for your ability to balance technical depth with practical implementation.</p>
<p>Responsibilities:</p>
<ul>
<li>Designing and developing best-in-class ESD and Latch-Up robust solutions for advanced interface IPs using cutting-edge FinFET, FDSOI, and BCD processes.</li>
<li>Owning the full lifecycle of ESD structures—from schematic design, simulation, and layout to silicon qualification and production release.</li>
<li>Leading and executing I/O development, including I/O ring design, review, and optimisation for performance and robustness.</li>
<li>Developing and qualifying Interface Testchips, ensuring comprehensive ESD and Latch-Up validation to meet global customer requirements.</li>
<li>Running ESD simulations by building detailed ESD networks and performing advanced analyses to ensure design integrity.</li>
<li>Applying foundry-provided PERC (Physical Verification Rule Check) rules and using PERC check tools to validate compliance and enhance design quality.</li>
<li>Collaborating closely with foundry partners, design, and layout teams to ensure timely and effective integration of ESD and LU solutions.</li>
</ul>
<p>The Impact You Will Have:</p>
<ul>
<li>Elevating the reliability and performance of Synopsys&#39; interface IPs, directly influencing the success of global semiconductor customers.</li>
<li>Driving innovation in analog circuit design for next-generation silicon technologies, helping Synopsys maintain its leadership in the industry.</li>
<li>Reducing field failures and increasing product longevity by delivering robust ESD and Latch-Up protection solutions.</li>
<li>Accelerating time-to-market for customer products through efficient and high-quality design practices.</li>
<li>Fostering a culture of technical excellence and continuous improvement within the analog design team.</li>
<li>Building strong partnerships with foundries and cross-functional teams, enhancing collaboration and knowledge sharing across projects.</li>
</ul>
<p>What You’ll Need:</p>
<ul>
<li>Proven experience in analog circuit design, with a focus on I/O development and ESD/LU robustness.</li>
<li>Hands-on expertise with FinFET, FDSOI, and BCD process technologies from leading foundries.</li>
<li>Strong background in ESD and Latch-Up qualification methodologies, including testchip development and validation.</li>
<li>Proficiency in ESD simulation, ESD network construction, and use of industry-standard tools.</li>
<li>Comprehensive understanding of PERC rules and practical experience with PERC verification tools.</li>
<li>Experience working with cross-functional teams including foundry, design, and layout groups.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>An analytical thinker with excellent problem-solving skills and keen attention to detail.</li>
<li>A collaborative team player who values diversity, inclusion, and open communication.</li>
<li>A proactive learner who stays current with industry trends and emerging technologies.</li>
<li>An effective communicator, able to translate complex technical information to diverse audiences.</li>
<li>A results-driven individual who is adaptable, resilient, and comfortable with fast-paced, high-impact work.</li>
</ul>
<p>The Team You’ll Be A Part Of:</p>
<p>You’ll join a passionate, multidisciplinary team of analog and mixed-signal engineers dedicated to advancing Synopsys’ interface IP portfolio. The team is focused on delivering robust, innovative, and high-quality solutions that meet the rigorous demands of a global customer base. Collaboration, continuous improvement, and technical mentorship are at the core of our culture, ensuring you’ll have the support and opportunities needed to thrive and grow.</p>
<p>Rewards and Benefits:</p>
<p>We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Analog circuit design, ESD and Latch-Up robustness, FinFET, FDSOI, and BCD process technologies, ESD simulation, PERC rules and verification tools, Cross-functional team collaboration, Machine learning, Artificial intelligence, Cloud computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Synopsys</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.synopsys.com.png</Employerlogo>
      <Employerdescription>Synopsys is a leading provider of electronic design automation (EDA) software and services. The company was founded in 1986 and is headquartered in Mountain View, California.</Employerdescription>
      <Employerwebsite>https://careers.synopsys.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.synopsys.com/job/noida/analog-design-sr-engineer/44408/92446615456</Applyto>
      <Location>Noida</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>a8eb2e15-0bb</externalid>
      <Title>Senior Business Systems Analyst, Finance Systems</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>We are seeking an experienced Senior Business Systems Analyst to join our Finance Systems team at Anthropic. In this role, you will serve as the internal functional lead for our Workday Financials implementation, owning the design and configuration of the Financial Data Model (FDM), Chart of Accounts, and dimensional structures that will serve as the source of truth for financial reporting.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li><strong>ERP Core Financials Implementation:</strong> Serve as internal functional lead for Workday Financials implementation, partnering with consultants to drive configuration decisions, validate designs, and ensure business requirements are met</li>
<li><strong>Financial Data Model (FDM) Design:</strong> Own the design and configuration of Chart of Accounts, Worktags, dimensional hierarchies, and Accounting Books that will serve as the source of truth for all financial reporting, ensuring support for both GAAP and Management reporting requirements</li>
<li><strong>Prism Analytics Development:</strong> Develop and maintain Prism/Accounting Center solutions from source analysis and ingestion design through build, testing, cutover, and hypercare, including integration with external data sources like BigQuery and Pigment</li>
<li><strong>Requirements Gathering &amp; Reporting:</strong> Gather business requirements from Finance, Accounting, and FP&amp;A stakeholders, translating them into hands-on development of executive reporting, dashboards, and analytics solutions</li>
<li><strong>Workshop Participation &amp; Solution Design:</strong> Participate in implementation workshops, challenge requirements, and translate business needs into buildable designs and testable acceptance criteria; manage defects and data quality issues throughout the project lifecycle</li>
<li><strong>Cross-Functional Collaboration:</strong> Collaborate with Integrations, Security, and Financials configuration teams to align master data, journals, controls, and performance service level agreements; partner with Data Infrastructure and BizTech teams on system integrations</li>
<li><strong>Cutover &amp; Hypercare Planning:</strong> Prepare cutover plans, data migration strategies, reconciliation frameworks, and hypercare plans; document data lineage, controls, and audit artifacts to support SOX compliance requirements</li>
<li><strong>Platform Expansion &amp; Adoption:</strong> Work closely with engineering teams and business stakeholders to drive ongoing expansion and adoption of the Workday platform, identifying opportunities for process improvement and automation</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 8+ years of experience in finance systems, ERP implementation, or business systems analysis roles, with at least 5 years of hands-on Workday Financials experience</li>
<li>Possess deep expertise in Workday Financial Data Model (FDM), including Chart of Accounts design, Worktags configuration, dimensional hierarchies, and Accounting Books setup</li>
<li>Have strong experience with Workday Prism Analytics, including data modeling, source integration, calculated fields, and report development</li>
<li>Are skilled at translating complex business requirements into technical solutions, bridging the gap between finance stakeholders and technical implementation teams</li>
<li>Have experience with full ERP implementation lifecycles, including requirements gathering, configuration, testing, data migration, cutover planning, and hypercare</li>
<li>Possess strong understanding of financial accounting processes including General Ledger, multi-entity consolidation, intercompany accounting, and management reporting</li>
<li>Have excellent stakeholder management and communication skills, with ability to work effectively with finance leadership, accounting teams, and technical partners</li>
<li>Demonstrate strong analytical and problem-solving skills with attention to detail and commitment to data accuracy and integrity</li>
<li>Are comfortable working in fast-paced, high-growth environments with evolving requirements and tight timelines</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Background in accounting, finance, or CPA certification with understanding of GAAP/IFRS reporting requirements</li>
<li>Experience with Workday Accounting Center for complex journal automation and subledger accounting</li>
<li>Technical proficiency with SQL, Python, or scripting languages for data analysis and integration support</li>
<li>Experience integrating Workday with external data platforms such as BigQuery or cloud data warehouses</li>
<li>Knowledge of SOX compliance requirements and internal controls for financial systems</li>
<li>Experience with EPM/FP&amp;A systems such as Pigment, Anaplan, or Adaptive Planning and their integration with ERP</li>
<li>Prior experience at high-growth technology companies scaling toward IPO readiness</li>
<li>Familiarity with Workday HCM and understanding of HCM-Financials integration points</li>
<li>Experience with data migration tools, ETL processes, and reconciliation frameworks for ERP implementations</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Workday Financials, Financial Data Model (FDM), Chart of Accounts, Worktags, Dimensional Hierarchies, Accounting Books, Prism Analytics, Data Modeling, Source Integration, Calculated Fields, Report Development, ERP Implementation, Requirements Gathering, Configuration, Testing, Data Migration, Cutover Planning, Hypercare, Financial Accounting, General Ledger, Multi-Entity Consolidation, Intercompany Accounting, Management Reporting, Stakeholder Management, Communication, Analytical Skills, Problem-Solving Skills, Data Accuracy, Integrity, Workday Accounting Center, SQL, Python, Scripting Languages, BigQuery, Cloud Data Warehouses, SOX Compliance, Internal Controls, EPM/FP&amp;A Systems, Pigment, Anaplan, Adaptive Planning, ERP Integration, High-Growth Technology Companies, IPO Readiness, Workday HCM, HCM-Financials Integration, Data Migration Tools, ETL Processes, Reconciliation Frameworks</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company is working towards public company readiness.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4991194008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>440a65d7-eed</externalid>
      <Title>Software Engineer - Sensing, Consumer Products</Title>
      <Description><![CDATA[<p><strong>Software Engineer - Sensing, Consumer Products</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Consumer Products</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$325K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Consumer Products Research prototypes the future of computing: we explore new modalities, interaction patterns, and system behaviors, then do the engineering required to make those ideas real in rigorous prototypes. The Neosensing team sits at the intersection of sensing, edge algorithms, and systems engineering. We build the end-to-end software that turns new signals into dependable capabilities—collection tooling and protocols, algorithm integration and evaluation hooks, and on-device loops that stay stable under real-world variability. We care deeply about software quality and iteration speed: clean interfaces, debuggability, observability, and performance under tight device constraints.</p>
<p><strong>About the Role</strong></p>
<p>As a Software Engineer on Consumer Products Research, you’ll sit at the boundary between algorithm development and shippable systems. You’ll work closely with algorithm engineers to translate prototypes into clean interfaces, reliable pipelines, and efficient on-device implementations—with strong attention to performance, observability, and real-world failure modes.</p>
<p>This is a software role first: we’re looking for someone who loves writing great code every day, takes pride in engineering craft, and is comfortable going deep enough into the algorithmic details to make the system work end-to-end.</p>
<p><strong>This role is based in San Francisco, CA. We use a hybrid work model of four days in the office per week and offer relocation assistance to new employees.</strong></p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Build and ship production software for sensing algorithms, translating algorithm prototypes into reliable end-to-end systems.</li>
<li>Implement and own key parts of the Python shipping pipeline (integration surfaces, evaluation hooks, and quality/performance guardrails).</li>
<li>Develop embedded/on-device software in an RTOS environment (e.g., Zephyr) and deploy models to device runtimes and hardware accelerators.</li>
<li>Optimize real-time on-device perception loops (e.g., detection/tracking-style pipelines) for stability, latency, power, and memory constraints.</li>
<li>Create data collection and instrumentation tooling to bring up new sensing modalities and accelerate iteration from prototype → dataset → model → device.</li>
<li>Partner cross-functionally (algorithms, human data, firmware/hardware) to debug, profile, and harden systems against real-world variability.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Love writing great software and want your work to sit close to novel sensing and edge algorithms.</li>
<li>Understand algorithm behavior well enough to integrate, debug, and evaluate it—even if you’re not the primary model inventor.</li>
<li>Have shipped production Python systems and care about clean interfaces, tests, and long-term maintainability.</li>
<li>Enjoy embedded/on-device work and can debug across hardware, firmware, and higher-level application layers.</li>
<li>Care about performance engineering and know how to profile and optimize under tight device constraints.</li>
<li>Take ownership end-to-end and thrive in ambiguous, fast-moving, zero-to-one environments.</li>
</ul>
<p><strong>Bonus:</strong></p>
<ul>
<li>Zephyr (or similar RTOS) experience.</li>
<li>On-device ML deployment (NPU/GPU/DSP) and accelerator-aware profiling/optimization.</li>
<li>Background in multimodal sensing, sensor fusion, or on-device perception.</li>
<li>Experience building data collection systems and human-in-the-loop workflows (protocols, QA, metadata).</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences of our users and the broader community.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$325K • Offers Equity</Salaryrange>
      <Skills>Python, Zephyr (or similar RTOS), embedded/on-device software development, data collection and instrumentation tooling, algorithm integration and evaluation, clean interfaces and long-term maintainability, performance engineering and profiling/optimization, on-device ML deployment (NPU/GPU/DSP), accelerator-aware profiling/optimization, multimodal sensing, sensor fusion, on-device perception, human-in-the-loop workflows (protocols, QA, metadata)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. They push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through their products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f6dfb6c0-44af-4512-af8c-967b8bb12867</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>1ee94df2-ca6</externalid>
      <Title>Senior Research Engineer/Scientist - On-Device Transformer Models</Title>
      <Description><![CDATA[<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Location Type</strong></p>
<p>Hybrid</p>
<p><strong>Department</strong></p>
<p>Consumer Products</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$380K – $445K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Future of Computing Research team is an Applied Research team within the Consumer Products group, focused on developing new methods and models that support our vision for the future of computing as we advance our mission of building AGI that benefits all of humanity.</p>
<p><strong>About the Role</strong></p>
<p>As a Research Engineer/Scientist on the Future of Computing Research team, you will work together with <em>both</em> the best ML researchers in the world and the greatest design talent of our generation to push the frontier of model capabilities.</p>
<p><strong>This role is based in San Francisco, CA. We follow a hybrid model with 4 days a week in the office and offer relocation assistance to new employees.</strong></p>
<p><strong>In this role you will:</strong></p>
<ul>
<li>Train and evaluate multimodal SoTA models along axes that are important to our vision for future devices.</li>
<li>Develop novel architectures that improve model performance when scaling the models themselves is not an option.</li>
<li>Run through the necessary walls to take nascent research capabilities and turn them into capabilities we can build on top of.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have a research background related to developing on-device transformer models.</li>
<li>Love performance optimization and working with GPU kernel engineers (but you do not need CUDA experience yourself).</li>
<li>Do rigorous science (rather than vibes-based work). We need confidence in the experiments we run to move quickly.</li>
<li>Have already spent time in the weeds teaching models to speak and perceive.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$380K – $445K • Offers Equity</Salaryrange>
      <Skills>On-device transformer models, performance optimization, collaboration with GPU kernel engineers, rigorous experimental science, multimodal SoTA model training and evaluation, novel architecture development, speech and perception modeling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. The company was founded in 2015 and has since grown to become a leading player in the field of artificial intelligence.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/7f9eb43b-423e-43e4-9f42-d14b8ba0f234</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>