<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>f931591c-87a</externalid>
      <Title>Research Scientist, Frontier Risk Evaluations</Title>
<Description><![CDATA[<p>As a Research Scientist focused on Frontier Risk Evaluations, you will design and create evaluation measures, harnesses, and datasets for measuring the risks posed by frontier AI systems.</p>
<p>For example, you might do any or all of the following:</p>
<ul>
<li>Design and build harnesses to test AI models and systems (including agents) for dangerous capabilities such as security vulnerability exploitation, CBRN uplift, and other high-risk activities;</li>
<li>Work with government agencies or other labs to collectively scope and design evaluations to measure and mitigate risks posed by advanced AI systems;</li>
<li>Publish evaluation methodologies and write technical reports for policymakers.</li>
</ul>
<p>We are seeking talented researchers to join us in shaping this vision.</p>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance;</li>
<li>Practical experience conducting technical research collaboratively. You should be comfortable building and instrumenting ML pipelines, writing evaluation harnesses, and quickly turning new ideas from the research literature into working prototypes;</li>
<li>A track record of published research in machine learning, particularly in generative AI;</li>
<li>At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development;</li>
<li>Strong written and verbal communication skills to operate in a cross-functional team.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience in crafting evaluations and benchmarks, or a background in data science roles related to LLM technologies;</li>
<li>Experience with red-teaming or adversarial testing of AI systems;</li>
<li>Familiarity with AI safety policy frameworks (e.g., NIST AI RMF, EU AI Act, Korea AI Basic Act).</li>
</ul>
<p>Our research interviews are designed to assess candidates&#39; skills in practical ML prototyping and debugging, their grasp of research concepts, and their fit with our organizational culture. We will not ask any LeetCode-style questions. If you&#39;re excited about advancing AI safety and contributing to our mission, we encourage you to apply, even if your experience doesn&#39;t perfectly align with every requirement.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You&#39;ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may also be eligible for additional benefits such as a commuter stipend.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>machine learning, generative AI, ML pipelines, evaluation harnesses, AI safety policy frameworks, crafting evaluations and benchmarks, data science roles related to LLM technologies, red-teaming or adversarial testing of AI systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4677657005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>769c0070-5b2</externalid>
      <Title>Research Scientist, Agent Robustness</Title>
      <Description><![CDATA[<p>As a Research Scientist working on Agent Robustness, you will work on the fundamental challenges of building AI agents that are safe and aligned with humans.</p>
<p>For example, you might:</p>
<ul>
<li>Research the science of AI agent capabilities with a focus on how they relate to safety, risk factors, and methodologies for benchmarking them;</li>
<li>Design and build harnesses to test AI agents&#39; tendency to take harmful actions when pressured to do so by users or tricked into doing so by elements of their environment;</li>
<li>Design and build exploits and mitigations for new and unique failure modes that arise as AI agents gain affordances like coding, web browsing, and computer use;</li>
<li>Characterize and design mitigations for potential failure modes or broader risks of systems involving multiple interacting AI agents.</li>
</ul>
<p>Ideally you&#39;d have:</p>
<ul>
<li>Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance;</li>
<li>Practical experience conducting technical research collaboratively;</li>
<li>Experience with post-training and RL techniques such as RLHF, DPO, GRPO, and similar approaches;</li>
<li>A track record of published research in machine learning, particularly in generative AI;</li>
<li>At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development;</li>
<li>Strong written and verbal communication skills to operate in a cross-functional team.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Hands-on experience with agent evaluation frameworks such as SWE-bench, WebArena, OSWorld, Inspect, or similar tools;</li>
<li>Experience with red-teaming, prompt injection, or adversarial testing of AI systems.</li>
</ul>
<p>Our research interviews are designed to assess candidates&#39; skills in practical ML prototyping and debugging, their grasp of research concepts, and their fit with our organizational culture. We will not ask any LeetCode-style questions. If you&#39;re excited about advancing AI safety and contributing to our mission, we encourage you to apply, even if your experience doesn&#39;t perfectly align with every requirement.</p>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant. You&#39;ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may also be eligible for additional benefits such as a commuter stipend.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$216,000-$270,000 USD</Salaryrange>
      <Skills>machine learning, generative AI, post-training and RL techniques (RLHF, DPO, GRPO), agent evaluation frameworks (SWE-bench, WebArena, OSWorld, Inspect), red-teaming, prompt injection, adversarial testing of AI systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>216000</Compensationmin>
      <Compensationmax>270000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4675684005</Applyto>
      <Location>San Francisco, CA; New York, NY</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f578503a-af9</externalid>
      <Title>Senior Analyst - Safety Operations (CSE)</Title>
      <Description><![CDATA[<p>We are seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems. Your primary responsibilities will include processing appeals, auditing automations, and labeling use cases in our system. You will also provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance. Additionally, you will collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</p>
<p>To be successful in this role, you will need expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support. You will also need proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</p>
<p>You will also have experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square. You will be able to interpret and apply xAI safety policies effectively, and have strong skills in ethical reasoning and risk assessment. You will also have a strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</p>
<p>In addition, you will have strong communication, interpersonal, analytical, and ethical decision-making skills. You will be committed to continuous improvement of processes to prioritize safety and risk mitigation. You will also have expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</p>
<p>Preferred qualifications include experience working in Trust and Safety for a social media company, leveraging AI or other automation tools. You will also have experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms. Additionally, you will have expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</p>
<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$43.75 - $62.50 USD hourly</Salaryrange>
      <Skills>Improving Large Language Models (LLMs), CSAM/CSE detection and prevention, online safety and harm reduction, ethical reasoning and risk assessment, data analysis, Trust and Safety for a social media company, collaborating with child safety organizations, red-teaming and adversarial testing of Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5097904007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2f818897-404</externalid>
      <Title>Senior Analyst - Safety Operations (CSE)</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>xAI is seeking a Senior Analyst - Safety Operations (CSE) to join our team. As a Senior Analyst, you will play a critical role in ensuring the safety and integrity of our AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Process appeals, audit automations, and properly label use cases in the system.</li>
<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>
<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>
<li>Collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support, and the ability to propose solutions to increase the security and safety of our platform.</li>
<li>Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.</li>
<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>
<li>Ability to interpret and apply xAI safety policies effectively.</li>
<li>Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>
<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>
<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>
<li>Commitment to continuous improvement of processes to prioritize safety and risk mitigation.</li>
<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience working in Trust and Safety for a social media company, leveraging AI or other automation tools.</li>
<li>Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.</li>
<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</li>
</ul>
<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Large Language Models (LLMs), Child Sexual Abuse Material (CSAM), Child Sexual Exploitation (CSE), Online safety, Risk assessment, Ethical reasoning, Data analysis, Automation tools, Social media, Generative AI, Red-teaming, Adversarial testing, Trust and Safety, Child safety organizations, Specialized detection tools, Classifier development</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5097907007</Applyto>
      <Location>Bastrop, TX</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f1981394-2ef</externalid>
      <Title>Senior Analyst, Safety Operations</Title>
      <Description><![CDATA[<p>About xAI</p>
<p>xAI&#39;s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.</p>
<p><strong>RESPONSIBILITIES:</strong></p>
<ul>
<li>Process appeals, audit automations, and label use cases in the system.</li>
<li>Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>
<li>Support the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>
<li>Collaborate with team members to provide feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, as well as align Grok with our rules enforcement.</li>
</ul>
<p><strong>BASIC QUALIFICATIONS:</strong></p>
<ul>
<li>Expertise in improving Large Language Models (LLMs) to maximize efficiencies in enforcement and support, and the ability to propose solutions to increase the security and safety of our platform.</li>
<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>
<li>Ability to interpret and apply xAI safety policies effectively.</li>
<li>Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.</li>
<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.</li>
<li>Strong communication, interpersonal, analytical, and ethical decision-making skills.</li>
<li>Commitment to continuous improvement of processes to prioritize safety and risk mitigation.</li>
<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>
</ul>
<p><strong>PREFERRED SKILLS AND EXPERIENCE:</strong></p>
<ul>
<li>Experience working in Trust and Safety for a social media company, leveraging AI or other automation tools.</li>
<li>Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.</li>
</ul>
<p>This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.</p>
<p><strong>COMPENSATION AND BENEFITS:</strong></p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Large Language Models (LLMs), online safety, risk assessment, ethical reasoning, data analysis, enforcement effectiveness, platform safety, red-teaming, adversarial testing, Trust and Safety, AI or other automation tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5093554007</Applyto>
      <Location>Bastrop, TX</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1811a69-c2f</externalid>
      <Title>Manager, Safety Operations</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>xAI is seeking a Manager, Safety Operations to oversee the processing of appeals and ensure proper labeling of use cases in the system.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Guide the team&#39;s use of proprietary software to provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.</li>
<li>Ensure the delivery of high-quality curated data that reinforces xAI&#39;s rules and ethical alignment.</li>
<li>Mentor team members, conduct performance management and calibration, drive feedback on tasks that improve AI&#39;s defenses to detect illegal and unethical behavior, identify emerging abuse vectors, and implement process improvements and automations.</li>
<li>Align Grok with our rules enforcement while collaborating cross-functionally to strengthen overall safety operations.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven leadership and people management experience in AI-driven operations, with a track record of developing high-performing teams.</li>
<li>Expertise in improving Large Language Models (LLMs) to maximize efficiencies in enforcement and support, and the ability to propose and implement solutions to increase the security and safety of our platform.</li>
<li>Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.</li>
<li>Ability to interpret, apply, and train teams on xAI safety policies effectively.</li>
<li>Proficiency in analyzing complex scenarios and operational metrics, with strong skills in ethical reasoning, risk assessment, and team performance optimization.</li>
<li>Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions, escalations, and talent development.</li>
<li>Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills.</li>
<li>Quality assurance: ability to hold the team to our high standard for quality work, managing performance as needed.</li>
<li>Commitment to continuous improvement of processes, people, and operations to prioritize safety and risk mitigation.</li>
<li>Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.</li>
</ul>
<p><strong>Preferred Skills and Experience</strong></p>
<ul>
<li>Experience managing teams in Trust and Safety for a social media company, leveraging AI or other automation tools.</li>
<li>Expertise in leading red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems, team processes, and platform robustness.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>leadership and people management in AI-driven operations, improving Large Language Models (LLMs), online safety and harm reduction, xAI safety policy training, operational metrics analysis, ethical reasoning and risk assessment, quality assurance, data analysis, Trust and Safety team management, red-teaming and adversarial testing of Large Language Models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5090695007</Applyto>
      <Location>Bastrop, TX</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>