{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/research-engineering"},"x-facet":{"type":"skill","slug":"research-engineering","display":"Research Engineering","count":13},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9a42f26c-511"},"title":"Evals Engineer, Applied AI","description":"<p>We are seeking a technically rigorous and driven AI Research Engineer to join our Enterprise Evaluations team. This high-impact role is critical to our mission of delivering the industry&#39;s leading GenAI Evaluation Suite.</p>\n<p>As a hands-on contributor to the core systems that ensure the safety, reliability, and continuous improvement of LLM-powered workflows and agents for the enterprise, you will partner with Scale&#39;s Operations team and enterprise customers to translate ambiguity into structured evaluation data. This involves guiding the creation and maintenance of gold-standard human-rated datasets and expert rubrics that anchor AI evaluation systems.</p>\n<p>Your responsibilities will also include analysing feedback and collected data to identify patterns, refine evaluation frameworks, and establish iterative improvement loops that enhance the quality and relevance of human-curated assessments. 
You will design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems, including creating models that critique, grade, and explain agent outputs.</p>\n<p>To succeed in this role, you will need a strong foundational knowledge of large language models, a passion for tackling complex evaluation challenges, and the ability to thrive in a dynamic, fast-paced research environment. You should be able to think outside the box, stay current with the latest literature in AI evaluation, and be passionate about integrating novel research ideas into our workflows to build best-in-class evaluation systems.</p>\n<p>In addition to your technical expertise, you will need excellent communication and collaboration skills, as you will work closely with cross-functional teams to drive project success.</p>\n<p>If you are a motivated and detail-oriented individual with a passion for AI research and evaluation, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9a42f26c-511","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale AI","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4629589005","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Python","PyTorch","TensorFlow","Large Language Models","Generative AI","Machine Learning","Applied Research","Evaluation Infrastructure"],"x-skills-preferred":["Advanced degree in Computer Science, Machine Learning, or a related quantitative field","Published research in leading ML or AI conferences","Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems","Experience 
collaborating with operations or external teams to define high-quality human annotator guidelines","Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis"],"datePosted":"2026-04-18T16:01:26.736Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, PyTorch, TensorFlow, Large Language Models, Generative AI, Machine Learning, Applied Research, Evaluation Infrastructure, Advanced degree in Computer Science, Machine Learning, or a related quantitative field, Published research in leading ML or AI conferences, Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems, Experience collaborating with operations or external teams to define high-quality human annotator guidelines, Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fab21c7e-6bf"},"title":"Research Engineer / Scientist, Alignment Science - London","description":"<p>About the role:</p>\n<p>You will contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems. 
As a Research Engineer on Alignment Science, you&#39;ll work on creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Conduct research on AI control and alignment stress-testing</li>\n<li>Develop and implement new techniques for ensuring AI safety</li>\n<li>Collaborate with other teams, including Interpretability, Fine-Tuning, and the Frontier Red Team</li>\n<li>Test and evaluate the effectiveness of AI safety techniques</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Significant software, ML, or research engineering experience</li>\n<li>Familiarity with technical AI safety research</li>\n<li>Experience contributing to empirical AI research projects</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience authoring research papers in machine learning, NLP, or AI safety</li>\n<li>Experience with LLMs</li>\n<li>Experience with reinforcement learning</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n</ul>\n<p>Note:</p>\n<p>This role requires all candidates to be based at least 25% in London and travel to San Francisco occasionally.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fab21c7e-6bf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4610158008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£260,000-£370,000 GBP","x-skills-required":["software engineering","machine learning","research engineering","AI safety","technical AI safety 
research"],"x-skills-preferred":["research paper authoring","LLMs","reinforcement learning"],"datePosted":"2026-04-18T15:55:40.617Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, machine learning, research engineering, AI safety, technical AI safety research, research paper authoring, LLMs, reinforcement learning","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":370000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_95c5ac3a-e98"},"title":"Research Engineer / Scientist, Alignment Science","description":"<p>You will contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems. Your work will involve building and running elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems.</p>\n<p>As a Research Engineer on Alignment Science, you&#39;ll collaborate with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team. Your responsibilities will include testing the robustness of our safety techniques, running multi-agent reinforcement learning experiments, building tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks, and contributing ideas, figures, and writing to research papers, blog posts, and talks.</p>\n<p>You may be a good fit if you have significant software, ML, or research engineering experience, have some experience contributing to empirical AI research projects, and have some familiarity with technical AI safety research. 
Strong candidates may also have experience authoring research papers in machine learning, NLP, or AI safety, have experience with LLMs, have experience with reinforcement learning, and have experience with Kubernetes clusters and complex shared codebases.</p>\n<p>The annual compensation range for this role is $350,000-$500,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_95c5ac3a-e98","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4631822008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$500,000 USD","x-skills-required":["machine learning","research engineering","AI safety","Python","Kubernetes","LLMs","reinforcement learning"],"x-skills-preferred":["authoring research papers","NLP","AI safety research"],"datePosted":"2026-04-18T15:43:50.095Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, research engineering, AI safety, Python, Kubernetes, LLMs, reinforcement learning, authoring research papers, NLP, AI safety research","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_53060679-f73"},"title":"Applied Scientist / Research Engineer - EMEA","description":"<p>About Mistral AI</p>\n<p>At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. 
Our technology is designed to integrate seamlessly into daily working life.</p>\n<p>We are a global company with teams distributed between France, USA, UK, Germany, and Singapore. Our comprehensive AI platform meets enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.</p>\n<p>About The Job</p>\n<p>Mistral AI is seeking Applied Scientists and Research Engineers to drive innovative research and collaborate with clients on complex research projects. You will develop SOTA models across different modalities such as text, image, and speech. By developing novel methods and research ideas, you will apply these models across a diverse set of use cases and domains.</p>\n<p>Responsibilities</p>\n<p>• Run pre-training, post-training, and deploy state-of-the-art models on clusters with thousands of GPUs. You don&#39;t panic when you see OOM errors or when NCCL feels like not wanting to talk.\n• Generate and curate data for pre-training and post-training, working on evaluations and making sure the model&#39;s performance beats expectations.\n• Develop the necessary tools and frameworks to facilitate data generation, model training, evaluation, and deployment.\n• Collaborate with cross-functional teams to tackle complex use cases using agents and RAG pipelines.\n• Manage research projects and communications with client research teams.</p>\n<p>About You</p>\n<p>• You are fluent in English, and have excellent communication skills. You are at ease explaining complex technical concepts to both technical and non-technical audiences.\n• You&#39;re an expert with PyTorch or JAX.\n• You&#39;re not afraid of contributing to a big codebase and can find yourself around independently with little guidance.\n• You write clean, readable, high-performance, fault-tolerant Python code.\n• You don&#39;t need roadmaps: you just do. 
You don&#39;t need a manager: you just ship.\n• Low-ego, collaborative, and eager to learn.\n• You have a track record of success through personal projects, professional projects, or in academia.</p>\n<p>It would be great if you</p>\n<p>• Hold a PhD/master in a relevant field (e.g., Mathematics, Physics, Machine Learning), but if you&#39;re an exceptional candidate from a different background, you should apply.\n• Can bring a variety of research experience (agents, multi-modality, robotics, diffusion, time-series).\n• Have contributed to a large codebase used by many (open source or in the industry).\n• Have a track record of publications in top academic journals or conferences.\n• Love improving existing code by fixing typing issues, adding tests, and improving CI pipelines.</p>\n<p>Benefits</p>\n<p>We have local offices in Paris, London, Marseille, Singapore, and Palo Alto. France:\n• Competitive cash salary and equity\n• Food: Daily lunch vouchers\n• Sport: Monthly contribution to a Gympass subscription\n• Transportation: Monthly contribution to a mobility pass\n• Health: Full health insurance for you and your family\n• Parental: Generous parental leave policy\n• Visa sponsorship</p>\n<p>UK:\n• Competitive cash salary and equity\n• Insurance\n• Transportation: Reimburse office parking charges, or 90GBP/month for public transport\n• Sport: 90GBP/month reimbursement for gym membership\n• Meal voucher: £200 monthly allowance for meals\n• Pension plan: SmartPension (percentages are 5% Employee &amp; 3% Employer)</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_53060679-f73","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Mistral 
AI","sameAs":"https://mistral.ai"},"x-apply-url":"https://jobs.lever.co/mistral/b7ae8fc4-5779-4ad2-8f5b-632b4d9498cf","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["PyTorch","JAX","Python","Machine Learning","Research Engineering"],"x-skills-preferred":[],"datePosted":"2026-03-10T11:24:35.662Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PyTorch, JAX, Python, Machine Learning, Research Engineering"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4e0b9271-cdd"},"title":"Research Engineer / Scientist, Alignment Science","description":"<p><strong>About the role:</strong></p>\n<p>You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on Alignment Science, you&#39;ll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.</p>\n<p>Our blog provides an overview of topics that the Alignment Science team is either currently exploring or has previously explored. 
Our current topics of focus include...</p>\n<ul>\n<li><strong>Scalable Oversight:</strong> Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.</li>\n</ul>\n<ul>\n<li><strong>AI Control:</strong> Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.</li>\n</ul>\n<ul>\n<li><strong>Alignment Stress-testing:</strong> Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.</li>\n</ul>\n<ul>\n<li><strong>Automated Alignment Research:</strong> Building and aligning a system that can speed up &amp; improve alignment research.</li>\n</ul>\n<ul>\n<li><strong>Alignment Assessments:</strong> Understanding and documenting the highest-stakes and most concerning emerging properties of models through pre-deployment alignment and welfare assessments (see our Claude 4 System Card), misalignment-risk safety cases, and coordination with third-party evaluators.</li>\n</ul>\n<ul>\n<li><strong>Safeguards Research:</strong> Developing robust defenses against adversarial attacks, comprehensive evaluation frameworks for model safety, and automated systems to detect and mitigate potential risks before deployment.</li>\n</ul>\n<ul>\n<li><strong>Model Welfare:</strong> Investigating and addressing potential model welfare, moral status, and related questions. 
See our program announcement and welfare assessment in the Claude 4 system card for more.</li>\n</ul>\n<p>Note: For this role, we conduct all interviews in Python and prefer candidates to be based in the Bay Area.</p>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Testing the robustness of our safety techniques by training language models to subvert our safety techniques, and seeing how effective they are at subverting our interventions.</li>\n</ul>\n<ul>\n<li>Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.</li>\n</ul>\n<ul>\n<li>Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.</li>\n</ul>\n<ul>\n<li>Write scripts and prompts to efficiently produce evaluation questions to test models’ reasoning abilities in safety-relevant contexts.</li>\n</ul>\n<ul>\n<li>Contribute ideas, figures, and writing to research papers, blog posts, and talks.</li>\n</ul>\n<ul>\n<li>Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have significant software, ML, or research engineering experience</li>\n</ul>\n<ul>\n<li>Have some experience contributing to empirical AI research projects</li>\n</ul>\n<ul>\n<li>Have some familiarity with technical AI safety research</li>\n</ul>\n<ul>\n<li>Prefer fast-moving collaborative projects to extensive solo efforts</li>\n</ul>\n<ul>\n<li>Pick up slack, even if it goes outside your job description</li>\n</ul>\n<ul>\n<li>Care about the impacts of AI</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience authoring research papers in machine learning, NLP, or AI safety</li>\n</ul>\n<ul>\n<li>Have experience with LLMs</li>\n</ul>\n<ul>\n<li>Have experience with reinforcement learning</li>\n</ul>\n<ul>\n<li>Have experience with Kubernetes clusters and complex 
shared codebases</li>\n</ul>\n<p><strong>Candidates need not have:</strong></p>\n<ul>\n<li>100% of the skills needed to perform the job</li>\n</ul>\n<ul>\n<li>Formal certifications or education credentials</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary:</p>\n<p>$350,000 - $500,000 USD</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. 
We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruits through our website and other job boards, and we will never ask you to pay for any part of the recruitment process.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_4e0b9271-cdd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4631822008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $500,000 USD","x-skills-required":["Python","Machine Learning","Research Engineering","AI Safety","Scalable Oversight","AI Control","Alignment Stress-testing","Automated Alignment Research","Alignment Assessments","Safeguards Research","Model Welfare"],"x-skills-preferred":["Experience authoring research papers in machine learning, NLP, or AI safety","Experience with LLMs","Experience with reinforcement learning","Experience with Kubernetes clusters and complex shared codebases"],"datePosted":"2026-03-08T13:51:34.613Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, Research Engineering, AI Safety, Scalable Oversight, AI Control, Alignment Stress-testing, Automated Alignment Research, Alignment Assessments, Safeguards Research, Model Welfare, Experience authoring research papers in machine learning, NLP, or AI safety, Experience with LLMs, Experience with reinforcement learning, 
Experience with Kubernetes clusters and complex shared codebases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_716d3247-e3f"},"title":"ML/Research Engineer, Safeguards","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the role</strong></p>\n<p>We are looking for ML Engineers and Research Engineers to help detect and mitigate misuse of our AI systems. As a member of the Safeguards ML team, you will build systems that identify harmful use—from individual policy violations to sophisticated, coordinated attacks—and develop defenses that keep our products safe as capabilities advance. You will also work on systems that protect user wellbeing and ensure our models behave appropriately across a wide range of contexts. This work feeds directly into Anthropic&#39;s Responsible Scaling Policy commitments.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop classifiers to detect misuse and anomalous behavior at scale. 
This includes developing synthetic data pipelines for training classifiers and methods to automatically source representative evaluations to iterate on</li>\n<li>Build systems to monitor for harms that span multiple exchanges, such as coordinated cyber attacks and influence operations, and develop new methods for aggregating and analyzing signals across contexts</li>\n<li>Evaluate and improve the safety of agentic products—developing both threat models and environments to test for agentic risks, and developing and deploying mitigations for prompt injection attacks</li>\n<li>Conduct research on automated red-teaming, adversarial robustness, and other research that helps test for or find misuse</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have 4+ years of experience in ML engineering, research engineering, or applied research, in academia or industry</li>\n<li>Have proficiency in Python and experience building ML systems</li>\n<li>Are comfortable working across the research-to-deployment pipeline, from exploratory experiments to production systems</li>\n<li>Are worried about misuse risks of AI systems, and want to work to mitigate them</li>\n<li>Have strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p><strong>Strong candidates may also have experience with</strong></p>\n<ul>\n<li>Language modeling and transformers</li>\n<li>Building classifiers, anomaly detection systems, or behavioral ML</li>\n<li>Adversarial machine learning or red-teaming</li>\n<li>Interpretability or probes</li>\n<li>Reinforcement learning</li>\n<li>High-performance, large-scale ML systems</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. 
<strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship</strong></p>\n<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. 
We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_716d3247-e3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4949336008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $500,000 USD","x-skills-required":["Python","Machine Learning","Research Engineering","Adversarial Machine Learning","Red-teaming","Interpretability","Probes","Reinforcement Learning","High-performance, large-scale ML systems"],"x-skills-preferred":["Language modeling and transformers","Building classifiers, anomaly detection systems, or behavioral ML"],"datePosted":"2026-03-08T13:46:45.711Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, 
NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, Research Engineering, Adversarial Machine Learning, Red-teaming, Interpretability, Probes, Reinforcement Learning, High-performance, large-scale ML systems, Language modeling and transformers, Building classifiers, anomaly detection systems, or behavioral ML","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1db86d95-013"},"title":"Researcher, Frontier Biological and Chemical Risks","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Researcher, Frontier Biological and Chemical Risks</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Safety Systems</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>Estimated Base Salary $295K – $445K</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Preparedness team is an important part of the Safety Systems org at 
OpenAI, and is guided by OpenAI’s Preparedness Framework.</p>\n<p>Frontier AI models have the potential to benefit all of humanity, but also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models.</p>\n<p>The mission of the Preparedness team is to:</p>\n<ol>\n<li>Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic to our society</li>\n</ol>\n<ol>\n<li>Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems</li>\n</ol>\n<p>Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work that has far-reaching importance for the company and for society.</p>\n<p><strong>About the Role</strong></p>\n<p>We are looking to hire exceptional research engineers who can push the boundaries of our frontier models. Specifically, we are looking for those who will help us shape our empirical grasp of the whole spectrum of AI safety concerns and will own individual threads within this endeavor end-to-end.</p>\n<p>You will own the scientific validity of our frontier preparedness capability evaluations—designing new evals grounded in real threat models (including high-consequence domains like CBRN as well as cyber and other frontier-risk areas), and maintaining existing evals so they don’t go stale or silently regress. 
You’ll define datasets, graders, rubrics, and threshold guidance, and produce auditable artifacts (evaluation cards, capability reports, system-card inputs) that leadership can trust during high-stakes launches.</p>\n<p><strong>In this role, you&#39;ll:</strong></p>\n<ul>\n<li>Work on identifying emerging AI safety risks and new methodologies for exploring the impact of these risks</li>\n</ul>\n<ul>\n<li>Build (and then continuously refine) evaluations of frontier AI models that assess the extent of identified risks</li>\n</ul>\n<ul>\n<li>Design and build scalable systems and processes that can support these kinds of evaluations</li>\n</ul>\n<ul>\n<li>Contribute to the refinement of risk management and the overall development of “best practice” guidelines for AI safety evaluations</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Are passionate and knowledgeable about short-term and long-term AI safety risks</li>\n</ul>\n<ul>\n<li>Demonstrate the ability to think outside the box and have a robust “red-teaming mindset”</li>\n</ul>\n<ul>\n<li>Have experience in ML research engineering, ML observability and monitoring, creating large language model-enabled applications, and/or another technical domain applicable to AI risk</li>\n</ul>\n<ul>\n<li>Are able to operate effectively in a dynamic and extremely fast-paced research environment as well as scope and deliver projects end-to-end</li>\n</ul>\n<p><strong>It would be great if you also have:</strong></p>\n<ul>\n<li>First-hand experience in red-teaming systems—be it computer systems or otherwise</li>\n</ul>\n<ul>\n<li>A good understanding of the (nuances of) societal aspects of AI deployment</li>\n</ul>\n<ul>\n<li>Excellent communication skills and the ability to work cross-functionally</li>\n</ul>\n<p><em>This role may require access to technology or technical data controlled under the U.S. Export Administration Regulations or International Traffic in Arms Regulations. 
Therefore, this role is restricted to individuals described in paragraph (a)(1) of the definition of “U.S. person” in the U.S. Export Administration Regulations, 15 C.F.R. § 772.1, and in the International Traffic in Arms Regulations, 22 C.F.R. § 120.62. U.S. persons are U.S. citizens, U.S. legal permanent residents, individuals granted asylum status in the United States, and individuals who are not U.S. citizens but are lawfully admitted for permanent residence in the United States.</em></p>","url":"https://yubhub.co/jobs/job_1db86d95-013","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/3fc46cbc-7e5a-4edc-96dc-ca433e76d181","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$295K – $445K","x-skills-required":["ML research engineering","ML observability and monitoring","creating large language model-enabled applications","AI risk","red-teaming systems","societal aspects of AI deployment"],"x-skills-preferred":["first-hand experience in red-teaming systems","excellent communication skills","ability to work cross-functionally"],"datePosted":"2026-03-06T18:40:56.981Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML research engineering, ML observability and monitoring, creating large language model-enabled applications, AI risk, red-teaming systems, societal aspects of AI deployment, first-hand experience in red-teaming systems, excellent communication skills, ability to work 
cross-functionally","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":295000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_61280fe7-04a"},"title":"Researcher, Interpretability","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Researcher, Interpretability</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>Safety Systems</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$295K – $445K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Interpretability team studies internal representations of deep 
learning models. We are interested in using representations to understand model behavior, and in engineering models to have more understandable representations. We are particularly interested in applying our understanding to ensure the safety of powerful AI systems. Our working style is collaborative and curiosity-driven.</p>\n<p><strong>About the Role</strong></p>\n<p>OpenAI is seeking a researcher passionate about understanding deep networks, with a strong background in engineering, quantitative reasoning, and the research process. You will develop and carry out a research plan in mechanistic interpretability, in close collaboration with a highly motivated team. You will play a critical role in helping OpenAI ensure future models remain safe even as they grow in capability. This will make a significant impact on our goal of building and deploying safe AGI.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Develop and publish research on techniques for understanding representations of deep networks.</li>\n</ul>\n<ul>\n<li>Engineer infrastructure for studying model internals at scale.</li>\n</ul>\n<ul>\n<li>Collaborate across teams to work on projects that OpenAI is uniquely suited to pursue.</li>\n</ul>\n<ul>\n<li>Guide research directions toward demonstrable usefulness and/or long-term scalability.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Are excited about OpenAI’s mission of ensuring AGI benefits all of humanity, and are aligned with OpenAI’s charter.</li>\n</ul>\n<ul>\n<li>Show enthusiasm for long-term AI safety, and have thought deeply about technical paths to safe AGI.</li>\n</ul>\n<ul>\n<li>Bring experience in the field of AI safety, mechanistic interpretability, or spiritually related disciplines.</li>\n</ul>\n<ul>\n<li>Hold a Ph.D. 
or have research experience in computer science, machine learning, or a related field.</li>\n</ul>\n<ul>\n<li>Thrive in environments involving large-scale AI systems, and are excited to make use of OpenAI’s unique resources in this area.</li>\n</ul>\n<ul>\n<li>Possess 2+ years of research engineering experience and proficiency in Python or similar languages.</li>\n</ul>\n<ul>\n<li>Are deeply curious.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_61280fe7-04a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/c44268f1-717b-4da3-9943-2557f7d739f0","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$295K – $445K • Offers Equity","x-skills-required":["Python","Machine Learning","Deep Learning","Research Engineering","Computer Science"],"x-skills-preferred":["AI Safety","Mechanistic Interpretability","Quantitative Reasoning","Engineering"],"datePosted":"2026-03-06T18:39:59.202Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, Deep Learning, Research Engineering, Computer Science, AI Safety, Mechanistic Interpretability, Quantitative Reasoning, Engineering","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":295000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b0c58796-627"},"title":"Research Engineer, Frontier Evals & Environments","description":"<p><strong>Job Posting</strong></p>\n<p>Research Engineer, Frontier Evals &amp; Environments</p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Research</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$205K – $380K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the team</strong></p>\n<p>The Frontier Evals &amp; Environments team builds north star model 
environments to drive progress towards safe AGI/ASI. This team builds ambitious environments to measure and steer our models, and creates self-improvement loops to steer our training, safety, and launch decisions. Some of the team&#39;s open-sourced evaluations include GDPval, SWE-bench Verified, MLE-bench, PaperBench, and SWE-Lancer, and the team built and ran frontier evaluations for GPT-4o, o1, o3, GPT-4.5, ChatGPT Agent, and GPT-5. If you are interested in experiencing firsthand the fast progress of our models, and steering them towards good, this is the team for you.</p>\n<p><strong>About you</strong></p>\n<p>We seek exceptional research engineers who can push the boundaries of our frontier models. Specifically, we are looking for those who will help us shape our empirical grasp of the whole spectrum of AI capabilities measurement and will own individual threads within this endeavor end-to-end.</p>\n<p><strong>In this role, you&#39;ll:</strong></p>\n<ul>\n<li>Create ambitious RL environments to push our models to their limits</li>\n</ul>\n<ul>\n<li>Work on measuring frontier model capabilities, skills, and behaviors</li>\n</ul>\n<ul>\n<li>Develop new methodologies for automatically exploring the behavior of these models</li>\n</ul>\n<ul>\n<li>Help steer training for our largest training runs, and see the future first</li>\n</ul>\n<ul>\n<li>Design scalable systems and processes to support continuous evaluation</li>\n</ul>\n<ul>\n<li>Build self-improvement loops to automate model understanding</li>\n</ul>\n<p><strong>We expect you to be:</strong></p>\n<ul>\n<li>Passionate and knowledgeable about AGI/ASI measurement</li>\n</ul>\n<ul>\n<li>Strong in engineering and statistical analysis</li>\n</ul>\n<ul>\n<li>Able to think outside the box and have a robust “red-teaming mindset”</li>\n</ul>\n<ul>\n<li>Experienced in ML research engineering, stochastic systems, observability and monitoring, LLM-enabled applications, and/or another technical domain applicable to AI 
evaluations</li>\n</ul>\n<ul>\n<li>Able to operate effectively in a dynamic and extremely fast-paced research environment as well as scope and deliver projects end-to-end</li>\n</ul>\n<p><strong>It would be great if you also have:</strong></p>\n<ul>\n<li>First-hand experience in red-teaming systems—be it computer systems or otherwise</li>\n</ul>\n<ul>\n<li>An ability to work cross-functionally</li>\n</ul>\n<ul>\n<li>Excellent communication skills</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_b0c58796-627","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/bba18df5-f30f-4d2c-909c-30e651f95579","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$205K – $380K • Offers Equity","x-skills-required":["ML research engineering","stochastic systems","observability and monitoring","LLM-enabled applications","AI evaluations"],"x-skills-preferred":["red-teaming systems","cross-functional collaboration","communication skills"],"datePosted":"2026-03-06T18:38:17.127Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San 
Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"ML research engineering, stochastic systems, observability and monitoring, LLM-enabled applications, AI evaluations, red-teaming systems, cross-functional collaboration, communication skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":205000,"maxValue":380000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_18a83d32-ae1"},"title":"Researcher, Safety Oversight","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Researcher, Safety Oversight</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Safety Systems</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$295K – $445K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Safety Systems team is responsible for various safety work to 
ensure our best models can be safely deployed to the real world to benefit society, and is at the forefront of OpenAI&#39;s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.</p>\n<p>The Safety Oversight Research team aims to fundamentally advance our capabilities to maintain oversight over frontier AI models, and leverage these advances to ensure OpenAI’s deployed models are safe and beneficial. This requires a breadth of new ML research in the areas of human-AI collaboration, reasoning, robustness, and scalable oversight to keep pace with model capabilities. We invest heavily in developing novel model and system-level methods of identifying and mitigating AI misuse and misalignment.</p>\n<p>Our goal is to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely.</p>\n<p><strong>About the Role</strong></p>\n<p>OpenAI is seeking a senior researcher with a passion for AI safety and experience in safety research. You will set directions for research to maintain effective oversight of safe AGI and work on research projects to identify and mitigate misuse and misalignment in our AI systems. 
You will play a critical role in defining how a safe AI system should look in the future at OpenAI, making a significant impact on our mission to build and deploy safe AGI.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Develop and refine AI monitor models to detect and mitigate known and emerging patterns of misuse and misalignment.</li>\n</ul>\n<ul>\n<li>Set research directions and strategies to make our AI systems safer, more aligned, and more robust.</li>\n</ul>\n<ul>\n<li>Evaluate and design effective red-teaming pipelines to examine the end-to-end robustness of our safety systems, and identify areas for future improvement.</li>\n</ul>\n<ul>\n<li>Conduct research to improve models’ ability to reason about questions of human values, and apply these improved models to practical safety challenges.</li>\n</ul>\n<ul>\n<li>Coordinate and collaborate with cross-functional teams, including T&amp;S, legal, policy and other research teams, to ensure that our products meet the highest safety standards.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter</li>\n</ul>\n<ul>\n<li>Show enthusiasm for AI safety and dedication to enhancing the safety of cutting-edge AI models for real-world use.</li>\n</ul>\n<ul>\n<li>Bring 4+ years of experience in the field of AI safety, especially in areas like RLHF, human-AI collaboration, fairness &amp; biases.</li>\n</ul>\n<ul>\n<li>Hold a Ph.D. 
or other degree in computer science, machine learning, or a related field.</li>\n</ul>\n<ul>\n<li>Thrive in environments involving large-scale AI systems.</li>\n</ul>\n<ul>\n<li>Possess 4+ years of research engineering experience and proficiency in Python or similar languages.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_18a83d32-ae1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/9b11373c-1643-4ea6-bbcd-033d5b8a0d3e","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$295K – $445K • Offers Equity","x-skills-required":["AI safety","RLHF","human-AI collaboration","fairness & biases","Python","research engineering"],"x-skills-preferred":["machine learning","computer science"],"datePosted":"2026-03-06T18:37:17.785Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI safety, RLHF, human-AI collaboration, fairness & biases, Python, research engineering, machine learning, computer 
science","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":295000,"maxValue":445000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_23395dfc-6ff"},"title":"Recruiter, AI/ML Research","description":"<p><strong>Recruiter, AI/ML Research</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Location Type</strong></p>\n<p>Hybrid</p>\n<p><strong>Department</strong></p>\n<p>People</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>San Francisco $216K – $240K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable 
state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>OpenAI’s mission is to build safe artificial general intelligence (AGI) that benefits all of humanity. Achieving this requires bringing the world’s most exceptional talent under one roof to push the boundaries of what’s possible.</p>\n<p>Our Research Recruiting team plays a critical role in this effort. We are an embedded part of the research organization, working side by side with our research staff to deeply understand evolving priorities, build trust, and strategically shape the future of OpenAI’s talent.</p>\n<p><strong>About the Role</strong></p>\n<p>You will own and execute long-term talent strategies to identify, engage, and recruit many of the world’s leading and emerging AI researchers, research engineers, and technical scientists working at the frontier of machine learning. 
This is not a traditional execution-focused recruiting role.</p>\n<p>You will operate as a strategic partner to OpenAI’s research staff, helping define hiring priorities, shape search strategy, influence candidate evaluation, and guide hiring decisions that directly impact the direction and quality of our frontier-model research and fulfillment of our mission.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Partner directly with research and technical staff to define hiring priorities, shape search strategies, and anticipate future talent needs as technical roadmaps evolve.</li>\n</ul>\n<ul>\n<li>Proactively identify and cultivate exceptional AI/ML research talent across industry, academia, and emerging labs, often before formal hiring needs exist.</li>\n</ul>\n<ul>\n<li>Use market insights and candidate signals to influence hiring decisions, leveling, and compensation strategy for highly specialized research roles.</li>\n</ul>\n<ul>\n<li>Serve as a trusted advisor throughout candidate evaluation and closing — helping leaders calibrate for research excellence, long-term potential, and organizational fit.</li>\n</ul>\n<ul>\n<li>Collaborate closely with your sourcing partner to execute complex, high-impact searches in ambiguous or rapidly evolving technical domains.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>Significant experience recruiting within highly technical or specialized environments such as ML/AI, distributed systems, infrastructure, scientific computing, or quantitative research.</li>\n</ul>\n<ul>\n<li>A deep interest in AI research and a desire to engage directly with global research communities.</li>\n</ul>\n<ul>\n<li>A track record of leading complex, ambiguous technical searches from early talent mapping through close.</li>\n</ul>\n<ul>\n<li>Experience navigating high-stakes negotiations with senior 
technical or research candidates.</li>\n</ul>\n<ul>\n<li>Comfort operating in fast-moving environments where hiring priorities and role definitions may evolve over time.</li>\n</ul>\n<p><strong>Workplace &amp; Location</strong></p>\n<p>This role is based in our San Francisco office and we aren’t considering remote applications at this time. We use a hybrid work model of 3 days in the office with optional work from home on Thursdays and Fridays. We also offer relocation assistance to new employees.</p>\n<p>Our open-plan offices have height-adjustable desks, conference rooms, phone booths, well-stocked kitchens full of snacks and drinks, three in-house prepared meals daily, outdoor space for working and socializing, wellness rooms, private bike storage, and more.</p>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_23395dfc-6ff","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/ed871ac9-44ae-47fa-beaa-b9ab7ff31012","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216K – $240K","x-skills-required":["Recruiting","AI/ML Research","Research Recruiting","Talent Acquisition","Strategic Planning","Market Insights","Candidate Evaluation","Hiring Decisions","Compensation Strategy","Leadership Development","Global Research Communities","Highly Technical Environments","ML/AI","Distributed Systems","Infrastructure","Scientific Computing","Quantitative Research"],"x-skills-preferred":["AI Research","Machine Learning","Data Science","Research Engineering","Technical Science","Global Talent Acquisition","Strategic Sourcing","Talent Management","Leadership Development","Global Research Communities"],"datePosted":"2026-03-06T18:34:38.258Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Recruiting, AI/ML Research, Research Recruiting, Talent Acquisition, Strategic Planning, Market Insights, Candidate Evaluation, Hiring Decisions, Compensation Strategy, Leadership Development, Global Research Communities, Highly Technical Environments, ML/AI, Distributed Systems, Infrastructure, Scientific Computing, Quantitative 
Research, AI Research, Machine Learning, Data Science, Research Engineering, Technical Science, Global Talent Acquisition, Strategic Sourcing, Talent Management, Leadership Development, Global Research Communities","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dfb4b690-29b"},"title":"Machine Learning Scientist","description":"<p>As a Machine Learning Scientist, you will build the framework and tools necessary to drive our research strategy and innovation roadmap. You will work alongside our researchers to streamline their research and bring our innovations to life.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Implement the technical strategy to support our research roadmap, exploring frontier technologies to shape the future of Sports Gaming</li>\n<li>Work with the research team to support experiments with tooling and platforms, and data acquisition and management</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>BS in Computer Science, mathematics or related field, or equivalent experience</li>\n<li>Strong technical background, experience working with both research engineering and game development, and a proven track record of delivery</li>\n</ul>","url":"https://yubhub.co/jobs/job_dfb4b690-29b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic 
Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Machine-Learning-Scientist-Future-Opportunities/211381","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["BS in Computer Science, mathematics or related field, or equivalent experience","Strong technical background, experience working with both research engineering and game development, and a proven track record of delivery"],"x-skills-preferred":["AI and machine learning, their relevant tools and platforms, and their application in gaming"],"datePosted":"2026-01-01T16:53:02.476Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"BS in Computer Science, mathematics or related field, or equivalent experience, Strong technical background, experience working with both research engineering and game development, and a proven track record of delivery, AI and machine learning, their relevant tools and platforms, and their application in gaming"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68f7e9aa-e46"},"title":"Senior Machine Learning Engineer - Future Opportunities","description":"<p>As a Senior Machine Learning Engineer, you will build the framework and tools necessary to drive our research strategy and innovation roadmap. 
You will work closely alongside our researchers to streamline their research and bring our innovations to life.</p>\n<p><strong>What you&#39;ll do</strong></p>\n<ul>\n<li>Implement the technical strategy to support our research roadmap, exploring frontier technologies to shape the future of Sports Gaming</li>\n<li>Work closely with the research team to support experiments with tooling and platforms, as well as data acquisition and management</li>\n</ul>\n<p><strong>What you need</strong></p>\n<ul>\n<li>BS in Computer Science, mathematics or related field, or equivalent experience</li>\n<li>Strong technical background, experience working with both research engineering and game development, and a proven track record of delivery</li>\n</ul>","url":"https://yubhub.co/jobs/job_68f7e9aa-e46","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Senior-Machine-Learning-Engineer-Future-Opportunities/210410","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"regular employee","x-salary-range":"The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. 
If you reside in a different location, a recruiter will advise on the applicable range and benefits.","x-skills-required":["BS in Computer Science, mathematics or related field, or equivalent experience","Strong technical background, experience working with both research engineering and game development, and a proven track record of delivery"],"x-skills-preferred":["AI and machine learning, their relevant tools and platforms, and their application in gaming"],"datePosted":"2026-01-01T16:50:49.830Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver, British Columbia, Canada"}},"occupationalCategory":"Engineering","industry":"Technology","skills":"BS in Computer Science, mathematics or related field, or equivalent experience, Strong technical background, experience working with both research engineering and game development, and a proven track record of delivery, AI and machine learning, their relevant tools and platforms, and their application in gaming"}]}