{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/scalable-oversight"},"x-facet":{"type":"skill","slug":"scalable-oversight","display":"Scalable Oversight","count":4},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ba66dcb1-8d9"},"title":"Research Scientist, AI Controls and Monitoring","description":"<p>We&#39;re seeking a Research Scientist to join our team focused on AI Controls and Monitoring. As a key member of our team, you will design methods, systems, and experiments to ensure that advanced AI models and agents remain aligned with intended goals, even in high-stakes or adversarial environments.</p>\n<p>Your responsibilities will include developing monitoring techniques and observability methods, researching mechanisms for layered control, and designing red-team simulations to probe weaknesses in oversight and control mechanisms.</p>\n<p>To succeed in this role, you&#39;ll need a strong background in machine learning, particularly in generative AI, and at least three years of experience addressing sophisticated ML problems. You should be comfortable designing control and monitoring experiments for AI systems, building prototype systems, and quickly turning new ideas from the research literature into working prototypes.</p>\n<p>In addition to your technical expertise, you&#39;ll need strong written and verbal communication skills to operate in a cross-functional team.</p>\n<p>This role offers a competitive salary range of $216,000-$270,000 USD, depending on location and experience, as well as equity-based compensation and benefits, including comprehensive health, dental, and vision coverage, retirement benefits, a learning and development stipend, and generous PTO.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ba66dcb1-8d9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4675694005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Machine Learning","Generative AI","AI Control Protocols","AI Risk Evaluations","Runtime Monitoring","Anomaly Detection","Observability"],"x-skills-preferred":["Post-Training and RL Techniques","Scalable Oversight","Interpretability","Debate"],"datePosted":"2026-04-18T15:58:38.219Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, Generative AI, AI Control Protocols, AI Risk Evaluations, Runtime Monitoring, Anomaly Detection, Observability, Post-Training and RL Techniques, Scalable Oversight, 
Interpretability, Debate","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b1be4c11-417"},"title":"Senior Research Scientist, Reward Models","description":"<p>As a Senior Research Scientist on our Reward Models team, you&#39;ll lead research efforts to improve how we specify and learn human preferences at scale. Your work will directly shape how our models understand and optimize for what humans actually want, enabling Claude to be more useful, more reliable, and better aligned with human values.</p>\n<p>This role focuses on pushing the frontier of reward modeling for large language models. You&#39;ll develop novel architectures and training methodologies for RLHF, research new approaches to LLM-based evaluation and grading (including rubric-based methods), and investigate techniques to identify and mitigate reward hacking. You&#39;ll collaborate closely with teams across Anthropic, including Finetuning, Alignment Science, and our broader research organization, to ensure your work translates into concrete improvements in both model capabilities and safety.</p>\n<p>We&#39;re looking for someone who can drive ambitious research agendas while also shipping practical improvements to production systems. You&#39;ll have the opportunity to work on some of the most important open problems in AI alignment, with access to frontier models and significant computational resources. Your work will directly advance the science of how we train AI systems to be both highly capable and safe.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead research on novel reward model architectures and training approaches for RLHF</li>\n<li>Develop and evaluate LLM-based grading and evaluation methods, including rubric-driven approaches that improve consistency and interpretability</li>\n<li>Research techniques to detect, characterize, and mitigate reward hacking and specification gaming</li>\n<li>Design experiments to understand reward model generalization, robustness, and failure modes</li>\n<li>Collaborate with the Finetuning team to translate research insights into improvements for production training pipelines</li>\n<li>Contribute to research publications, blog posts, and internal documentation</li>\n<li>Mentor other researchers and help build institutional knowledge around reward modeling</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Have a track record of research contributions in reward modeling, RLHF, or closely related areas of machine learning</li>\n<li>Have experience training and evaluating reward models for large language models</li>\n<li>Are comfortable designing and running large-scale experiments with significant computational resources</li>\n<li>Can work effectively across research and engineering, iterating quickly while maintaining scientific rigor</li>\n<li>Enjoy collaborative research and can communicate complex ideas clearly to diverse audiences</li>\n<li>Care deeply about building AI systems that are both highly capable and safe</li>\n</ul>\n<p>Strong candidates may also:</p>\n<ul>\n<li>Have published research on reward modeling, preference learning, or RLHF</li>\n<li>Have experience with LLM-as-judge approaches, including calibration and reliability challenges</li>\n<li>Have worked on reward hacking, specification gaming, or related robustness problems</li>\n<li>Have 
experience with constitutional AI, debate, or other scalable oversight approaches</li>\n<li>Have contributed to production ML systems at scale</li>\n<li>Have familiarity with interpretability techniques as applied to understanding reward model behavior</li>\n</ul>\n<p>The annual compensation range for this role is $350,000-$500,000 USD.</p>","url":"https://yubhub.co/jobs/job_b1be4c11-417","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5024835008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000-$500,000 USD","x-skills-required":["reward modeling","RLHF","LLM-based evaluation and grading","rubric-driven approaches","reward hacking","specification gaming","large-scale experiments","computational resources","research and engineering","collaborative research","complex ideas communication","AI systems development"],"x-skills-preferred":["published research","LLM-as-judge approaches","calibration and reliability challenges","constitutional AI","debate","scalable oversight approaches","production ML systems","interpretability techniques"],"datePosted":"2026-04-18T15:57:50.755Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel Required) | San Francisco, CA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"reward modeling, RLHF, LLM-based evaluation and grading, rubric-driven approaches, reward hacking, specification gaming, large-scale experiments, computational resources, research and engineering, collaborative research, complex ideas communication, AI systems development, published research, LLM-as-judge approaches, calibration and reliability challenges, constitutional AI, debate, scalable oversight approaches, production ML systems, interpretability techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4e0b9271-cdd"},"title":"Research Engineer / Scientist, Alignment Science","description":"<p><strong>About the role:</strong></p>\n<p>You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on Alignment Science, you&#39;ll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.</p>\n<p>Our blog provides an overview of topics that the Alignment Science team is either currently exploring or has previously explored. 
Our current topics of focus include...</p>\n<ul>\n<li><strong>Scalable Oversight:</strong> Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.</li>\n<li><strong>AI Control:</strong> Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.</li>\n<li><strong>Alignment Stress-testing:</strong> Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.</li>\n<li><strong>Automated Alignment Research:</strong> Building and aligning a system that can speed up &amp; improve alignment research.</li>\n<li><strong>Alignment Assessments:</strong> Understanding and documenting the highest-stakes and most concerning emerging properties of models through pre-deployment alignment and welfare assessments (see our Claude 4 System Card), misalignment-risk safety cases, and coordination with third-party evaluators.</li>\n<li><strong>Safeguards Research:</strong> Developing robust defenses against adversarial attacks, comprehensive evaluation frameworks for model safety, and automated systems to detect and mitigate potential risks before deployment.</li>\n<li><strong>Model Welfare:</strong> Investigating and addressing potential model welfare, moral status, and related questions. See our program announcement and welfare assessment in the Claude 4 system card for more.</li>\n</ul>\n<p><em>Note: For this role, we conduct all interviews in Python and prefer candidates to be based in the Bay Area.</em></p>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Test the robustness of our safety techniques by training language models to subvert them, and measure how effective the models are at subverting our interventions.</li>\n<li>Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.</li>\n<li>Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.</li>\n<li>Write scripts and prompts to efficiently produce evaluation questions to test models’ reasoning abilities in safety-relevant contexts.</li>\n<li>Contribute ideas, figures, and writing to research papers, blog posts, and talks.</li>\n<li>Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have significant software, ML, or research engineering experience</li>\n<li>Have some experience contributing to empirical AI research projects</li>\n<li>Have some familiarity with technical AI safety research</li>\n<li>Prefer fast-moving collaborative projects to extensive solo efforts</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Care about the impacts of AI</li>\n</ul>\n<p><strong>Strong candidates may also:</strong></p>\n<ul>\n<li>Have experience authoring research papers in machine learning, NLP, or AI safety</li>\n<li>Have experience with LLMs</li>\n<li>Have experience with reinforcement learning</li>\n<li>Have experience with Kubernetes clusters and complex shared codebases</li>\n</ul>\n<p><strong>Candidates need not 
have:</strong></p>\n<ul>\n<li>100% of the skills needed to perform the job</li>\n<li>Formal certifications or education credentials</li>\n</ul>\n<p>The annual compensation range for this role is listed below.</p>\n<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>\n<p>Annual Salary:</p>\n<p>$350,000 - $500,000 USD</p>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>\n<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>\n<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruits through our website and other job boards, and we will never ask you to pay for any part of the recruitment process.</p>","url":"https://yubhub.co/jobs/job_4e0b9271-cdd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4631822008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $500,000 USD","x-skills-required":["Python","Machine Learning","Research Engineering","AI Safety","Scalable Oversight","AI Control","Alignment Stress-testing","Automated Alignment Research","Alignment Assessments","Safeguards Research","Model Welfare"],"x-skills-preferred":["Experience authoring research papers in machine learning, NLP, or AI safety","Experience with LLMs","Experience with reinforcement learning","Experience with Kubernetes clusters and complex shared codebases"],"datePosted":"2026-03-08T13:51:34.613Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, Research Engineering, AI Safety, Scalable Oversight, AI Control, Alignment Stress-testing, Automated Alignment 
Research, Alignment Assessments, Safeguards Research, Model Welfare, Experience authoring research papers in machine learning, NLP, or AI safety, Experience with LLMs, Experience with reinforcement learning, Experience with Kubernetes clusters and complex shared codebases","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_28cb565e-69a"},"title":"Researcher, Health AI","description":"<p><strong>Researcher, Health AI</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Safety Systems</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$295K – $445K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n<li>401(k) retirement plan with employer match</li>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n<li>Mental health and wellness support</li>\n<li>Employer-paid basic life and disability coverage</li>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n<li>Relocation support for eligible employees</li>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Safety Systems team is dedicated to ensuring the safety, robustness, and reliability of AI models as they are deployed in the real world.</p>\n<p>OpenAI’s charter calls on us to ensure the benefits of AI are distributed widely. Our Health AI team is focused on enabling universal access to high-quality medical information. 
We work at the intersection of AI safety research and healthcare applications, aiming to create trustworthy AI models that can assist medical professionals and improve patient outcomes.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re seeking strong researchers who are passionate about advancing AI safety and improving global health outcomes. As a Research Scientist, you will contribute to the development of safe and effective AI models for healthcare applications. You will implement practical and general methods to improve the behavior, knowledge, and reasoning of our models in these settings. This will require research into safety and alignment techniques that we aim to generalize towards safe and beneficial AGI.</p>\n<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Design and apply practical and scalable methods to improve safety and reliability of our models, including RLHF, automated red teaming, scalable oversight, etc.</li>\n<li>Evaluate methods using health-related data, ensuring models provide accurate, reliable, and trustworthy information.</li>\n<li>Build reusable libraries for applying general alignment techniques to our models.</li>\n<li>Proactively understand the safety of our models and systems, identifying areas of risk.</li>\n<li>Work with cross-team stakeholders to integrate methods in core model training and launch safety improvements in OpenAI’s products.</li>\n</ul>\n<p><strong>You might thrive in this role if you:</strong></p>\n<ul>\n<li>Are excited about OpenAI’s mission of ensuring AGI is universally beneficial and are aligned with OpenAI’s charter.</li>\n<li>Demonstrate passion for AI safety and improving global health outcomes.</li>\n<li>Have 4+ years of experience with deep learning research and LLMs, especially practical alignment topics such as RLHF, automated red teaming, scalable oversight, etc.</li>\n<li>Hold a Ph.D. or other degree in computer science, AI, machine learning, or a related field.</li>\n<li>Stay goal-oriented instead of method-oriented, and are not afraid of unglamorous but high-value work when needed.</li>\n<li>Possess experience making practical model improvements for AI model deployment.</li>\n<li>Own problems end-to-end, and are willing to pick up whatever knowledge you&#39;re missing to get the job done.</li>\n<li>Are a team player who enjoys collaborative work environments.</li>\n<li>Bonus: possess experience in health-related AI research or deployments.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>","url":"https://yubhub.co/jobs/job_28cb565e-69a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/bcbe08e3-9593-431d-bc99-37e35e035742","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$295K – $445K • Offers Equity","x-skills-required":["Deep learning research","LLMs","RLHF","Automated red teaming","Scalable oversight","Health-related data","AI safety research","Healthcare applications","Trustworthy AI models","Medical professionals","Patient outcomes","Ph.D. or other degree in computer science, AI, machine learning, or a related field"],"x-skills-preferred":["Team player","Collaborative work environments","Health-related AI research or deployments"],"datePosted":"2026-03-06T18:40:30.820Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Deep learning research, LLMs, RLHF, Automated red teaming, Scalable oversight, Health-related data, AI safety research, Healthcare applications, Trustworthy AI models, Medical professionals, Patient outcomes, Ph.D. or other degree in computer science, AI, machine learning, or a related field, Team player, Collaborative work environments, Health-related AI research or deployments","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":295000,"maxValue":445000,"unitText":"YEAR"}}}]}
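
A minimal consumer sketch for the feed above, assuming it has been saved locally as scalable-oversight.json (a hypothetical filename; per x-feed-notice this file covers only the most recently enriched jobs). All field names used below ("jobs", "hiringOrganization", "x-skills-required", "x-skills-preferred", "baseSalary") come from the feed itself; the skill-matching logic is illustrative, not part of any YubHub API.

```python
import json

# Load the feed. "scalable-oversight.json" is an assumed local filename.
with open("scalable-oversight.json", encoding="utf-8") as f:
    feed = json.load(f)

for job in feed["jobs"]:
    org = job["hiringOrganization"]["name"]
    salary = job.get("baseSalary", {}).get("value", {})
    # Combine required and preferred skills; both are optional x- fields.
    skills = job.get("x-skills-required", []) + job.get("x-skills-preferred", [])
    # Match the feed's facet ("scalable-oversight") case-insensitively.
    if any("scalable oversight" in s.lower() for s in skills):
        print(f'{org}: {job["title"]} '
              f'(${salary.get("minValue", "?")}-${salary.get("maxValue", "?")}/yr)')
```

For the full corpus rather than this 100-job snapshot, the feed's own x-feed-notice points to the paginated /stats/by-facet endpoint or /search.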