{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/classifiers"},"x-facet":{"type":"skill","slug":"classifiers","display":"Classifiers","count":7},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_716d3247-e3f"},"title":"ML/Research Engineer, Safeguards","description":"<p><strong>About Anthropic</strong></p>\n<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>\n<p><strong>About the role</strong></p>\n<p>We are looking for ML Engineers and Research Engineers to help detect and mitigate misuse of our AI systems. As a member of the Safeguards ML team, you will build systems that identify harmful use—from individual policy violations to sophisticated, coordinated attacks—and develop defenses that keep our products safe as capabilities advance. You will also work on systems that protect user wellbeing and ensure our models behave appropriately across a wide range of contexts. This work feeds directly into Anthropic&#39;s Responsible Scaling Policy commitments.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop classifiers to detect misuse and anomalous behavior at scale. 
This includes developing synthetic data pipelines for training classifiers and methods to automatically source representative evaluations to iterate on</li>\n<li>Build systems to monitor for harms that span multiple exchanges, such as coordinated cyber attacks and influence operations, and develop new methods for aggregating and analyzing signals across contexts</li>\n<li>Evaluate and improve the safety of agentic products—developing both threat models and environments to test for agentic risks, and developing and deploying mitigations for prompt injection attacks</li>\n<li>Conduct research on automated red-teaming, adversarial robustness, and other research that helps test for or find misuse</li>\n</ul>\n<p><strong>You may be a good fit if you</strong></p>\n<ul>\n<li>Have 4+ years of experience in ML engineering, research engineering, or applied research, in academia or industry</li>\n<li>Have proficiency in Python and experience building ML systems</li>\n<li>Are comfortable working across the research-to-deployment pipeline, from exploratory experiments to production systems</li>\n<li>Are worried about misuse risks of AI systems, and want to work to mitigate them</li>\n<li>Have strong communication skills and ability to explain complex technical concepts to non-technical stakeholders</li>\n</ul>\n<p><strong>Strong candidates may also have experience with</strong></p>\n<ul>\n<li>Language modeling and transformers</li>\n<li>Building classifiers, anomaly detection systems, or behavioral ML</li>\n<li>Adversarial machine learning or red-teaming</li>\n<li>Interpretability or probes</li>\n<li>Reinforcement learning</li>\n<li>High-performance, large-scale ML systems</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. 
<strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>\n<p><strong>Visa sponsorship</strong></p>\n<p>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong></p>\n<p>Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p><strong>Your safety matters to us.</strong></p>\n<p>To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. 
We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_716d3247-e3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4949336008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$350,000 - $500,000 USD","x-skills-required":["Python","Machine Learning","Research Engineering","Adversarial Machine Learning","Red-teaming","Interpretability","Probes","Reinforcement Learning","High-performance, large-scale ML systems"],"x-skills-preferred":["Language modeling and transformers","Building classifiers, anomaly detection systems, or behavioral ML"],"datePosted":"2026-03-08T13:46:45.711Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, 
NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Machine Learning, Research Engineering, Adversarial Machine Learning, Red-teaming, Interpretability, Probes, Reinforcement Learning, High-performance, large-scale ML systems, Language modeling and transformers, Building classifiers, anomaly detection systems, or behavioral ML","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":350000,"maxValue":500000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_138b24e2-2bd"},"title":"Senior Software Engineer, Anti-Abuse & Security","description":"<p><strong>About the role</strong> The Anti-Abuse team is the front line defending Replit&#39;s platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users. 
This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.</p>\n<p>What makes this role unique is the AI-native nature of Replit&#39;s platform. You&#39;ll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse. If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers. You&#39;ll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.</p>\n<p><strong>In this role you will…</strong></p>\n<ul>\n<li>Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions</li>\n<li>Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions</li>\n<li>Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions</li>\n<li>Design automated response mechanisms that enforce platform policies without manual intervention</li>\n<li>Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal</li>\n<li>Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules</li>\n<li>Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity</li>\n<li>Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs</li>\n<li>Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve</li>\n</ul>\n<p><strong>Required skills and experience:</strong></p>\n<ul>\n<li>4+ years of experience in security engineering, 
anti-abuse, trust &amp; safety, or fraud detection</li>\n<li>Strong programming skills in Python and/or TypeScript for building detection systems and automation</li>\n<li>Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)</li>\n<li>Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection</li>\n<li>Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors</li>\n<li>Ability to investigate complex abuse patterns and translate findings into automated defenses</li>\n<li>Familiarity with common attack patterns: phishing infrastructure, account takeover, credential stuffing, resource abuse</li>\n<li>Clear communication skills for working across Security, Support, Legal, and Engineering teams</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)</li>\n<li>Background in fraud detection, payment abuse, or financial crime</li>\n<li>Familiarity with device fingerprinting, IP reputation, and email validation services</li>\n<li>Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)</li>\n<li>Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)</li>\n<li>Prior work with abuse reporting pipelines, trust &amp; safety tooling, or content moderation systems</li>\n</ul>\n<p><strong>Tools + Tech Stack for this role</strong></p>\n<ul>\n<li><strong>Languages:</strong> Python, TypeScript, Go, SQL</li>\n<li><strong>Data:</strong> BigQuery, Hex</li>\n<li><strong>Detection tools:</strong> Slurper, Netwatch, Stytch (device fingerprint); ClearOut (email reputation)</li>\n<li><strong>CI/CD Security:</strong> Dependabot, Snyk, SAST/SCA scanners</li>\n<li><strong>Infrastructure:</strong> GCP, Kubernetes</li>\n<li><strong>Collaboration:</strong> Linear, Slack, Zendesk (for abuse 
reports)</li>\n</ul>\n<p><strong>This role may <em>not</em> be a fit if</strong></p>\n<ul>\n<li>You prefer deep security research over building operational detection systems</li>\n<li>You want to focus on vulnerability management, pentesting, or bug bounty triage (that&#39;s our Security team)</li>\n<li>You&#39;re looking for a role with predictable, well-defined problems rather than constantly adapting to adversarial behavior</li>\n<li>You prefer working in isolation rather than partnering closely with Support, Legal, and cross-functional teams</li>\n<li>You&#39;re uncomfortable making enforcement decisions that affect real users</li>\n</ul>\n<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>\n<p><strong>Full-Time Employee Benefits Include:</strong> 💰 Competitive Salary &amp; Equity 💹 401(k) Program with a 4% match ⚕️ Health, Dental, Vision and Life Insurance 🩼 Short Term and Long Term Disability 🚼 Paid Parental, Medical, Caregiver Leave 🚗 Commuter Benefits 📱 Monthly Wellness Stipend 🧑‍💻 Autonomous Work Environment 🖥 In Office Set-Up Reimbursement 🏝 Flexible Time Off (FTO) + Holidays 🚀 Quarterly Team Gatherings ☕ In Office Amenities</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_138b24e2-2bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/5bdadf61-7955-46e8-8fdf-bd69818358b7","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190K – $240K","x-skills-required":["security engineering","anti-abuse","trust & safety","fraud detection","Python","TypeScript","SQL","BigQuery","Hex","ML/LLM-based 
classifiers","prompt injection","jailbreaking","common attack patterns","phishing infrastructure","account takeover","credential stuffing","resource abuse"],"x-skills-preferred":["experience at a platform company","fraud detection","payment abuse","financial crime","device fingerprinting","IP reputation","email validation services","CI/CD security tooling","container security","Linux internals","cloud infrastructure"],"datePosted":"2026-03-07T15:19:04.069Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security engineering, anti-abuse, trust & safety, fraud detection, Python, TypeScript, SQL, BigQuery, Hex, ML/LLM-based classifiers, prompt injection, jailbreaking, common attack patterns, phishing infrastructure, account takeover, credential stuffing, resource abuse, experience at a platform company, fraud detection, payment abuse, financial crime, device fingerprinting, IP reputation, email validation services, CI/CD security tooling, container security, Linux internals, cloud infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d2dfc6c9-22d"},"title":"Trust & Safety Operations Analyst, Ads","description":"<p><strong>Job Posting</strong></p>\n<p><strong>Trust &amp; Safety Operations Analyst, Ads</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$189K – $280K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, 
and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market 
conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>At OpenAI, our <strong>User Safety &amp; Risk Operations</strong> team is responsible for safeguarding our platform and users from abuse, fraud, and emerging threats. We operate at the intersection of product risk, operational scale, and real-time safety response—supporting users ranging from individuals to global enterprises, as well as advertisers and creators.</p>\n<p>The Ads Trust &amp; Safety Operations team protects our users, advertisers, and creators across all monetized surfaces. As OpenAI introduces new revenue-generating formats and partnerships, this team ensures these experiences remain safe, compliant, high-quality, and aligned with our broader safety standards. We partner closely with Product, Engineering, Policy, and Legal to identify emerging risks, build and mature enforcement systems, and ensure scalable, high-integrity operations.</p>\n<p><strong>About the Role</strong></p>\n<p>We’re looking for a senior operator to help build and scale Ads Trust &amp; Safety Operations at OpenAI. In this role, you’ll drive critical Ads T&amp;S workstreams end-to-end, partnering closely with Product, Policy, Engineering, Legal, and Operations to design scalable enforcement processes, strengthen detection and tooling, and ensure we’re prepared to support Ads and monetization safely at scale.</p>\n<p>You’ll operate at the intersection of strategy and execution—translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows.</p>\n<p>This role requires someone who is highly operational, excellent at execution, and comfortable driving clarity amid ambiguity. 
You should be eager to build scalable systems and processes from the ground up and work in lockstep with policy and product teams as we rapidly iterate on advertising strategies and features.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Own complex, high-impact Ads Trust &amp; Safety problem areas from strategy through execution.</li>\n</ul>\n<ul>\n<li>Design and scale operational workflows for Ads Trust &amp; Safety, including enforcement models, review processes, escalation paths, and quality frameworks.</li>\n</ul>\n<ul>\n<li>Partner closely with Product, Policy, and Engineering to translate risk and policy requirements into scalable systems, tooling, and automation.</li>\n</ul>\n<ul>\n<li>Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place.</li>\n</ul>\n<ul>\n<li>Use data to identify trends, gaps, and emerging risks across Ads surfaces; develop proposals and solutions grounded in metrics and operational signals.</li>\n</ul>\n<ul>\n<li>Contribute to the evolution of Ads Trust &amp; Safety cross-functional strategy, including how safety scales with automation, classifiers, and self-service tooling.</li>\n</ul>\n<ul>\n<li>Act as a senior XFN partner and subject-matter expert, influencing direction through strong judgment, clear communication, and credibility.</li>\n</ul>\n<p><strong>You might thrive in this role if you have:</strong></p>\n<ul>\n<li>5+ years of experience in Trust &amp; Safety, Business Integrity, Fraud &amp; Abuse, Risk Operations, or a closely related domain.</li>\n</ul>\n<ul>\n<li>Deep familiarity with ads ecosystems and advertiser risk</li>\n</ul>\n<ul>\n<li>Proven ability to independently own ambiguous, cross-functional initiatives and drive them to completion.</li>\n</ul>\n<ul>\n<li>Strong operational judgment and systems thinking—able to design solutions that scale beyond manual 
review.</li>\n</ul>\n<ul>\n<li>Experience working closely with Product, Policy, and Engineering teams on enforcement systems, tooling, or automation.</li>\n</ul>\n<ul>\n<li>Comfort using data and operational metrics to inform decisions, prioritize work, and measure impact.</li>\n</ul>\n<ul>\n<li>Excellent written and verbal communication skills, including the ability to explain complex risk tradeoffs to diverse audiences.</li>\n</ul>\n<ul>\n<li>Experience designing or partnering on automated enforcement, classifiers, or decision-support tools.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits humanity. It was founded in 2015 and has since grown to become a leading player in the AI industry.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d2dfc6c9-22d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/c9e9e3a5-fb93-4162-b876-6266016819c0","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$189K – $280K","x-skills-required":["Trust & Safety","Business Integrity","Fraud & Abuse","Risk Operations","Ads ecosystems","advertiser risk","enforcement systems","tooling","automation","data","operational metrics","communication","risk tradeoffs","automated enforcement","classifiers","decision-support tools"],"x-skills-preferred":[],"datePosted":"2026-03-06T18:33:25.010Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Trust & Safety, Business Integrity, 
Fraud & Abuse, Risk Operations, Ads ecosystems, advertiser risk, enforcement systems, tooling, automation, data, operational metrics, communication, risk tradeoffs, automated enforcement, classifiers, decision-support tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":189000,"maxValue":280000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_119df59e-db7"},"title":"Software Engineer, AI Safety","description":"<p><strong>Software Engineer, AI Safety</strong></p>\n<p><strong>Location</strong></p>\n<p>San Francisco</p>\n<p><strong>Employment Type</strong></p>\n<p>Full time</p>\n<p><strong>Department</strong></p>\n<p>Safety Systems</p>\n<p><strong>Compensation</strong></p>\n<ul>\n<li>$185K – $325K • Offers Equity</li>\n</ul>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p>More details about our benefits are available to candidates during the hiring process.</p>\n<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>\n<p><strong>About the Team</strong></p>\n<p>The Safety Systems team is dedicated to ensuring the safety, 
robustness, and reliability of AI models and their deployment in the real world.</p>\n<p>Building on the many years of our practical alignment work and applied safety efforts, Safety Systems addresses emerging safety issues and develops new fundamental solutions to enable the safe deployment of our most advanced models and future AGI, to make AI that is beneficial and trustworthy.</p>\n<p>Learn more about OpenAI’s approach to safety</p>\n<p><strong>About the Role</strong></p>\n<p>At OpenAI, we&#39;re dedicated to advancing artificial intelligence, and we know that creating a secure and reliable platform is vital to our mission. That&#39;s why we&#39;re seeking a software engineer to help us build out our trust and safety capabilities.</p>\n<p>In this role, you&#39;ll work with our entire engineering team to design and implement systems that detect and prevent abuse, promote user safety, and reduce risk across our platform. You&#39;ll be at the forefront of our efforts to ensure that the immense potential of AI is harnessed in a responsible and sustainable manner.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Architect, build, and maintain anti-abuse and content moderation infrastructure designed to protect us and end users from unwanted behavior.</li>\n</ul>\n<ul>\n<li>Work closely with our other engineers and researchers to utilize both industry standard and novel AI techniques to measure, monitor and improve AI models’ alignment to human values.</li>\n</ul>\n<ul>\n<li>Diagnose and remediate active incidents on the platform and build new tooling and infrastructure that address the root causes of system failure.</li>\n</ul>\n<p><strong>You might thrive in this role if:</strong></p>\n<ul>\n<li>You have built and run production services in a high growth, rapidly scaling environment.</li>\n</ul>\n<ul>\n<li>You can debug live issues and restore systems quickly.</li>\n</ul>\n<ul>\n<li>You have worked on content safety, fraud, or abuse, or are motivated 
and excited to work on present-day (“now-term”) AI safety.</li>\n</ul>\n<ul>\n<li>You have experience with Python or with modern languages such as C++, Rust, or Go, and are able to quickly ramp up on Python.</li>\n</ul>\n<ul>\n<li>You understand the trade-offs of capabilities and risks and navigate them to deploy novel products and features safely.</li>\n</ul>\n<ul>\n<li>You can critically assess risks of a new product or feature and devise innovative solutions to mitigate these risks without harming the product experience.</li>\n</ul>\n<ul>\n<li>You’re pragmatic. You know when to build a quick, good-enough fix, and when to invest in a robust, lasting solution.</li>\n</ul>\n<ul>\n<li>You possess strong project management skills. You are self-directed and can remove roadblocks to drive projects to completion with minimal guidance.</li>\n</ul>\n<ul>\n<li>You’ve deployed classifiers or machine learning models, or are excited to learn about modern ML infra.</li>\n</ul>\n<p><strong>Our tech stack</strong></p>\n<ul>\n<li>Our infrastructure is built on Terraform, Kubernetes, Azure, Python, Postgres, and Kafka. While we value experience with these technologies, we are primarily looking for engineers with strong technical skills who understand the fundamental problems these tools solve, and can quickly pick up new tools and frameworks.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. 
AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_119df59e-db7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/b9dee2a0-9bb3-447e-9bce-2b1bed784e5b","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$185K – $325K • Offers Equity","x-skills-required":["Python","Terraform","Kubernetes","Azure","Postgres","Kafka","C++","Rust","Go","Content safety","Fraud","Abuse","AI safety","Machine learning","Classifiers","ML infra"],"x-skills-preferred":["Project management","Debugging","System administration","Cloud computing","Containerization","DevOps","Agile development","Scrum","Kanban"],"datePosted":"2026-03-06T18:29:01.424Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Terraform, Kubernetes, Azure, Postgres, Kafka, C++, Rust, Go, Content safety, Fraud, Abuse, AI safety, Machine learning, Classifiers, ML infra, Project management, Debugging, System administration, Cloud computing, Containerization, DevOps, Agile development, Scrum, Kanban","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":185000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9a922408-ad3"},"title":"Member of 
Technical Staff, Applied Scientist","description":"<p><strong>Summary</strong></p>\n<p>Microsoft are looking for a talented Member of Technical Staff, Applied Scientist at their Mountain View office. This role sits at the heart of designing and building advanced Copilot features such as Deep Research and Web artifact generation. You&#39;ll contribute to the evolution of Copilot by developing scalable methods for evaluating feature performance, designing data collection pipelines for prompt engineering and fine-tuning, and training content classifiers that support intelligent, context-aware interactions.</p>\n<p><strong>About the Role</strong></p>\n<p>This role demands deep expertise in large language models (LLMs) and a strong architectural mindset to shape complex, user-facing systems. You&#39;ll lead evaluation efforts of models deployed within Copilot, ensuring performance aligns with product goals. You&#39;ll also design scalable systems that leverage LLMs to deliver intelligent, user-facing experiences. 
Additionally, you&#39;ll develop evaluation frameworks and metrics to assess feature performance and user impact, conduct thorough reviews of data analysis and techniques to identify gaps and areas for re-examination, and build data collection pipelines to support prompt engineering and fine-tuning of LLMs.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Architect and implement advanced Copilot features such as Deep Research and Web artifact generation</li>\n<li>Lead evaluation efforts of models deployed within Copilot, ensuring performance aligns with product goals</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>4+ years related experience (e.g., statistics, predictive analytics, research)</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Experience prompting, evaluating, and working with large language models</li>\n<li>Experience writing production-quality Python code</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Proactive collaborator who communicates clearly</li>\n<li>Thrives in fast-paced environments</li>\n<li>Takes ownership of delivering world-class consumer experiences</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary</li>\n<li>Comprehensive benefits package</li>\n<li>Opportunities for professional growth and development</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Recognition and rewards for outstanding performance</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9a922408-ad3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-applied-scientist/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"USD $119,800 – $234,700 per year","x-skills-required":["large language models","prompt engineering","fine-tuning","content classifiers","scalable systems","evaluation frameworks","metrics","data analysis","data collection pipelines"],"x-skills-preferred":["responsible AI","research sciences"],"datePosted":"2026-03-06T07:28:33.009Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"large language models, prompt engineering, fine-tuning, content classifiers, scalable systems, evaluation frameworks, metrics, data analysis, data collection pipelines, responsible AI, research sciences","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_58279d6f-fa9"},"title":"Member of Technical Staff - Machine Learning (AI Team)","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Machine Learning (AI Team) at their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. 
You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Machine Learning, you will work to create LLM models for general purpose capabilities and for products. You may be responsible for developing new methods to train core LLM capabilities (including agentive), collecting data, evaluating LLMs, creating data flywheels, tooling for LLM training/evals, writing production quality code, and creating new user-facing features. You should be comfortable creating Reinforcement Learning data, fine tuning, or training classifiers or engineering prompts to create SOTA foundation models and support Microsoft products and the Cloud API.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Own and pursue a research agenda to improve model capability and performance for agentive application.</li>\n<li>Collaborate closely with the other research and product teams, from pretraining to model hosting to unlock new model capabilities.</li>\n<li>Build robust evaluations for tracking modeling improvements.</li>\n<li>Design, implement, test, and debug code across our research stack.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Proficiency in machine learning, software engineering, and data science.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong communication and teamwork skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary range of $100,600 - $199,000 per year.</li>\n<li>Comprehensive benefits package, including health insurance, 
retirement plan, and paid time off.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_58279d6f-fa9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-machine-learning-ai-team-5/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$100,600 - $199,000 per year","x-skills-required":["machine learning","software engineering","data science","C","C++","C#","Java","JavaScript","Python"],"x-skills-preferred":["Reinforcement Learning","fine tuning","training classifiers","engineering prompts"],"datePosted":"2026-03-06T07:27:38.493Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, software engineering, data science, C, C++, C#, Java, JavaScript, Python, Reinforcement Learning, fine tuning, training classifiers, engineering prompts","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_387f6a7a-42e"},"title":"Member of Technical Staff - Machine Learning (AI Team)","description":"<p><strong>Summary</strong></p>\n<p>Microsoft AI are looking for a talented Member of Technical Staff - Machine Learning (AI Team) at their Mountain View office. 
This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>\n<p><strong>About the Role</strong></p>\n<p>As a Member of Technical Staff - Machine Learning, you will work to create LLM models for general purpose capabilities and for products. You may be responsible for developing new methods to train core LLM capabilities (including agentive), collecting data, evaluating LLMs, creating data flywheels, tooling for LLM training/evals, writing production quality code, and creating new user-facing features. You should be comfortable creating Reinforcement Learning data, fine tuning, or training classifiers or engineering prompts to create SOTA foundation models and support Microsoft products and the Cloud API.</p>\n<p><strong>Accountabilities</strong></p>\n<ul>\n<li>Own and pursue a research agenda to improve model capability and performance for agentive application.</li>\n<li>Collaborate closely with the other research and product teams, from pretraining to model hosting to unlock new model capabilities.</li>\n<li>Build robust evaluations for tracking modeling improvements.</li>\n<li>Design, implement, test, and debug code across our research stack.</li>\n</ul>\n<p><strong>The Candidate we&#39;re looking for</strong></p>\n<p><strong>Experience:</strong></p>\n<ul>\n<li>Bachelor’s Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n</ul>\n<p><strong>Technical skills:</strong></p>\n<ul>\n<li>Proficiency in machine learning, software engineering, and data science.</li>\n</ul>\n<p><strong>Personal attributes:</strong></p>\n<ul>\n<li>Strong communication and teamwork 
skills.</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Competitive salary range of $100,600 - $199,000 per year.</li>\n<li>Comprehensive benefits package, including health insurance, retirement plan, and paid time off.</li>\n<li>Opportunities for professional growth and development.</li>\n<li>Collaborative and dynamic work environment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_387f6a7a-42e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-machine-learning-ai-team-4/","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$100,600 - $199,000 per year","x-skills-required":["machine learning","software engineering","data science"],"x-skills-preferred":["Reinforcement Learning","fine tuning","training classifiers"],"datePosted":"2026-03-06T07:27:29.919Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, software engineering, data science, Reinforcement Learning, fine tuning, training classifiers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}}]}