<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>0a0c638b-b06</externalid>
      <Title>Member of Technical Staff - Technical Program Manager</Title>
      <Description><![CDATA[<p>Copilot is evolving into an agentic system that can plan, reason, and execute actions across tools, data, and services. Securing such a system cannot rely on static controls, offline review, or policy-only enforcement. It requires runtime defenses that adapt to intent, behavior, and context as the system operates.</p>
<p>Copilot Security and Privacy is responsible for building these defenses directly into Copilot. Our work focuses on new security primitives for agentic AI, including runtime misuse detection, adaptive guardrails, containment and isolation mechanisms, and feedback-driven control systems informed by offensive security research.</p>
<p>We are hiring a Principal Technical Program Manager (TPM) to own the end-to-end delivery of these capabilities. This is a deeply technical execution role for someone who can operate at the boundary of security engineering, AI research, and platform systems, turning ambiguous threat models into shippable, operable defenses deployed in a globally scaled AI product.</p>
<p>This role is not about process, governance, or coordination. The TPM is accountable for making complex systems land in production, under real-world adversarial pressure. Most security roles protect systems after they exist. This role helps define how agentic AI systems defend themselves while they operate.</p>
<p>You will shape how Copilot detects misuse, enforces boundaries, and recovers safely in real time, working directly on the mechanisms that make autonomy deployable at global scale. The impact is immediate, technical, and measurable in production behavior.</p>
<p>If you want to operate where AI systems, security engineering, and execution reality intersect, this role offers that surface area, without turning you into a policy owner or process layer.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S., or within a 25-mile commute of a non-U.S., country-specific location, are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<p>Own Delivery of In-Product AI Threat Defenses</p>
<p>Lead execution of runtime threat defense capabilities embedded directly into Copilot execution paths, not layered on externally.</p>
<p>Drive delivery of detection, prevention, and containment mechanisms that operate synchronously and adaptively as agents reason and act.</p>
<p>Ensure defenses are designed as control systems with clear signals, enforcement points, and feedback loops.</p>
<p>Translate Threat Models into Executable Systems</p>
<p>Take emerging and ambiguous agentic AI threat models, including misuse, escalation, and information-flow risks, and convert them into concrete engineering plans.</p>
<p>Partner with security engineers and researchers to translate offensive security insights and red-team findings into production features.</p>
<p>Make judgment calls about enforcement boundaries, degradation strategies, and isolation guarantees.</p>
<p>Drive Cross-Cutting Technical Execution</p>
<p>Coordinate delivery across security engineering, AI research, platform/runtime teams, and Copilot product surfaces.</p>
<p>Own dependency management, sequencing, and delivery risk for systems that are tightly coupled and cannot be built independently.</p>
<p>Resolve technical and organizational tradeoffs where ownership boundaries are unclear and failure modes are novel.</p>
<p>Ensure Operability at Runtime</p>
<p>Define what “working” means for threat defenses: detection quality, false-positive tolerance, performance impact, and blast-radius containment.</p>
<p>Ensure defenses are measurable, testable, and observable in production.</p>
<p>Lead learning loops from live incidents, near-misses, and adversarial testing back into system design.</p>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<p>Bachelor’s Degree AND 6+ years of experience in engineering, product/technical program management, data analysis, or product development OR equivalent experience.</p>
<p>3+ years of experience managing cross-functional and/or cross-team projects.</p>
<p>Preferred Qualifications:</p>
<p>Bachelor’s Degree AND 12+ years of experience in engineering, product/technical program management, data analysis, or product development OR equivalent experience.</p>
<p>Proven ability to lead execution in high-ambiguity environments where requirements, threats, and system behavior evolve rapidly.</p>
<p>Solid systems thinking: ability to reason about execution paths, failure modes, and adversarial behavior.</p>
<p>Track record of making sound technical tradeoffs and shipping durable solutions without relying on heavy process.</p>
<p>Background in security engineering, distributed systems, applied research, or ML systems prior to or alongside TPM work.</p>
<p>Experience delivering runtime detection, abuse prevention, or adaptive enforcement systems.</p>
<p>Familiarity with agentic AI systems, LLM-based products, or non-deterministic execution environments.</p>
<p>Experience partnering closely with offensive security or red-team functions.</p>
<p>Demonstrated ability to translate research, prototypes, or threat models into production-grade systems.</p>
<p>Solid analytical skills, including working with telemetry, signals, and feedback loops.</p>
<p>#MicrosoftAI #MAIDPS</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>security engineering, AI research, platform systems, runtime misuse detection, adaptive guardrails, containment and isolation mechanisms, feedback-driven control systems, offensive security research, agentic AI systems, LLM-based products, non-deterministic execution environments, runtime detection, abuse prevention, adaptive enforcement systems, telemetry, signals, feedback loops, distributed systems, applied research, ML systems, offensive security, red-team functions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>139900</Compensationmin>
      <Compensationmax>274800</Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-technical-program-manager-4/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7a38b2e0-5ff</externalid>
      <Title>Member of Technical Staff - Technical Program Manager</Title>
      <Description><![CDATA[<p>Copilot is evolving into an agentic system that can plan, reason, and execute actions across tools, data, and services. Securing such a system cannot rely on static controls, offline review, or policy-only enforcement. It requires runtime defenses that adapt to intent, behavior, and context as the system operates.</p>
<p>Copilot Security and Privacy is responsible for building these defenses directly into Copilot. Our work focuses on new security primitives for agentic AI, including runtime misuse detection, adaptive guardrails, containment and isolation mechanisms, and feedback-driven control systems informed by offensive security research.</p>
<p>We are hiring a Principal Technical Program Manager (TPM) to own the end-to-end delivery of these capabilities. This is a deeply technical execution role for someone who can operate at the boundary of security engineering, AI research, and platform systems, turning ambiguous threat models into shippable, operable defenses deployed in a globally scaled AI product.</p>
<p>This role is not about process, governance, or coordination. The TPM is accountable for making complex systems land in production, under real-world adversarial pressure. Most security roles protect systems after they exist. This role helps define how agentic AI systems defend themselves while they operate.</p>
<p>You will shape how Copilot detects misuse, enforces boundaries, and recovers safely in real time, working directly on the mechanisms that make autonomy deployable at global scale. The impact is immediate, technical, and measurable in production behavior.</p>
<p>If you want to operate where AI systems, security engineering, and execution reality intersect, this role offers that surface area, without turning you into a policy owner or process layer.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S., or within a 25-mile commute of a non-U.S., country-specific location, are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<p>Own Delivery of In-Product AI Threat Defenses</p>
<p>Lead execution of runtime threat defense capabilities embedded directly into Copilot execution paths, not layered on externally.</p>
<p>Drive delivery of detection, prevention, and containment mechanisms that operate synchronously and adaptively as agents reason and act.</p>
<p>Ensure defenses are designed as control systems with clear signals, enforcement points, and feedback loops.</p>
<p>Translate Threat Models into Executable Systems</p>
<p>Take emerging and ambiguous agentic AI threat models, including misuse, escalation, and information-flow risks, and convert them into concrete engineering plans.</p>
<p>Partner with security engineers and researchers to translate offensive security insights and red-team findings into production features.</p>
<p>Make judgment calls about enforcement boundaries, degradation strategies, and isolation guarantees.</p>
<p>Drive Cross-Cutting Technical Execution</p>
<p>Coordinate delivery across security engineering, AI research, platform/runtime teams, and Copilot product surfaces.</p>
<p>Own dependency management, sequencing, and delivery risk for systems that are tightly coupled and cannot be built independently.</p>
<p>Resolve technical and organizational tradeoffs where ownership boundaries are unclear and failure modes are novel.</p>
<p>Ensure Operability at Runtime</p>
<p>Define what “working” means for threat defenses: detection quality, false-positive tolerance, performance impact, and blast-radius containment.</p>
<p>Ensure defenses are measurable, testable, and observable in production.</p>
<p>Lead learning loops from live incidents, near-misses, and adversarial testing back into system design.</p>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<p>Bachelor’s Degree AND 6+ years of experience in engineering, product/technical program management, data analysis, or product development OR equivalent experience.</p>
<p>3+ years of experience managing cross-functional and/or cross-team projects.</p>
<p>Preferred Qualifications:</p>
<p>Bachelor’s Degree AND 12+ years of experience in engineering, product/technical program management, data analysis, or product development OR equivalent experience.</p>
<p>Proven ability to lead execution in high-ambiguity environments where requirements, threats, and system behavior evolve rapidly.</p>
<p>Solid systems thinking: ability to reason about execution paths, failure modes, and adversarial behavior.</p>
<p>Track record of making sound technical tradeoffs and shipping durable solutions without relying on heavy process.</p>
<p>Background in security engineering, distributed systems, applied research, or ML systems prior to or alongside TPM work.</p>
<p>Experience delivering runtime detection, abuse prevention, or adaptive enforcement systems.</p>
<p>Familiarity with agentic AI systems, LLM-based products, or non-deterministic execution environments.</p>
<p>Experience partnering closely with offensive security or red-team functions.</p>
<p>Demonstrated ability to translate research, prototypes, or threat models into production-grade systems.</p>
<p>Solid analytical skills, including working with telemetry, signals, and feedback loops.</p>
<p>#MicrosoftAI #MAIDPS</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>security engineering, agentic AI, runtime misuse detection, adaptive guardrails, containment and isolation mechanisms, feedback-driven control systems, offensive security research, cross-functional project management, cross-team project management, distributed systems, applied research, ML systems, runtime detection, abuse prevention, adaptive enforcement systems, agentic AI systems, LLM-based products, non-deterministic execution environments, offensive security, red-team functions, production-grade systems, telemetry, signals, feedback loops</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>139900</Compensationmin>
      <Compensationmax>274800</Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-technical-program-manager-5/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>acd79b88-056</externalid>
      <Title>Member of Technical Staff - Technical Program Manager</Title>
      <Description><![CDATA[<p>Copilot is evolving into an agentic system that can plan, reason, and execute actions across tools, data, and services. Securing such a system cannot rely on static controls, offline review, or policy-only enforcement. It requires runtime defenses that adapt to intent, behavior, and context as the system operates.</p>
<p>Copilot Security and Privacy is responsible for building these defenses directly into Copilot. Our work focuses on new security primitives for agentic AI, including runtime misuse detection, adaptive guardrails, containment and isolation mechanisms, and feedback-driven control systems informed by offensive security research.</p>
<p>We are hiring a Principal Technical Program Manager (TPM) to own the end-to-end delivery of these capabilities. This is a deeply technical execution role for someone who can operate at the boundary of security engineering, AI research, and platform systems, turning ambiguous threat models into shippable, operable defenses deployed in a globally scaled AI product.</p>
<p>This role is not about process, governance, or coordination. The TPM is accountable for making complex systems land in production, under real-world adversarial pressure. Most security roles protect systems after they exist. This role helps define how agentic AI systems defend themselves while they operate.</p>
<p>You will shape how Copilot detects misuse, enforces boundaries, and recovers safely in real time, working directly on the mechanisms that make autonomy deployable at global scale. The impact is immediate, technical, and measurable in production behavior.</p>
<p>If you want to operate where AI systems, security engineering, and execution reality intersect, this role offers that surface area, without turning you into a policy owner or process layer.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S., or within a 25-mile commute of a non-U.S., country-specific location, are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>security engineering, distributed systems, applied research, ML systems, runtime detection, abuse prevention, adaptive enforcement systems, agentic AI systems, LLM-based products, non-deterministic execution environments, offensive security, red-team functions, research, prototypes, threat models, production-grade systems, telemetry, signals, feedback loops</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. Its products include the Windows operating system, Office software suite, and Azure cloud computing platform.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>139900</Compensationmin>
      <Compensationmax>274800</Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-technical-program-manager-6/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b7959209-0c2</externalid>
      <Title>Safeguards Policy Analyst, Fraud &amp; Scams</Title>
      <Description><![CDATA[<p>As a Safeguards Policy Analyst focused on Fraud &amp; Scams, you will design, build, and execute enforcement workflows that detect and mitigate fraud and scam-related harms on Anthropic&#39;s products.</p>
<p>This role sits within the Integrity &amp; Authenticity (I&amp;A) team, where you will function both as a policy owner and work closely with threat investigative and enforcement teams.</p>
<p>Key responsibilities include drafting, maintaining, and iterating on Fraud &amp; Scams policies; conducting regular structured policy reviews; developing detailed threat models for fraud and scam vectors; and staying current on the fraud and scam landscape.</p>
<p>You will also design and architect automated enforcement systems and human review workflows that scale effectively while maintaining high precision and recall.</p>
<p>Additionally, you will serve as the primary policy point of contact for ML and Engineering teams developing fraud detection classifiers, working to translate policy intent into technical artifacts and training signals.</p>
<p>If you have experience working as a Trust &amp; Safety professional with a focused background in fraud, scams, or financial crime, particularly in a tech platform or AI context, you may be a good fit for this role.</p>
<p>Preferred qualifications include experience at a major technology platform, financial institution, or fraud intelligence firm in a policy, operations, or investigative capacity, familiarity with the generative AI risk landscape, and background in threat intelligence, financial crimes compliance (AML/KYC), or law enforcement focused on cyber-enabled fraud.</p>
<p>The annual compensation range for this role is $245,000-$285,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$245,000-$285,000 USD</Salaryrange>
      <Skills>policy design, fraud and scam analysis, threat modeling, automated enforcement systems, human review workflows, ML and Engineering collaboration, generative AI risk landscape, threat intelligence, financial crimes compliance, law enforcement</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.co.png</Employerlogo>
      <Employerdescription>Anthropic creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.co/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>245000</Compensationmin>
      <Compensationmax>285000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5174857008</Applyto>
      <Location>Remote-Friendly (Travel-Required) | San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d2dfc6c9-22d</externalid>
      <Title>Trust &amp; Safety Operations Analyst, Ads</Title>
      <Description><![CDATA[<p><strong>Trust &amp; Safety Operations Analyst, Ads</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$189K – $280K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>At OpenAI, our <strong>User Safety &amp; Risk Operations</strong> team is responsible for safeguarding our platform and users from abuse, fraud, and emerging threats. We operate at the intersection of product risk, operational scale, and real-time safety response—supporting users ranging from individuals to global enterprises, as well as advertisers and creators.</p>
<p>The Ads Trust &amp; Safety Operations team protects our users, advertisers, and creators across all monetized surfaces. As OpenAI introduces new revenue-generating formats and partnerships, this team ensures these experiences remain safe, compliant, high-quality, and aligned with our broader safety standards. We partner closely with Product, Engineering, Policy, and Legal to identify emerging risks, build and mature enforcement systems, and ensure scalable, high-integrity operations.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a senior operator to help build and scale Ads Trust &amp; Safety Operations at OpenAI. In this role, you’ll drive critical Ads T&amp;S workstreams end-to-end, partnering closely with Product, Policy, Engineering, Legal, and Operations to design scalable enforcement processes, strengthen detection and tooling, and ensure we’re prepared to support Ads and monetization safely at scale.</p>
<p>You’ll operate at the intersection of strategy and execution—translating ambiguity into structured programs, identifying operational risks, and driving measurable improvements across systems and workflows.</p>
<p>This role requires someone who is highly operational, excellent at execution, and comfortable driving clarity amid ambiguity. You should be eager to build scalable systems and processes from the ground up and work in lockstep with policy and product teams as we rapidly iterate on advertising strategies and features.</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Own complex, high-impact Ads Trust &amp; Safety problem areas from strategy through execution.</li>
<li>Design and scale operational workflows for Ads Trust &amp; Safety, including enforcement models, review processes, escalation paths, and quality frameworks.</li>
<li>Partner closely with Product, Policy, and Engineering to translate risk and policy requirements into scalable systems, tooling, and automation.</li>
<li>Drive operational readiness for new Ads and monetization launches, features, and markets, identifying risks early and ensuring appropriate mitigations are in place.</li>
<li>Use data to identify trends, gaps, and emerging risks across Ads surfaces; develop proposals and solutions grounded in metrics and operational signals.</li>
<li>Contribute to the evolution of Ads Trust &amp; Safety cross-functional strategy, including how safety scales with automation, classifiers, and self-service tooling.</li>
<li>Act as a senior XFN partner and subject-matter expert, influencing direction through strong judgment, clear communication, and credibility.</li>
</ul>
<p><strong>You might thrive in this role if you have:</strong></p>
<ul>
<li>5+ years of experience in Trust &amp; Safety, Business Integrity, Fraud &amp; Abuse, Risk Operations, or a closely related domain.</li>
<li>Deep familiarity with ads ecosystems and advertiser risk.</li>
<li>Proven ability to independently own ambiguous, cross-functional initiatives and drive them to completion.</li>
<li>Strong operational judgment and systems thinking, with the ability to design solutions that scale beyond manual review.</li>
<li>Experience working closely with Product, Policy, and Engineering teams on enforcement systems, tooling, or automation.</li>
<li>Comfort using data and operational metrics to inform decisions, prioritize work, and measure impact.</li>
<li>Excellent written and verbal communication skills, including the ability to explain complex risk tradeoffs to diverse audiences.</li>
<li>Experience designing or partnering on automated enforcement, classifiers, or decision-support tools.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits humanity. It was founded in 2015 and has since grown to become a leading player in the AI industry.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$189K – $280K</Salaryrange>
      <Skills>Trust &amp; Safety, Business Integrity, Fraud &amp; Abuse, Risk Operations, Ads ecosystems, advertiser risk, enforcement systems, tooling, automation, data, operational metrics, communication, risk tradeoffs, automated enforcement, classifiers, decision-support tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits humanity. It was founded in 2015 and has since grown to become a leading player in the AI industry.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>189000</Compensationmin>
      <Compensationmax>280000</Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/c9e9e3a5-fb93-4162-b876-6266016819c0</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>ec06a431-7fa</externalid>
      <Title>Software Engineer - Privacy &amp; Compliance</Title>
      <Description><![CDATA[<p><strong>Software Engineer - Privacy &amp; Compliance</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco; Seattle</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p>We’re looking for a <strong>Software Engineer</strong> to architect and build backend systems that enforce data privacy and automate compliance at scale. You’ll work closely with product, infrastructure, security, and legal teams to embed privacy-by-design into our data and access layers.</p>
<p>This is a hands-on, high-impact role for an experienced engineer who is passionate about protecting user data while enabling innovation.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Design, build, and operate backend services that enforce policy-driven data access, lifecycle controls, and privacy protections.</li>
<li>Develop distributed authorization and identity-aware enforcement mechanisms integrated directly into data services and control planes.</li>
<li>Implement auditability, policy hooks, and enforcement observability to ensure compliance is continuously verifiable.</li>
<li>Partner with Security, Legal, and Compliance to convert privacy requirements into scalable technical designs and developer-friendly APIs.</li>
<li>Harden data platforms and backend services through schema-level controls and data handling constraints by default.</li>
<li>Collaborate with infrastructure teams to ensure consistent enforcement across systems while minimizing duplicated implementations.</li>
<li>Contribute patterns, libraries, and education that elevate trustworthy data access patterns across the organization.</li>
</ul>
<p><strong>You Might Thrive in This Role If You Have</strong></p>
<ul>
<li><strong>5+ years of industry experience</strong> building and operating backend or infrastructure systems in production.</li>
<li><strong>Strong software engineering fundamentals</strong>, with fluency in at least one major programming language (e.g., Python, Go, Rust, C++, Java).</li>
<li>Experience with distributed authorization, RBAC/ACL systems, encryption-based access, or policy engines.</li>
<li><strong>Familiarity with global privacy regulations</strong> and their architectural implications.</li>
<li><strong>Ability to influence and collaborate</strong> with teams across legal, compliance, product, and engineering.</li>
<li>A <strong>bias toward practical, impactful solutions</strong> that balance privacy protections with product needs.</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience with cloud platforms (e.g., Azure, AWS, GCP) and large-scale data systems.</li>
<li>Background in security engineering, privacy engineering, or data governance.</li>
<li>Experience with control-plane or metadata-driven enforcement systems.</li>
<li>Exposure to data platforms or ML infrastructure.</li>
<li>Prior experience in a regulated or highly sensitive data environment.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>Python, Go, Rust, C++, Java, Distributed authorization, RBAC/ACL systems, Encryption-based access, Policy engines, Global privacy regulations, Cloud platforms, Large-scale data systems, Security engineering, Privacy engineering, Data governance, Control-plane or metadata-driven enforcement systems, Data platforms, ML infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/23b158fe-709e-4bf5-856c-d10953d32f60</Applyto>
      <Location>San Francisco, Seattle</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>