<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>231ce599-c30</externalid>
      <Title>Staff Machine Learning Engineer, Content Quality Signals</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Staff Machine Learning Engineer to join our Content Understanding team. As a key member of this team, you will lead modeling strategy for content understanding, including architecture selection, training approach, and evaluation methodology. You will design and ship production models that generate content signals, such as embeddings and classifications, used across multiple product surfaces.</p>
<p>The ideal candidate will have:</p>
<ul>
<li>Significant industry experience building software and ML pipelines/systems, including technical leadership.</li>
<li>Strong proficiency in Python and at least one ML stack such as PyTorch or TensorFlow, plus solid software engineering fundamentals.</li>
<li>Proven experience training and deploying ML models to production, including model versioning, rollouts, monitoring, and retraining strategies.</li>
<li>Deep hands-on experience in content understanding domains such as computer vision, NLP, and multimodal/embedding models.</li>
<li>Experience working with large-scale datasets and distributed compute.</li>
<li>The ability to influence across teams and drive ambiguous problem areas to measurable outcomes.</li>
<li>Strong applied skills in evaluation and experimentation, including defining metrics, offline/online alignment, A/B testing, debugging regressions, and model quality analysis.</li>
</ul>
<p>The role is ideal for a senior modeler who also enjoys developing and productionizing models and leading technical direction across teams.</p>
<p>In addition to the above responsibilities, the successful candidate will be expected to:</p>
<ul>
<li>Collaborate with infra/platform teams to ensure scalable, reliable training/serving (latency, cost, observability, rollout safety).</li>
<li>Partner with signal-consuming teams (ranking, retrieval, integrity, ads) to define signal contracts, adoption patterns, and success metrics.</li>
<li>Own the full ML lifecycle: data/labeling strategy (human labels + weak supervision), training pipelines, offline evaluation, online experimentation, deployment, and monitoring/retraining.</li>
<li>Provide technical leadership through design reviews, mentoring, and raising the quality bar for modeling and ML engineering practices.</li>
</ul>
<p>Nice to have: experience with Cursor, Copilot, Codex, or similar AI coding assistants for development, debugging, testing, and refactoring; familiarity with LLM-powered productivity tools for documentation search, experiment analysis, SQL/data exploration, and engineering workflow acceleration.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$189,308-$389,753 USD</Salaryrange>
      <Skills>Python, PyTorch, TensorFlow, Computer Vision, NLP, Multimodal Embedding Models, Large-Scale Datasets, Distributed Compute, Cursor, Copilot, Codex, LLM-Powered Productivity Tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save images and videos to virtual pinboards.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7531060</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>77ff2013-8f9</externalid>
      <Title>Senior Product Manager, Context Engineering</Title>
      <Description><![CDATA[<p>ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. As a Senior Product Manager, Context Engineering, you&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins.</p>
<p>With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen, fast.</p>
<p><strong>The Opportunity:</strong></p>
<p>ZoomInfo built the industry&#39;s most sophisticated GTM data acquisition infrastructure. Now we&#39;re applying that same rigor to context engineering: the emerging discipline that determines whether AI systems deliver transformative value or incremental improvement.</p>
<p>This role architects the context layer powering our AI intelligence across Copilot, GTM Studio, and MarketingOS. You&#39;ll transform how ZoomInfo&#39;s agentic workflows access, compress, and deliver precisely the right information at exactly the right moment.</p>
<p>The impact is organization-wide: every AI interaction, every intelligent recommendation, every autonomous agent action depends on the context infrastructure you’ll build.</p>
<p>We&#39;ve transitioned to AI-first product thinking company-wide. The context pipelines exist but remain nascent, creating a rare opportunity to define architectural patterns and platform standards that compound value across multiple product teams in the years to come.</p>
<p><strong>What You&#39;ll Do:</strong></p>
<p><strong>Architect Context Acquisition Pipelines</strong></p>
<p>Design and optimize how ZoomInfo retrieves, transforms, and delivers context from our semantic data layer, memory systems, and data producers. You&#39;ll balance retrieval quality against latency and cost constraints, implementing hybrid search strategies, intelligent caching, and context compression techniques that maintain information density while respecting token budgets.</p>
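<p>The pipeline described above, blending retrieval signals and packing context under a token budget, can be pictured with a minimal sketch. This is purely illustrative, not ZoomInfo&#39;s stack: the function names are hypothetical, and whitespace splitting stands in for a real tokenizer.</p>

```python
# Hypothetical sketch of a context-acquisition step: score candidate
# passages with a hybrid of lexical overlap and (stubbed) vector
# similarity, then greedily pack the best ones under a token budget.

def lexical_score(query: str, passage: str) -> float:
    """Fraction of query terms that also appear in the passage."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def pack_context(query, passages, vector_scores, budget_tokens, alpha=0.5):
    """Rank by blended score, then keep passages that fit the budget."""
    ranked = sorted(
        passages,
        key=lambda p: alpha * lexical_score(query, p)
        + (1 - alpha) * vector_scores.get(p, 0.0),
        reverse=True,
    )
    picked, used = [], 0
    for p in ranked:
        cost = len(p.split())  # stand-in for a real token count
        if used + cost <= budget_tokens:
            picked.append(p)
            used += cost
    return picked

passages = [
    "Acme Corp raised a Series B in 2024",
    "Acme Corp headquarters are in Boston",
    "Unrelated note about office plants and watering schedules",
]
ctx = pack_context(
    "Where is Acme Corp based?",
    passages,
    vector_scores={passages[1]: 0.9, passages[0]: 0.4},  # pretend embeddings
    budget_tokens=12,
)
# Only the highest-scoring passage fits the 12-token budget.
```

<p>The interesting trade-off is in <code>alpha</code> and the budget: a tighter budget forces harder ranking decisions, which is exactly where compression and reranking earn their keep.</p>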
<p><strong>Own the Context Layer Platform</strong></p>
<p>Build infrastructure serving multiple product teams (Copilot, GTM Studio, MarketingOS) as internal customers. Establish API contracts, developer experience standards, and integration patterns that accelerate feature velocity.</p>
<p>Maintain the delicate balance between providing flexible building blocks and opinionated solutions that encode best practices.</p>
<p><strong>Drive Quality Through Measurement</strong></p>
<p>Implement evaluation frameworks using RAGAS metrics and custom benchmarks. Monitor retrieval precision, context relevance, hallucination rates, and system performance in production.</p>
<p>Translate quality signals into architectural improvements, working closely with ML engineers to iterate on embedding models, reranking strategies, and retrieval algorithms.</p>
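<p>As a stand-in for the evaluation frameworks mentioned above, retrieval quality is often summarized with precision@k and recall@k over labeled query/relevant-document pairs. The sketch below uses made-up document IDs; frameworks like RAGAS layer LLM-judged metrics (faithfulness, context relevance) on top of basics like these.</p>

```python
# Minimal retrieval-evaluation sketch: precision@k and recall@k over a
# ranked result list and a set of gold relevant document IDs.

def precision_at_k(retrieved, relevant, k):
    """Share of the top-k results that are actually relevant."""
    top = retrieved[:k]
    hits = sum(1 for doc_id in top if doc_id in relevant)
    return hits / k if k else 0.0

def recall_at_k(retrieved, relevant, k):
    """Share of all relevant documents recovered in the top k."""
    top = retrieved[:k]
    hits = sum(1 for doc_id in top if doc_id in relevant)
    return hits / len(relevant) if relevant else 0.0

retrieved = ["d3", "d1", "d9", "d4"]  # ranked IDs from the retriever
relevant = {"d1", "d4", "d7"}         # hypothetical gold labels

p = precision_at_k(retrieved, relevant, k=3)  # one hit (d1) in top 3
r = recall_at_k(retrieved, relevant, k=3)     # 1 of 3 relevant found
```

<p>Tracking these per query segment, rather than as a single average, is what makes it possible to trace a quality regression back to a specific embedding or reranking change.</p>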
<p><strong>Navigate Emerging Research</strong></p>
<p>Context engineering evolves weekly. You&#39;ll continuously evaluate innovations (GraphRAG for multi-hop reasoning, test-time compute scaling, multimodal retrieval, compression techniques), determining which advances warrant production investment versus which remain academic curiosities.</p>
<p>Bring external best practices to ZoomInfo while contributing learnings back to the broader community.</p>
<p><strong>Orchestrate Cross-Functional Execution</strong></p>
<p>Translate between three distinct worlds: ML engineers optimizing retrieval algorithms, platform engineers building scalable infrastructure, and product teams shipping customer features.</p>
<p>Establish communication cadences, prioritization frameworks, and decision-making processes that balance urgent requests against strategic platform development.</p>
<p><strong>What You’ll Bring:</strong></p>
<ul>
<li>4-6 years of product management experience with 2+ years in ML/AI infrastructure</li>
<li>Direct experience with production RAG systems, vector databases, semantic search, or context management</li>
<li>Experience with graph databases (e.g., Neo4j)</li>
<li>Track record of building platform products serving multiple internal or external customers</li>
<li>Familiarity with context compression, embedding models, and retrieval evaluation frameworks</li>
<li>History of defining product vision in nascent technical domains where best practices are still emerging</li>
</ul>
<p><strong>Who You Are:</strong></p>
<p><strong>Technical Foundation</strong></p>
<p>Expert-level understanding of RAG system architecture: you can discuss embedding dimensionality trade-offs, vector database indexing strategies, and reranking approaches with depth.</p>
<p>You&#39;ve built or significantly contributed to production retrieval systems, not just managed them at arm&#39;s length.</p>
<p>Python and SQL proficiency enables you to review code, analyze retrieval issues, and prototype solutions for concept validation.</p>
<p><strong>Platform Product Mindset</strong></p>
<p>Experience building infrastructure products where internal engineering teams are your customers.</p>
<p>You measure success through downstream product velocity improvements and developer satisfaction scores, not just uptime metrics.</p>
<p>You understand platform economics: how each additional team using your infrastructure increases its value through shared learnings and amortized costs.</p>
<p><strong>Intellectual Velocity</strong></p>
<p>You read recent research papers from arXiv, ACL, NeurIPS.</p>
<p>You prototype emerging techniques to understand their practical constraints.</p>
<p>You maintain strong opinions weakly held, updating your architectural assumptions as evidence accumulates.</p>
<p>The discipline moves too fast for static expertise; continuous learning is non-negotiable.</p>
<p><strong>Strategic Communication</strong></p>
<p>You translate between technical depth and business impact fluently.</p>
<p>You can explain to executives why implementing GraphRAG takes 6 months but unlocks $10M in product capabilities.</p>
<p>You can communicate to engineers why business constraints require shipping &#39;good enough&#39; in 3 weeks rather than &#39;optimal&#39; in 3 months.</p>
<p>You influence without formal authority through data, clear reasoning, and earned credibility.</p>
<p><strong>The Environment:</strong></p>
<p><strong>Reporting &amp; Collaboration</strong></p>
<p>Report to the Senior Product Director for Context Engineering, Semantic Data Layer, and Agentic Memory within ZoomInfo&#39;s Intelligence team.</p>
<p>Work alongside PMs responsible for signals and ML scoring/recommendation models.</p>
<p>Together, you ensure our agentic workflows fill context windows with high-quality, information-dense content exactly when needed.</p>
<p><strong>Pace &amp; Problems</strong></p>
<p>Fast-moving engineering team that understands the space.</p>
<p>Company-wide AI adoption push creates both urgency and opportunity.</p>
<p>Expect interesting problems: How do we maintain sub-200ms retrieval latency at scale?</p>
<p>When does GraphRAG justify its indexing cost?</p>
<p>How do we balance context freshness with cache efficiency?</p>
<p>You&#39;ll shape answers that become architectural patterns across the organization.</p>
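<p>The freshness-versus-cache-efficiency question above can be made concrete with a toy TTL cache. This is a hypothetical sketch, not ZoomInfo&#39;s implementation: longer TTLs raise the hit rate (cheaper, faster) but risk serving stale context to an agent.</p>

```python
# Toy TTL cache illustrating the freshness/efficiency trade-off: entries
# past their time-to-live are treated as misses, forcing a refetch.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:  # stale: evict and miss
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=0.05)
cache.put("account:42", {"name": "Acme Corp"})
fresh = cache.get("account:42")  # hit while within TTL
time.sleep(0.06)
stale = cache.get("account:42")  # miss after expiry -> refetch upstream
```

<p>In practice the TTL would vary per data source: slow-changing firmographics tolerate long TTLs, while intent signals may need near-real-time invalidation.</p>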
<p><strong>Impact</strong></p>
<p>Define a nascent discipline at a company that&#39;s already AI-first in product thinking and organizational structure.</p>
<p>Your architectural decisions compound: every improvement to context quality multiplies across Copilot, GTM Studio, MarketingOS, and future products we haven&#39;t imagined yet.</p>
<p>This is infrastructure work with direct line-of-sight to customer value.</p>
<p>#LI-PS1 #LI-remote</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$89,200-$133,800 USD</Salaryrange>
      <Skills>Product Management, ML/AI Infrastructure, RAG Systems, Vector Databases, Semantic Search, Context Management, Graph Databases, Context Compression, Embedding Models, Retrieval Evaluation Frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a go-to-market intelligence platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8206116002</Applyto>
      <Location>Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4d14bef3-77e</externalid>
      <Title>Staff Software Engineer - AI Applications</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products. Plaid powers the tools millions of people rely on to live a healthier financial life.</p>
<p>We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use. Plaid&#39;s network covers 12,000 financial institutions across the US, Canada, UK and Europe.</p>
<p><strong>The AI Applications Team</strong></p>
<p>You will have the opportunity to join as one of the founding members of this newly formed team, dedicated to consolidating and rapidly scaling our successful bets so far, and to grow with the team in our quest to accelerate Plaid&#39;s transformation into an AI-first company.</p>
<p>In this role you will lead projects that enable and scale our business with our largest AI customers and partners, starting with personal finance use cases and expanding into many others; examples include:</p>
<ul>
<li>Develop and evolve the preferred integration pattern for Plaid with AI providers - from API adaptations to building the official Plaid MCP Servers, and beyond</li>
<li>Redefine how Plaid&#39;s consumer link experience embeds into conversational interfaces in the most seamless way</li>
<li>Architect the trust layer for the future of agentic commerce that will become the industry standard</li>
</ul>
<p>Additionally you will be expected to scale and extend our existing successful bets on AI-powered customer experience; examples include:</p>
<ul>
<li>Make the next step-function improvement in our homegrown customer support agent</li>
<li>Land our multi-turn and multi-agent system that powers a truly delightful experience for our customers; define how to scalably run offline evaluation for complex multi-turn, open-ended tasks; research and prototype how reinforcement learning from human feedback (RLHF) can power an insights flywheel; pioneer the architecture for customer-specific long-term memory; and more</li>
<li>Extend our agentic system to support other critical parts of the customer journey, starting with the areas with the highest ROI - top-of-funnel product recommendation, customer onboarding and risk diligence, customer activation and assistance for faster productionization, as well as upselling and cross-selling of Plaid products</li>
</ul>
<p>You will have a front-row seat to all the latest industry developments. Over time, with the skills and experience you develop and hone on this team, you can become an influential voice in defining where the intersection of AI and fintech is heading longer term.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Build across the stack. Design, develop, and maintain scalable backend services and APIs, as well as intuitive, high-quality frontend applications that bring those systems to life.</li>
<li>Work with other AI engineers, software engineers, and machine learning engineers to architect, design, and implement GenAI-powered products and features</li>
<li>Collaborate across functions to understand user needs, and propose and implement AI-powered solutions where they&#39;re expected to have the highest impact</li>
<li>Design and execute rapid experiments to push the boundaries on potential business impact from emerging AI capabilities, with a focus on minimal viable testing approaches</li>
<li>Balance creative exploration of possibilities with rigorous evaluation of technical feasibility, product potential, and business impact</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building backend services and working with microservices or service-oriented architectures</li>
<li>Strong working knowledge of HTML, CSS, JavaScript, and modern frontend frameworks or libraries, with comfort building user-facing experiences</li>
<li>Hands-on experience using LLMs to build products and ship them to production, iterating with real user feedback - including but not limited to:
<ul>
<li>Prompt engineering</li>
<li>Fine-tuning</li>
<li>Retrieval-augmented generation (RAG)</li>
<li>Semantic search</li>
<li>Vector databases and embedding models</li>
<li>Agent orchestration frameworks</li>
<li>Evaluation and monitoring frameworks for open-ended tasks</li>
<li>Streaming and SSE</li>
<li>Common UX and design patterns for GenAI-powered products</li>
</ul>
</li>
<li>Strong debugging and monitoring experience for production systems</li>
<li>Ability to deeply understand customer and user needs through user research and rapid experimentation - be your own technical PM</li>
<li>Ability to balance divergent thinking (exploring possibilities) with convergent thinking (evaluating feasibility), which is critical for driving 0 -&gt; 1 projects</li>
<li>Extreme curiosity about and passion for working in the GenAI applications space</li>
</ul>
<p><strong>Nice-to-Haves</strong></p>
<ul>
<li>Experience training and/or serving ML models in production, or fine-tuning LLMs for domain-specific use cases</li>
<li>Comfortable operating in privacy/PII-sensitive environments and applying compliance mitigations</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$228,360-$369,800 per year</Salaryrange>
      <Skills>backend services, microservices, service-oriented architectures, HTML, CSS, JavaScript, modern frontend frameworks, LLMs, prompt engineering, fine-tuning, retrieval augmented generation, semantic search, vector database, embedding models, agent orchestration framework, evaluation and monitoring framework, streaming, SSE, UX and design patterns, debugging, monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid is a financial technology company that provides tools and experiences for developers to create their own products. It was founded in 2013 and is headquartered in San Francisco.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/a6bf6eeb-6486-4e45-a3b2-e712f32523d3</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>75ad55ca-61b</externalid>
      <Title>Research Engineer / Research Scientist - Foundations Retrieval IC</Title>
      <Description><![CDATA[<p><strong>Job Posting</strong></p>
<p><strong>Research Engineer / Research Scientist - Foundations Retrieval IC</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Research</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$445K – $555K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p>More details about our benefits are available to candidates during the hiring process.</p>
<p>This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.</p>
<p><strong>About the Team</strong></p>
<p>The Foundations Research team works on high-risk, high-reward ideas that could shape the next decade of AI. Our goal is to advance the science and data that enable our training and scaling efforts, with a particular focus on future frontier models. We push the boundaries of data, scaling laws, optimization techniques, model architectures, and efficiency improvements to propel our science.</p>
<p><strong>About the Role</strong></p>
<p>We’re looking for a researcher focused on our embedding retrieval efforts. You’ll work with a team of world-class research scientists and engineers developing foundational technology that enables models to retrieve and condition on the right information, at the right time. This includes designing new embedding training objectives, scalable vector store architectures, and dynamic indexing methods.</p>
<p>This work will support retrieval across many OpenAI products and internal research efforts, with opportunities for scientific publication and deep technical impact.</p>
<p>This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Tackle embedding models and retrieval systems optimized for grounding, relevance, and adaptive reasoning.</li>
<li>Collaborate with a team of researchers and engineers building end-to-end infrastructure for training, evaluating, and integrating embeddings into frontier models.</li>
<li>Drive innovation in dense, sparse, and hybrid representation techniques, metric learning, and learning-to-retrieve systems.</li>
<li>Collaborate closely with Pretraining, Inference, and other Research teams to integrate retrieval throughout the model lifecycle.</li>
<li>Contribute to OpenAI’s long-term vision of AI systems with memory and knowledge access capabilities rooted in learned representations.</li>
</ul>
<p><strong>You Might Thrive in This Role If You Have</strong></p>
<ul>
<li>Proven experience leading high-performance teams of researchers or engineers in ML infrastructure or foundational research.</li>
<li>Deep technical expertise in representation learning, embedding models, or vector retrieval systems.</li>
<li>Familiarity with transformer-based LLMs and how embedding spaces can interact with language model objectives.</li>
<li>Research experience in areas such as contrastive learning, supervised or unsupervised embedding learning, or metric learning.</li>
<li>A track record of building or scaling large machine learning systems, particularly embedding pipelines in production or research contexts.</li>
<li>A first-principles mindset for challenging assumptions about how retrieval and memory should work for large models.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$445K – $555K • Offers Equity</Salaryrange>
      <Skills>representation learning, embedding models, vector retrieval systems, transformer-based LLMs, contrastive learning, supervised or unsupervised embedding learning, metric learning, ML infrastructure, foundational research, large machine learning systems, embedding pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://openai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/020b2aae-8be0-408c-ab49-20eefa8541af</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
  </jobs>
</source>