<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>460d00aa-b48</externalid>
      <Title>Senior / Staff+ Software Engineer, Voice Platform</Title>
      <Description><![CDATA[<p>About the role</p>
<p>We&#39;re building the infrastructure that lets people talk to Claude: real-time, bidirectional voice conversations that feel natural, responsive, and safe. This is foundational work for how millions of people will interact with AI.</p>
<p>The Voice Platform team designs and operates the serving systems, streaming pipelines, and APIs that bring Anthropic&#39;s audio models from research into production across Claude.ai, our mobile apps, and the Anthropic API. You&#39;ll work at the intersection of real-time media, low-latency inference, and distributed systems, building infrastructure where every millisecond of latency is felt by the user.</p>
<p>We partner closely with the Audio research team, who train the speech understanding and generation models, and with product teams shipping voice experiences to users. Your job is to make those models fast, reliable, and delightful to talk to at scale.</p>
<p>Responsibilities</p>
<ul>
<li>Design and build the real-time streaming infrastructure that powers voice conversations with Claude: ingesting microphone audio, orchestrating model inference, and streaming synthesized speech back with minimal latency</li>
<li>Build low-latency serving systems for speech models, optimizing time-to-first-audio and end-to-end conversational responsiveness</li>
<li>Develop the public and internal APIs that expose voice capabilities to Claude.ai, mobile clients, and third-party developers</li>
<li>Own the audio transport layer (codecs, jitter buffers, adaptive bitrate, packet loss recovery) so conversations stay smooth across unreliable networks</li>
<li>Build observability and quality-measurement systems for voice: latency distributions, audio quality metrics, interruption handling, and turn-taking accuracy</li>
<li>Partner with Audio research to move new model architectures from experiment to production, and feed real-world performance data back into research</li>
<li>Collaborate with mobile and product engineering on client-side audio capture, playback, and the end-to-end user experience</li>
</ul>
<p>You may be a good fit if you</p>
<ul>
<li>Have 6+ years of experience building distributed systems, real-time infrastructure, or platform services at scale</li>
<li>Have shipped production systems where latency is measured in tens of milliseconds and users notice when you miss</li>
<li>Are comfortable working across the stack, from transport protocols and serving infrastructure up to the APIs product teams build on</li>
<li>Are results-oriented, with a bias toward flexibility and impact</li>
<li>Pick up slack, even if it goes outside your job description</li>
<li>Enjoy pair programming (we love to pair!)</li>
<li>Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly</li>
<li>Are comfortable with ambiguity: voice is a fast-moving space, and you&#39;ll help define the architecture as we learn what works</li>
</ul>
<p>Strong candidates may also have experience with</p>
<ul>
<li>Real-time media protocols and stacks: WebRTC, RTP, gRPC bidirectional streaming, or WebSockets at scale</li>
<li>Audio engineering fundamentals: codecs (Opus, AAC), voice activity detection, echo cancellation, jitter buffering, or audio DSP</li>
<li>Low-latency ML inference serving, streaming model outputs, or GPU-based serving infrastructure</li>
<li>Telephony, live streaming, video conferencing, or voice assistant platforms</li>
<li>Mobile audio pipelines on iOS (AVAudioEngine, AudioUnits) or Android (Oboe, AAudio)</li>
<li>Working alongside ML researchers to productionize models (speech experience is a plus but not required)</li>
</ul>
<p>Representative projects</p>
<ul>
<li>Driving time-to-first-audio below human perceptual thresholds by co-designing the serving pipeline with the Audio research team</li>
<li>Building a streaming inference orchestrator that interleaves speech recognition, LLM reasoning, and speech synthesis with overlapping execution</li>
<li>Designing the voice mode API surface for the Anthropic API so developers can build their own voice agents on Claude</li>
<li>Implementing graceful barge-in and interruption handling so users can cut Claude off mid-sentence naturally</li>
<li>Instrumenting end-to-end audio quality metrics and building dashboards that catch regressions before users do</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$485,000 USD</Salaryrange>
      <Skills>Real-time media protocols and stacks, Audio engineering fundamentals, Low-latency ML inference serving, Distributed systems, Streaming pipelines, APIs, WebRTC, RTP, gRPC bidirectional streaming, WebSockets, Opus, AAC, Voice activity detection, Echo cancellation, Jitter buffering, Audio DSP, GPU-based serving infrastructure, Telephony, Live streaming, Video conferencing, Voice assistant platforms, Mobile audio pipelines on iOS, Android, Working alongside ML researchers</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5172245008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>648f4814-708</externalid>
      <Title>Senior Software Engineer, Machine Learning (Commerce)</Title>
<Description><![CDATA[<p>We are looking for a Senior Machine Learning Engineer to join our Revenue ML team at Discord. This role sits at the intersection of Discord&#39;s two most strategic revenue pillars: our growing 1P Shop and our newly launched Game Commerce platform. You&#39;ll be the founding ML voice for commerce discovery and personalization, building systems from the ground up that power recommendations, social commerce mechanics, and marketing targeting across both first-party and third-party storefronts.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Architecting and owning the ML foundations for commerce discovery: user, item, and interaction embeddings that power personalized recommendations across shop surfaces (homepage, cart, post-purchase, wishlist, and more)</li>
<li>Designing and deploying scalable real-time recommendation and ranking systems that support a growing catalog of 1P and 3P items across heterogeneous game publisher inventories</li>
<li>Building ML-powered marketing targeting systems that identify the right users for the right campaigns (new buyer discounts, drop campaigns, weekly deals, and seasonal promotions), driving conversion without conditioning users to wait for discounts</li>
<li>Leveraging Discord&#39;s unique social graph to build social commerce ML: gifting recipient prediction, group buying conversion modeling, and friend-group recommendations that differentiate Discord from traditional game storefronts</li>
<li>Driving deep learning A/B testing infrastructure and model monitoring to translate experimentation results into actionable product decisions</li>
<li>Partnering closely with Shop, Game Commerce, Revenue Infra, ML Infra, and Data Engineering teams to define ML requirements, surface integration points, and influence the commerce roadmap</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>4+ years of experience as a Machine Learning Engineer, with a track record of owning and shipping recommendation or personalization systems end-to-end</li>
<li>Deep expertise in applied deep learning, particularly embedding models, two-tower architectures, and retrieval/ranking systems for e-commerce or content recommendation</li>
<li>Strong proficiency in Python and deep learning frameworks (PyTorch preferred)</li>
<li>Experience building and operating real-time ML serving infrastructure at scale, including feature stores, model serving, and A/B testing frameworks</li>
<li>Demonstrated ability to work in early-stage, high-ambiguity environments and build ML systems from the ground up, not just improve existing ones</li>
<li>Experience translating ML evaluation metrics and experiment results into product roadmap decisions and business impact</li>
<li>Strong cross-functional instincts: you&#39;re comfortable partnering with product, engineering, data science, and business stakeholders to align on priorities and drive execution</li>
</ul>
<p>Bonus skills include:</p>
<ul>
<li>Experience applying graph ML or social network signals (social affinities, community behavior) to recommendation or personalization problems</li>
<li>Familiarity with personalized marketing systems: lifecycle targeting, audience segmentation, and campaign optimization</li>
<li>Familiarity with loyalty, rewards, or incentive programs</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$220,000 to $247,500 + equity + benefits</Salaryrange>
      <Skills>Machine Learning, Deep Learning, Python, PyTorch, Real-time ML serving infrastructure, Feature stores, Model serving, A/B testing frameworks, Graph ML, Social network signals, Personalized marketing systems, Loyalty, rewards, or incentive programs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Discord</Employername>
      <Employerlogo>https://logos.yubhub.co/discord.com.png</Employerlogo>
      <Employerdescription>Discord is a communication platform used by over 200 million people every month for various purposes, including playing video games.</Employerdescription>
      <Employerwebsite>https://discord.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/discord/jobs/8438033002</Applyto>
      <Location>San Francisco Bay Area</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>cef9a3ff-75c</externalid>
      <Title>Technical Program Manager, Platform</Title>
      <Description><![CDATA[<p>As a Technical Program Manager for Platform, you&#39;ll own the programs that stand up and operate Anthropic&#39;s APIs and serving infrastructure across multiple cloud environments.</p>
<p>This means driving deployments from scoping through production, running the platform work that spans them, and working across API, Platform Foundations, Security, our cloud provider counterparts, and whoever else is on the critical path when dependencies and tradeoffs pile up.</p>
<p>Responsibilities:</p>
<ul>
<li>Own end-to-end program execution for Anthropic’s API across major cloud deployments, from scoping through production launch and steady-state operations</li>
<li>Drive the platform programs that cut across individual deployments: the shared foundations that get built once and reused, not rebuilt per cloud</li>
<li>Act as a primary coordination point with cloud provider counterparts, keeping engagement clean across multiple internal teams with touchpoints into the same partner</li>
<li>Partner with engineering leadership to turn technical direction into executable plans with clear owners, dependencies, and risk tracking</li>
<li>Build the program scaffolding (roadmaps, status reporting, decision logs, escalation paths) that lets a fast-moving org stay aligned without slowing down</li>
<li>Drive the hard sequencing conversations when partner commitments, engineering bandwidth, and priorities are in tension, and surface them to leadership with a recommendation</li>
<li>Identify where program coverage is thin relative to the load and help shape how we staff around it</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have 10+ years of technical program management experience, including ownership of large infrastructure or platform programs with many engineering teams and external partners in the mix</li>
<li>Have deep technical fluency in cloud APIs, infrastructure, distributed systems, or platform engineering, enough to be a credible partner to senior engineers on architecture and sequencing, not just a tracker of their decisions</li>
<li>Have run programs spanning organizational boundaries where you had no direct authority over most of the people whose work you depended on, and delivered anyway</li>
<li>Have direct experience with multi-cloud or hybrid cloud environments, large-scale migrations, or building platform abstraction layers</li>
<li>Have worked with major cloud providers (AWS, GCP, Azure) or similar large technology partners, and know how to keep those relationships productive when priorities diverge</li>
<li>Are comfortable operating in ambiguity on the long arc while being ruthlessly concrete on what ships this quarter and who owns it</li>
<li>Have a track record of making a program get cheaper to run the second and third time, not just landing the first instance</li>
<li>Thrive in environments where the plan you wrote last month needs rewriting, without losing the thread on what matters</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with production serving infrastructure, inference systems, or ML platform work</li>
<li>Have moved between senior IC and management roles, or have interest in doing so</li>
<li>Have worked at a company rebuilding systems and org in flight during rapid scale-up</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$365,000-$435,000 USD</Salaryrange>
      <Skills>Cloud APIs, Infrastructure, Distributed Systems, Platform Engineering, Program Management, Cloud Providers, Multi-Cloud Environments, Hybrid Cloud Environments, Large-Scale Migrations, Platform Abstraction Layers, Production Serving Infrastructure, Inference Systems, ML Platform Work, Senior IC and Management Roles, Rapid Scale-Up</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5157003008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>6f3a053e-c43</externalid>
      <Title>Staff Software Engineer, AI Reliability Engineering</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Staff Software Engineer to join our AI Reliability Engineering team. As a key member of our team, you will develop Service Level Objectives for large language model serving systems, design and implement monitoring and observability systems, and lead incident response for critical AI services.</p>
<p>You will work closely with teams across Anthropic to improve reliability across our most critical serving paths. You will be responsible for making the systems that deliver Claude more robust and resilient, whether during an incident or collaborating on projects.</p>
<p>To be successful in this role, you should have strong distributed systems, infrastructure, or reliability backgrounds. You should be curious and brave, comfortable jumping into unfamiliar systems during an incident and helping drive resolution even when you don&#39;t have deep expertise yet.</p>
<p>You will be working on high-availability serving infrastructure across multiple regions and cloud providers. You will support the reliability of safeguard model serving, which is critical for both site reliability and Anthropic&#39;s safety commitments.</p>
<p>If you&#39;re committed to creating reliable, interpretable, and steerable AI systems, and you&#39;re passionate about working on complex technical problems, we&#39;d love to hear from you.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€235.000-€295.000 EUR</Salaryrange>
      <Skills>distributed systems, infrastructure, reliability, Service Level Objectives, monitoring, observability, incident response, high-availability serving infrastructure, cloud providers, SRE, Production Engineer, chaos engineering, systematic resilience testing, AI-specific observability tools and frameworks, ML hardware accelerators, RDMA, InfiniBand</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5101169008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f53caced-334</externalid>
      <Title>Software Engineer, Cloud Inference Safeguards</Title>
      <Description><![CDATA[<p>We are seeking a Software Engineer to build and operate the safety, oversight, and intervention mechanisms that protect Claude on third-party cloud service provider (CSP) platforms.</p>
<p>As the engineer responsible for Safeguards on those surfaces, you will ensure that every request served through our CSP partners is monitored for misuse, enforced against policy, and compliant with the data residency and privacy commitments that enterprise CSP customers expect.</p>
<p>You will sit at the seam between the Safeguards organisation and the Cloud Inference team: taking classifiers, detection signals, and enforcement policies developed by Safeguards and making them run reliably inside a CSP partner&#39;s infrastructure at serving-path latency and scale.</p>
<p>Responsibilities:</p>
<ul>
<li>Build, deploy, and operate real-time safeguards infrastructure (classifiers, rate limits, enforcement actions, and intervention hooks) embedded directly in the third-party CSP inference serving path</li>
<li>Design and maintain the data residency and privacy architecture for safeguards signals on CSP platforms, ensuring we can detect abuse and monitor model behaviour while honouring regionalisation boundaries and enterprise contractual commitments</li>
<li>Develop telemetry, logging, and evaluation pipelines that give Safeguards, Policy, and T&amp;S operational teams situational awareness over CSP traffic and close the visibility gap between third-party and first-party serving</li>
<li>Dive into the CSP serving stack to identify the lowest-impact points to gather signals or introduce interventions without degrading latency, stability, or overall architecture</li>
<li>Hold a high operational bar: own on-call, drive root-cause analyses and postmortems for safeguards incidents on CSP platforms, and build systems that reduce the human intervention required to keep Claude safe</li>
<li>Work closely with Safeguards research, Policy &amp; Enforcement, the Cloud Inference team, and CSP partner contacts to turn detection research and policy decisions into production enforcement that works inside a partner&#39;s cloud</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Have a Bachelor&#39;s degree in Computer Science, Software Engineering, or comparable experience</li>
<li>Have 4–10+ years of experience in high-scale, high-reliability software development, ideally with exposure to trust &amp; safety, anti-abuse, fraud, or integrity systems</li>
<li>Are proficient in Python and comfortable working across the stack, from request-path services to data pipelines to internal tooling</li>
<li>Think adversarially: you can see a system from a bad actor&#39;s perspective, anticipate how they will respond to countermeasures, and design defences in depth rather than single points of enforcement</li>
<li>Have experience scaling infrastructure to accommodate rapid traffic growth while keeping latency and reliability within tight budgets</li>
<li>Are deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development</li>
<li>Have strong communication skills and can explain complex technical and risk tradeoffs to non-technical stakeholders across Policy, Legal, and partner organisations</li>
<li>Enjoy working in a fast-paced, early environment and are comfortable adapting priorities as the AI space rapidly evolves</li>
</ul>
<p>Strong candidates may also have experience with:</p>
<ul>
<li>Building trust and safety, anti-spam, fraud, or abuse detection and mitigation mechanisms for AI/ML systems, or the infrastructure to support these systems at scale</li>
<li>Machine learning serving infrastructure (GPUs/TPUs, inference servers, load balancing) and the operational realities of running models in production</li>
<li>Major cloud platform internals (IAM, network/service perimeter controls, regional resource constraints, cloud-native logging/monitoring), or shipping software that runs inside a partner&#39;s cloud rather than your own</li>
<li>Data residency, privacy engineering, or compliance-constrained architectures, particularly where telemetry has to stay within regional or contractual boundaries</li>
<li>Working closely with operational and human-review teams to build custom internal tooling, admin UX, and alerting</li>
<li>An adversarial mindset: you have shipped defences against motivated attackers, know what it feels like when they adapt, and can sprint to close a gap before it becomes an incident</li>
<li>Operating at the intersection of platform/infra engineering and trust &amp; safety: neither a pure infra engineer nor a pure T&amp;S engineer, but someone who can credibly do both</li>
<li>Shipping software that runs inside someone else&#39;s infrastructure (partner cloud, embedded deployment, or similar), and getting things done when you don&#39;t control the whole stack</li>
<li>Owning a cross-team seam independently: driving consensus across orgs and making latency/safety tradeoff calls without escalation</li>
<li>TypeScript or Rust, and agentic coding tools such as Claude Code</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role&#39;s On Target Earnings (&#39;OTE&#39;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $405,000-$485,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, TypeScript, Rust, Trust and safety, Anti-abuse, Fraud, Integrity systems, Cloud service provider (CSP) platforms, Data residency and privacy engineering, Compliance-constrained architectures, Machine learning serving infrastructure, Cloud platform internals, Agentic coding tools, Claude Code</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a rapidly growing organisation developing reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5168829008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>709b405a-48b</externalid>
      <Title>Staff / Senior Software Engineer, AI Reliability</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Staff / Senior Software Engineer, AI Reliability to join our team. As a key member of our AIRE (AI Reliability Engineering) team, you will partner with teams across Anthropic to improve reliability across our most critical serving paths. You will develop Service Level Objectives for large language model serving systems, design and implement monitoring and observability systems, assist in the design and implementation of high-availability serving infrastructure, lead incident response for critical AI services, and support the reliability of safeguard model serving.</p>
<p>You may be a good fit for this role if you:</p>
<ul>
<li>Have a strong distributed systems, infrastructure, or reliability background</li>
<li>Are curious and brave</li>
<li>Think holistically about how systems compose and where the seams are</li>
<li>Can build lasting relationships across teams</li>
<li>Care about users and feel ownership over outcomes</li>
<li>Have excellent communication and collaboration skills</li>
<li>Bring diverse experience</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Experience operating large-scale model serving or training infrastructure</li>
<li>Experience with one or more ML hardware accelerators</li>
<li>Understanding of ML-specific networking optimizations</li>
<li>Expertise in AI-specific observability tools and frameworks</li>
<li>Experience with chaos engineering and systematic resilience testing</li>
<li>Contributions to open-source infrastructure or ML tooling</li>
</ul>
<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. We value impact and believe that the highest-impact AI research will be big science. We work as a single cohesive team on just a few large-scale research efforts and value communication skills.</p>
<p>If you&#39;re interested in this role, please submit an application even if you don&#39;t believe you meet every single qualification. We encourage diversity and strive to include a range of diverse perspectives on our team.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$325,000-$485,000 USD</Salaryrange>
      <Skills>distributed systems, infrastructure, reliability, Service Level Objectives, monitoring and observability systems, high-availability serving infrastructure, incident response, safeguard model serving, large-scale model serving or training infrastructure, ML hardware accelerators, ML-specific networking optimizations, AI-specific observability tools and frameworks, chaos engineering and systematic resilience testing, open-source infrastructure or ML tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5113224008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3799893d-192</externalid>
      <Title>Principal Engineer, Gemini App Infrastructure</Title>
      <Description><![CDATA[<p>As the Principal Engineer, you will focus on architecting and building the flagship Gemini App infrastructure. You will serve as the technical anchor for the application and orchestration layer, owning code quality, architectural decisions, and the system design of new systems and functionality.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Architecting the Gemini app serving and orchestration layers, writing design docs, and defining interfaces to ensure the codebase is scalable, modular, and capable of supporting rapid innovation.</li>
<li>Designing and implementing robust CI/CD pipelines and experimentation platforms, building tooling that enables the wider engineering team to utilize A/B testing and feature flags to safely and quickly iterate.</li>
<li>Driving application performance initiatives, debugging complex production issues, and advocating for code quality standards to ensure the infrastructure scales to our product needs.</li>
<li>Acting as the strategic technical counterpart to product and design leadership, assessing feasibility of ambitious concepts, and proposing technical solutions that turn AI capabilities into reality.</li>
<li>Mentoring staff and senior engineers, leading code reviews, and fostering a culture of technical accuracy, psychological safety, and user-centricity.</li>
</ul>
<p>In order to set you up for success, we look for the following skills and experience:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science or Engineering, or equivalent practical experience.</li>
<li>15 years of experience in software engineering, building and working with systems in a technology organization.</li>
</ul>
<p>In addition, the following would be an advantage:</p>
<ul>
<li>Experience building large-scale serving infrastructure.</li>
<li>Experience implementing observability, telemetry, and real-time monitoring strategies.</li>
<li>Ability to design and refactor complex server-side architectures that have scaled, ideally to &gt;1 billion users.</li>
<li>Ability to analyze data to identify bottlenecks and drive technical decisions regarding performance optimizations.</li>
<li>Ability to unblock teams by solving the hardest technical problems, balancing technical debt with feature work, and driving predictable delivery through architectural clarity.</li>
<li>Ability to drive technical consensus across multiple teams and stakeholders, translating technical constraints into clear options for leadership.</li>
</ul>
<p>The US base salary range for this full-time position is between $307,000 - $427,000 + bonus + equity + benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$307,000 - $427,000 + bonus + equity + benefits</Salaryrange>
      <Skills>Bachelor&apos;s degree in Computer Science or Engineering, 15 years of experience in software engineering, Experience building large-scale serving infrastructure, Experience implementing observability, telemetry, and real-time monitoring strategies, Ability to design and refactor complex server-side architectures</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Google DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>Google DeepMind is a pioneering AI lab focused on advancing AI development to solve complex global challenges.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7793048</Applyto>
      <Location>Mountain View, California, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>78a9b8f2-81c</externalid>
      <Title>Senior Software Engineer - Data Infrastructure</Title>
      <Description><![CDATA[<p>We believe that the way people interact with their finances will drastically improve in the next few years. We&#39;re dedicated to empowering this transformation by building the tools and experiences that thousands of developers use to create their own products.</p>
<p>Plaid powers the tools millions of people rely on to live a healthier financial life. We work with thousands of companies like Venmo, SoFi, several of the Fortune 500, and many of the largest banks to make it easy for people to connect their financial accounts to the apps and services they want to use.</p>
<p>Making data driven decisions is key to Plaid&#39;s culture. To support that, we need to scale our data systems while maintaining correct and complete data. We provide tooling and guidance to teams across engineering, product, and business and help them explore our data quickly and safely to get the data insights they need, which ultimately helps Plaid serve our customers more effectively.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Contributing to the long-term technical roadmap for data-driven and machine learning iteration at Plaid.</li>
<li>Leading key data infrastructure projects such as improving ML development golden paths, implementing offline streaming solutions for data freshness, building net new ETL pipeline infrastructure, and evolving data warehouse or data lakehouse capabilities.</li>
<li>Working with stakeholders in other teams and functions to define technical roadmaps for key backend systems and abstractions across Plaid.</li>
<li>Debugging, troubleshooting, and reducing operational burden for our Data Platform.</li>
<li>Growing the team via mentorship and leadership, reviewing technical documents and code changes.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>5+ years of software engineering experience</li>
<li>Extensive hands-on software engineering experience, with a strong track record of delivering successful projects within the Data Infrastructure or Platform domain at similar or larger companies.</li>
<li>Deep understanding of either ML Infrastructure systems (Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring) or Data Infrastructure systems (Data Warehouses, Data Lakehouses, Apache Spark, Streaming Infrastructure, Workflow Orchestration).</li>
<li>Strong cross-functional collaboration, communication, and project management skills, with proven ability to coordinate effectively.</li>
<li>Proficiency in coding, testing, and system design, ensuring reliable and scalable solutions.</li>
<li>Demonstrated leadership abilities, including experience mentoring and guiding junior engineers.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190,800-$286,800 per year</Salaryrange>
      <Skills>ML Infrastructure systems, Data Infrastructure systems, Apache Spark, Streaming Infrastructure, Workflow Orchestration, Feature Stores, Training Infrastructure, Serving Infrastructure, Model Monitoring, Data Warehouses, Data Lakehouses</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Plaid</Employername>
      <Employerlogo>https://logos.yubhub.co/plaid.com.png</Employerlogo>
      <Employerdescription>Plaid builds tools and experiences that thousands of developers use to create their own products, connecting financial accounts to apps and services.</Employerdescription>
      <Employerwebsite>https://plaid.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/plaid/05b0ae3f-ec60-48d6-ae27-1bd89d928c47</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>62efca6f-b6f</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. You will ship AI-powered features that process real financial data for real businesses.</p>
<p>LLM &amp; AI Pipeline Engineering - Design, build, and maintain production-grade LLM integration pipelines, including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</p>
<p>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</p>
<p>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</p>
<p>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</p>
<p>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</p>
<p>Retrieval &amp; Vector Search - Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</p>
<p>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data, processing invoices, receipts, policy documents, and transaction records.</p>
<p>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</p>
<p>ML Model Serving &amp; Operations - Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</p>
<p>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</p>
<p>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</p>
<p>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</p>
<p>Backend Integration &amp; Reliability - Integrate AI services cleanly with Jeeves&#39;s backend microservices, designing clear API contracts, circuit breakers, and graceful degradation patterns.</p>
<p>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</p>
<p>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</p>
<p>Collaboration &amp; Growth - Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</p>
<p>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</p>
<p>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLM, AI, Python, LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases, Pinecone, Weaviate, pgvector, semantic search, RAG-based features, document ingestion, chunking pipelines, embedding model selection, chunk strategy, metadata filtering, re-ranking techniques, model serving infrastructure, latency SLOs, input validation, output monitoring, model performance monitoring, data drift detection, clean data pipelines, feature engineering, API contracts, circuit breakers, graceful degradation patterns, structured logging, distributed tracing, latency dashboards, alerting</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform. It operates across 20+ countries and serves over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/ded9e04e-f18e-4d4c-ae43-4b7882c6200b</Applyto>
      <Location>India</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>5579e8fb-227</externalid>
      <Title>Senior AI Engineer</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Senior AI Engineer who is obsessed with building AI systems that actually work in production: reliable, observable, cost-efficient, and genuinely useful. This is not a research role. You will ship AI-powered features that process real financial data for real businesses.</p>
<p>LLM &amp; AI Pipeline Engineering - Design, build, and maintain production-grade LLM integration pipelines, including retrieval-augmented generation (RAG), prompt engineering, output parsing, and chain orchestration.</p>
<p>Develop and operate AI features within Jeeves&#39;s core financial products: spend categorization, document extraction, anomaly detection, financial Q&amp;A, and automated reconciliation.</p>
<p>Implement structured output validation, fallback handling, and confidence scoring to ensure AI decisions meet reliability standards for financial use cases.</p>
<p>Evaluate and integrate AI frameworks and tools (LangChain, LlamaIndex, OpenAI API, Anthropic API, HuggingFace, vector databases) and advocate for the right tool for the job.</p>
<p>Establish prompt versioning and evaluation practices to ensure AI outputs remain accurate and consistent as models and data evolve.</p>
<p>Retrieval &amp; Vector Search - Design and maintain vector search pipelines using databases such as Pinecone, Weaviate, or pgvector to power semantic search and RAG-based features.</p>
<p>Build document ingestion and chunking pipelines for Jeeves&#39;s financial data, processing invoices, receipts, policy documents, and transaction records.</p>
<p>Optimize retrieval quality through embedding model selection, chunk strategy, metadata filtering, and re-ranking techniques.</p>
<p>ML Model Serving &amp; Operations - Collaborate with data scientists to take trained ML models from experimental notebooks to production serving infrastructure.</p>
<p>Build and maintain model serving endpoints with appropriate latency SLOs, input validation, and output monitoring.</p>
<p>Implement model performance monitoring and data drift detection to ensure production models remain accurate over time.</p>
<p>Support model retraining workflows by designing clean data pipelines and feature engineering that can be continuously updated.</p>
<p>Backend Integration &amp; Reliability - Integrate AI services cleanly with Jeeves&#39;s backend microservices, designing clear API contracts, circuit breakers, and graceful degradation patterns.</p>
<p>Write high-quality, testable backend code in Python or Go/Node.js to power AI-integrated features.</p>
<p>Instrument AI components with structured logging, distributed tracing, latency dashboards, and alerting to ensure operational visibility.</p>
<p>Build human-in-the-loop review workflows for AI decisions that require oversight, particularly for high-value financial actions.</p>
<p>Collaboration &amp; Growth - Partner with Product, Backend Engineering, and Data Science to define the AI roadmap and translate requirements into reliable systems.</p>
<p>Contribute to a culture of quality by writing design docs, reviewing peers&#39; AI system designs, and sharing learnings openly.</p>
<p>Help grow the AI engineering practice at Jeeves by establishing patterns, tooling, and best practices that the broader team can build on.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>LLM pipeline engineering, RAG architecture, ML system operation, Python programming, AI orchestration framework, ML model serving infrastructure, Observability tooling, Fintech experience, Prompt evaluation frameworks, ML lifecycle management tools, Real-time data streaming</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Jeeves</Employername>
      <Employerlogo>https://logos.yubhub.co/jeeves.com.png</Employerlogo>
      <Employerdescription>Jeeves is a financial operating system built for global businesses that provides corporate cards, cross-border payments, and spend management software within one unified platform, serving over 5,000 clients.</Employerdescription>
      <Employerwebsite>https://www.jeeves.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/tryjeeves/2f00206f-6091-4eed-8b5f-1325afdbfe30</Applyto>
      <Location>Brazil</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9cac404c-fb9</externalid>
      <Title>Senior Solutions Architect</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Solutions Architect to bridge our research frontier and customer reality. As a key member of our team, you&#39;ll onboard customers to our suite of models, providing hands-on guidance on prompting strategies, inference optimization, evaluation frameworks, and finetuning approaches. You&#39;ll work alongside our Sales and BD teams on complex customer projects, act as a central internal hub connecting go-to-market, engineering, and applied research teams, and create reusable technical enablement resources. You&#39;ll also translate customer technical feedback into actionable product insights and collaborate with engineering and research teams to implement required updates and new features.</p>
<p>You should have a deep understanding of generative AI, hands-on experience serving generative deep learning models in production settings, and a track record of working directly with customers, iterating on solutions, and providing tailored support. Proficiency in Python and an intuitive understanding of API integrations are also essential. Excellent communication skills, honed through collaborating with non-technical stakeholders, are necessary to adapt your message depending on who&#39;s in the room.</p>
<p>Prior experience finetuning diffusion models, working with customization tools like ComfyUI, and contributing to open-source projects in the diffusion model space are highly valued. Deploying models on cloud platforms using state-of-the-art serving infrastructure is also desirable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $300,000 USD</Salaryrange>
      <Skills>Generative AI, Deep learning models, Python, API integrations, Customer support, Communication skills, Finetuning diffusion models, Customization tools like ComfyUI, Open-source projects in the diffusion model space, Cloud platforms using state-of-the-art serving infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Black Forest Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/blackforestlabs.com.png</Employerlogo>
      <Employerdescription>Black Forest Labs is a research lab developing foundational technologies for generative models that power image and video creation.</Employerdescription>
      <Employerwebsite>https://www.blackforestlabs.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/blackforestlabs/jobs/4642947008</Applyto>
      <Location>San Francisco (USA)</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>69369815-a11</externalid>
      <Title>Associate/Vice President, AI Infrastructure Engineer</Title>
      <Description><![CDATA[<p>At BlackRock, technology underpins everything we do. AI is a core strategic priority for the firm, embedded across Aladdin and our investment, client, and operational platforms. We are seeking an AI Infrastructure Engineer to help build and operate the foundational infrastructure that enables AI systems to scale safely, securely, and reliably across the enterprise.</p>
<p>This role sits within Aladdin Platform Engineering and focuses on the infrastructure and platform services required to support machine learning models, large language models (LLMs), and emerging AI capabilities in production. The successful candidate will work closely with AI Engineers, Data Scientists, Platform Engineers, Security, and Product partners to deliver resilient, cloud-native AI platforms in a highly regulated environment.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Design, build, and operate AI-focused infrastructure platforms supporting model development, training, evaluation, and inference.</li>
<li>Engineer scalable, reliable, and secure cloud-native services to support AI workloads across AWS, Azure, and hybrid environments.</li>
<li>Partner with AI Engineering and Data Science teams to improve developer experience, performance, and operational stability of AI systems.</li>
<li>Enable production deployment of ML models and LLMs within governed enterprise environments, aligned with firmwide risk and compliance standards.</li>
<li>Implement and maintain infrastructure as code and automation to ensure repeatable, auditable platform provisioning.</li>
<li>Build and operate observability, monitoring, and alerting solutions for AI platforms, ensuring availability, performance, and cost transparency.</li>
<li>Collaborate with Security and Risk partners to integrate identity, access controls, data protection, and governance into AI infrastructure.</li>
<li>Contribute to architectural decisions and technical standards for AI platforms across Aladdin.</li>
<li>Participate in on-call rotations and operational support as required for critical platforms.</li>
<li>Continuously evaluate emerging AI infrastructure technologies and apply them pragmatically within BlackRock’s enterprise context.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Strong experience in cloud infrastructure, platform engineering, or systems engineering roles.</li>
<li>4+ years of hands-on expertise with AWS and/or Azure and/or GCP, including Azure ML, Azure Foundry, AWS Bedrock, Google Vertex, as well as cloud compute, networking, storage, and security services.</li>
<li>Understanding of ML platform operations and governance concepts, including model deployment strategies, lifecycle management, monitoring/observability, and Disaster Recovery.</li>
<li>Experience supporting LLMs, generative AI platforms, or model serving infrastructure.</li>
<li>Experience supporting AI and machine learning workloads, with exposure to managed compute for model training and fine-tuning, experimentation over large datasets, and end-to-end MLOps pipeline flow including data ingestion, training, validation, and deployment.</li>
<li>Proficiency with Infrastructure as Code tools (e.g., Terraform, ARM/Bicep, CloudFormation).</li>
<li>Strong programming or scripting skills (e.g., Python, Bash, or similar).</li>
<li>Experience building and operating containerized and Kubernetes-based platforms.</li>
<li>Solid understanding of reliability, scalability, observability, and operational best practices.</li>
<li>Ability to work effectively in cross-functional teams and communicate complex technical concepts clearly.</li>
</ul>
<p><strong>Our Benefits</strong></p>
<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</p>
<p><strong>Our Hybrid Work Model</strong></p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AWS, Azure, GCP, Cloud compute, Networking, Storage, Security services, ML platform operations, Governance concepts, Model deployment strategies, Lifecycle management, Monitoring/observability, Disaster Recovery, LLMs, Generative AI platforms, Model serving infrastructure, AI and machine learning workloads, Managed compute, Fine-tuning, Experimentation, End-to-end MLOps pipeline flow, Data ingestion, Training, Validation, Deployment, Infrastructure as Code, Terraform, ARM/Bicep, CloudFormation, Programming, Scripting, Containerized and Kubernetes-based platforms, Reliability, Scalability, Observability, Operational best practices, GPU or accelerator-based infrastructure, Financial services or highly regulated industries, Multicloud architectures and enterprise governance requirements</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>BlackRock is a global investment management company that provides a range of investment products and services to institutional and retail clients.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/2JsY2bUdeEEzUfhn796RPb/associate%2Fvice-president%2C-ai-infrastructure-engineer-in-edinburgh-at-blackrock</Applyto>
      <Location>Edinburgh, Scotland</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>c930b80e-7a6</externalid>
      <Title>Staff / Senior Software Engineer, AI Reliability</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Role</strong></p>
<p>AIRE (AI Reliability Engineering) partners with teams across Anthropic to improve reliability across our most critical serving paths -- every hop from the SDK through our network, API layers, serving infrastructure, and accelerators and back. We jump into the trenches alongside partner teams to make the systems that deliver Claude more robust and resilient, be it during an incident or collaborating on projects.</p>
<p>Reliability here is an emergent phenomenon that transcends any single team&#39;s boundaries, so someone has to zoom out and look at the whole picture. That&#39;s us -- and it means few teams at Anthropic offer this kind of dynamic, cross-cutting exposure to the systems that matter most.</p>
<p>Claude has your back. AIRE has Claude&#39;s. Help us keep Claude reliable for everyone who depends on it.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Develop appropriate Service Level Objectives for large language model serving systems, balancing availability and latency with development velocity.</li>
<li>Design and implement monitoring and observability systems across the token path.</li>
<li>Assist in the design and implementation of high-availability serving infrastructure across multiple regions and cloud providers.</li>
<li>Lead incident response for critical AI services, ensuring rapid recovery, thorough incident reviews, and systematic improvements.</li>
<li>Support the reliability of safeguard model serving -- critical for both site reliability and Anthropic&#39;s safety commitments.</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have strong distributed systems, infrastructure, or reliability backgrounds -- we&#39;re looking for reliability-minded software engineers and SREs.</li>
<li>Are curious and brave -- comfortable jumping into unfamiliar systems during an incident and helping drive resolution even when you don&#39;t have deep expertise yet.</li>
<li>Think holistically about how systems compose and where the seams are.</li>
<li>Can build lasting relationships across teams -- our engagement model depends on being welcomed as teammates, not outsiders with opinions.</li>
<li>Care about users and feel ownership over outcomes, even for systems you don&#39;t own.</li>
<li>Have excellent communication and collaboration skills -- you&#39;ll be partnering across the entire company.</li>
<li>Bring diverse experience -- the team&#39;s strength comes from people who&#39;ve built product stacks, scaled databases, run massive distributed systems, and everything in between.</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Have been an SRE, Production Engineer, or in similar reliability-focused roles on large-scale systems.</li>
<li>Have experience operating large-scale model serving or training infrastructure (&gt;1000 GPUs).</li>
<li>Have experience with one or more ML hardware accelerators (GPUs, TPUs, Trainium).</li>
<li>Understand ML-specific networking optimizations like RDMA and InfiniBand.</li>
<li>Have expertise in AI-specific observability tools and frameworks.</li>
<li>Have experience with chaos engineering and systematic resilience testing.</li>
<li>Have contributed to open-source infrastructure or ML tooling.</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as a team sport, where everyone contributes to the overall success of the team.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$325,000 - $485,000 USD</Salaryrange>
      <Skills>distributed systems, infrastructure, reliability, large language model serving systems, monitoring and observability systems, high-availability serving infrastructure, incident response, safeguard model serving, SRE, Production Engineer, ML hardware accelerators, ML-specific networking optimizations, AI-specific observability tools and frameworks, chaos engineering, systematic resilience testing, open-source infrastructure or ML tooling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://job-boards.greenhouse.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5113224008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>70ec5312-0a5</externalid>
      <Title>Cloud Security Lead</Title>
<Description><![CDATA[<p>Join us at the forefront of AI and cloud-native security as we work to secure one of the most innovative developer platforms in the world. As the Cloud Security Lead, you will shape the cloud and infrastructure security program that protects millions of developers, enables safe AI-assisted development, and ensures organizations can confidently bring our platform into enterprise environments.</p>
<p>In this role, you will own cloud security across GCP (primary) and supplemental environments in AWS and Azure, as well as containerized systems, SaaS platforms, and our multi-tenant AI infrastructure. You’ll improve our security posture through strong architecture, posture management, secure-by-default development practices, and close partnership with Engineering, Compliance, Security Architecture, and Platform teams.</p>
<p>This is a highly impactful, hands-on leadership role—perfect for someone who wants to solve complex security challenges at scale while influencing product, engineering, and go-to-market teams.</p>
<p><strong>Cloud Security Engineering</strong></p>
<ul>
<li>Lead configuration hardening across GCP, with additional oversight of workloads and integrations running in AWS and Azure.</li>
<li>Own and optimize CSPM platforms across multi-cloud environments—establishing configuration baselines, guardrails, and remediation workflows.</li>
<li>Secure critical SaaS platforms, ensuring proper configurations, access controls, and engineering integrations.</li>
<li>Lead infrastructure vulnerability management across multi-cloud systems, containers, registries, and platform services.</li>
<li>Enhance security across containerized and Kubernetes (GKE/EKS/AKS) workloads, including runtime protections, network policies, and workload identity.</li>
<li>Assess secure logging configurations across cloud/SaaS providers, ensuring audit logs, retention, and routing meet monitoring and architecture needs.</li>
</ul>
<p><strong>Secure Development &amp; Architecture Enablement</strong></p>
<ul>
<li>Partner with engineering teams to make services secure by default, embedding security into development workflows, CI/CD pipelines, and cloud-native deployments.</li>
</ul>
<p><strong>Cross-Functional Responsibilities</strong></p>
<ul>
<li>Collaborate with Security Monitoring, Compliance/GRC, Architecture, DevOps, Platform Engineering, and ML Infrastructure.</li>
<li>Participate in communicating security advisories, best practices, and updates to Replit’s customers.</li>
<li>Support incident investigations as a cloud security subject-matter expert.</li>
</ul>
<p><strong>Required Skills &amp; Experience:</strong></p>
<ul>
<li>7+ years of experience in cloud engineering, with 3+ years in a senior or lead role.</li>
<li>Hands-on experience with CSPM tools (Wiz, Lacework, Prisma, Orca, SCC, etc.).</li>
<li>Deep expertise in GCP security (IAM, VPC, KMS, GKE, Cloud Logging).</li>
<li>Experience securing and governing SaaS platforms and identity integrations.</li>
<li>Operational experience with infrastructure vulnerability management across cloud and container environments.</li>
<li>Working knowledge of AWS and/or Azure security services and configurations.</li>
<li>Experience with container and Kubernetes security across GKE, EKS, or AKS.</li>
<li>Strong IaC security experience with Terraform, Pulumi, or similar tooling.</li>
<li>Familiarity with compliance standards (SOC 2, ISO 27001, PCI DSS).</li>
</ul>
<p><strong>Preferred Qualifications:</strong></p>
<ul>
<li>Experience supporting engineering teams in building secure-first, cloud-native or PaaS environments.</li>
<li>Background securing AI/ML pipelines, model-serving infrastructure, or developer platform services.</li>
<li>Experience in high-growth technology or cloud-native product companies.</li>
<li>Experience with securing AI/agentic systems and sensitive data pipelines.</li>
<li>Automation/scripting with Python.</li>
<li>Relevant certifications (e.g., GCP Professional Cloud Security Engineer, AWS/Azure security certs).</li>
</ul>
<p><strong>What We Value:</strong></p>
<ul>
<li>Problem-solving mindset — Ability to break down complex security and operational challenges into clear engineering solutions.</li>
<li>Autonomy — Comfortable leading initiatives, collaborating effectively, and driving outcomes with minimal oversight.</li>
<li>Communication excellence — Able to translate deep technical concepts for engineers, executives, and enterprise customers.</li>
<li>Continuous learning — Passion for staying current with AI security, cloud-native advances, and emerging threats.</li>
<li>Automation-first approach — Belief in reducing operational toil and building scalable, self-healing systems.</li>
</ul>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>Competitive Salary &amp; Equity</li>
<li>401(k) Program with a 4% match</li>
<li>Health, Dental, Vision and Life Insurance</li>
<li>Short Term and Long Term Disability</li>
<li>Paid Parental, Medical, Caregiver Leave</li>
<li>Commuter Benefits</li>
<li>Monthly Wellness Stipend</li>
<li>Autonomous Work Environment</li>
<li>In Office Set-Up Reimbursement</li>
<li>Flexible Time Off (FTO) + Holidays</li>
<li>Quarterly Team Gatherings</li>
<li>In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$220,000 - $325,000 USD</Salaryrange>
      <Skills>CSPM tools, GCP security, SaaS platforms, infrastructure vulnerability management, container and Kubernetes security, IaC security, compliance standards, secure-first, cloud-native or PaaS environments, AI/ML pipelines, model-serving infrastructure, developer platform services, Python, relevant certifications</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/8027a0f4-4837-4e49-a4dd-8ad1bde23277</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
  </jobs>
</source>