<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>3829d19f-c93</externalid>
      <Title>Machine Learning Engineer</Title>
      <Description><![CDATA[<p>Join Twilio&#39;s rapidly growing AI &amp; Data Platform team as a Machine Learning Engineer. You will design, build, and operate the cloud-native data and ML infrastructure that powers every customer interaction, enabling Twilio&#39;s product teams and customers to move from raw events to real-time intelligence.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Architect, implement, and maintain scalable data pipelines and feature stores for batch and real-time workloads.</li>
<li>Build reproducible ML training, evaluation, and inference workflows using modern orchestration and MLOps tooling.</li>
<li>Integrate event streams from Twilio products (e.g., Messaging, Voice, Segment) into unified, analytics-ready datasets.</li>
<li>Monitor, test, and improve data quality, model performance, latency, and cost.</li>
<li>Partner with product, data science, and security teams to ship resilient, compliant services.</li>
<li>Automate deployment with CI/CD, infrastructure-as-code, and container orchestration best practices.</li>
<li>Produce clear documentation, dashboards, and runbooks; share knowledge through code reviews and brown-bag sessions.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, ETL/ELT orchestration tools, cloud data warehouses, ML lifecycle tooling, Docker, Kubernetes, major cloud platform, data modeling, distributed computing concepts, streaming frameworks, Twilio Segment, Kafka/Kinesis, infrastructure-as-code, GitHub-based CI/CD pipelines, generative AI workflows, foundation-model fine-tuning, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7059734</Applyto>
      <Location>Remote - US</Location>
      <Country>US</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>547d60f2-2ad</externalid>
      <Title>Staff Machine Learning Engineer</Title>
      <Description><![CDATA[<p>Join Twilio&#39;s rapidly growing Trust Intelligence Platform team as an L4 Machine Learning Engineer. You will design, build, and operate the cloud-native data and ML infrastructure that powers every customer interaction, enabling Twilio&#39;s product teams and customers to move from raw events to real-time intelligence.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Architect, implement, and maintain scalable data pipelines and feature stores for batch and real-time workloads.</li>
<li>Build reproducible ML training, evaluation, and inference workflows using modern orchestration and MLOps tooling.</li>
<li>Integrate event streams from Twilio products (e.g., Messaging, Voice, Segment) into unified, analytics-ready datasets.</li>
<li>Monitor, test, and improve data quality, model performance, latency, and cost.</li>
<li>Partner with product, data science, and security teams to ship resilient, compliant services.</li>
<li>Automate deployment with CI/CD, infrastructure-as-code, and container orchestration best practices.</li>
<li>Produce clear documentation, dashboards, and runbooks; share knowledge through code reviews and brown-bag sessions.</li>
<li>Embrace Twilio&#39;s &#39;We are Builders&#39; values by taking ownership of problems and driving them to completion.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, ETL/ELT orchestration tools, cloud data warehouses, ML lifecycle tooling, Docker, Kubernetes, major cloud platform, data modeling, distributed computing concepts, streaming frameworks, Twilio Segment, Kafka/Kinesis, infrastructure-as-code, GitHub-based CI/CD pipelines, generative AI workflows, foundation-model fine-tuning, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a cloud communication platform that provides software tools for developers to build, scale, and operate real-time communication and collaboration applications.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7061880</Applyto>
      <Location>Remote - US</Location>
      <Country>US</Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9bcc033f-15c</externalid>
      <Title>GenAI Strategic Projects Lead, Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re seeking a GenAI Strategic Projects Lead to own high-impact projects that drive revenue and experimentation. In this role, you&#39;ll work across operations, engineering, and customer engagement to produce world-class training and test-and-evaluation data for large language models for our Public Sector customers.</p>
<p>This role offers a rare opportunity to make a meaningful impact at the intersection of AI and national security. You will help build Generative AI data-labeling pipelines from the ground up, create operational processes to manage and optimize an in-house expert data workforce, and develop novel technology-driven approaches (e.g., scripts, prompt engineering, hybrid data) to improve the quality of our training and evaluation datasets.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop, build, and maintain the infrastructure required to ensure data pipelines are efficient, scalable, and produce high-quality outputs</li>
<li>Take ownership of day-to-day progress on high-priority data production pipelines, ensuring projects move forward efficiently</li>
<li>Partner with subject matter experts in their fields to validate the quality of our data and to translate deep domain knowledge into scalable processes and measurable outcomes</li>
<li>Work closely with customers to understand their requirements and design data taxonomies that optimize model performance</li>
<li>Utilize analytics and data visualization tools to track progress, identify bottlenecks, and make data-driven decisions to optimize pipeline performance</li>
<li>Drive cross-org collaboration to define and advance human data strategy, influencing technical and non-technical stakeholders to ensure data quality, scalability, and long-term platform leverage</li>
<li>Own increasingly large components of our data delivery processes, ultimately serving as the full owner of our most visible, high-impact customer pipelines</li>
</ul>
<p>You have:</p>
<ul>
<li>5+ years of experience in product development, data science, or operations</li>
<li>A history of successful project management and comfort in ambiguity</li>
<li>Ability to analyze complex operational data, build queries, and identify trends to inform decisions and optimize processes</li>
<li>Technical aptitude to understand how to produce data for state-of-the-art post-training techniques such as supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and reinforcement learning with verifiable rewards (RLVR)</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience working in defense tech and/or an AI company</li>
<li>A technical degree in fields like computer science, data science, or engineering</li>
<li>A deep understanding of ML operations for generative AI workflows / products</li>
<li>An active Top Secret security clearance</li>
</ul>
<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training.</p>
<p>Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You&#39;ll also receive benefits including, but not limited to: comprehensive health, dental, and vision coverage; retirement benefits; a learning and development stipend; and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>
<p>The base salary range for this full-time position in the location of Washington DC is: $169,600-$212,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$169,600-$212,000 USD</Salaryrange>
      <Skills>product development, data science, operations, project management, complex operational data analysis, data visualization tools, cross-org collaboration, human data strategy, data quality, scalability, long-term platform leverage, defense tech, AI company, computer science, engineering, ML operations, generative AI workflows, Top Secret security clearance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://scale.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>169600</Compensationmin>
      <Compensationmax>212000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4648363005</Applyto>
      <Location>Washington, DC</Location>
      <Country>US</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>