<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>f2e93c37-5a0</externalid>
      <Title>Staff Software Engineer, Anti-Abuse &amp; Security</Title>
      <Description><![CDATA[<p>The Anti-Abuse team is the front line defending Replit&#39;s platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users.</p>
<p>This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.</p>
<p>What makes this role unique is the AI-native nature of Replit&#39;s platform. You&#39;ll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse.</p>
<p>If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers. You&#39;ll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions</li>
<li>Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions</li>
<li>Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions</li>
<li>Design automated response mechanisms that enforce platform policies without manual intervention</li>
<li>Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal</li>
<li>Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules</li>
<li>Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity</li>
<li>Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs</li>
<li>Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>8+ years of experience in security engineering, anti-abuse, trust &amp; safety, or fraud detection</li>
<li>Strong programming skills in Python and/or TypeScript for building detection systems and automation</li>
<li>Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)</li>
<li>Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection</li>
<li>Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors</li>
<li>Ability to investigate complex abuse patterns and translate findings into automated defenses</li>
<li>Familiarity with common attack patterns: phishing infrastructure, account takeover, credential stuffing, resource abuse</li>
<li>Clear communication skills for working across Security, Support, Legal, and Engineering teams</li>
</ul>
<p><strong>Nice to Have</strong></p>
<ul>
<li>Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)</li>
<li>Background in fraud detection, payment abuse, or financial crime</li>
<li>Familiarity with device fingerprinting, IP reputation, and email validation services</li>
<li>Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)</li>
<li>Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)</li>
<li>Prior work with abuse reporting pipelines, trust &amp; safety tooling, or content moderation systems</li>
</ul>
<p><strong>Tools + Tech Stack for this role</strong></p>
<ul>
<li>Languages: Python, TypeScript, Go, SQL</li>
<li>Data: BigQuery, Hex</li>
<li>Detection tools: Slurper, Netwatch, Stytch (device fingerprint); ClearOut (email reputation)</li>
<li>CI/CD Security: Dependabot, Snyk, SAST/SCA scanners</li>
<li>Infrastructure: GCP, Kubernetes</li>
<li>Collaboration: Linear, Slack, Zendesk (for abuse reports)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190K - $240K</Salaryrange>
      <Skills>security engineering, anti-abuse, trust &amp; safety, fraud detection, Python, TypeScript, SQL, BigQuery, Hex, ML/LLM-based classifiers, prompt injection, jailbreaking, common attack patterns, phishing infrastructure, account takeover, credential stuffing, resource abuse, payment abuse, financial crime, device fingerprinting, IP reputation, email validation services, CI/CD security tooling, container security, Linux internals, cloud infrastructure, abuse reporting pipelines, trust &amp; safety tooling, content moderation systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is an agentic software creation platform that enables anyone to build applications using natural language. It has millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://replit.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/75e69146-a092-43a1-b1d6-023d433d3ae7</Applyto>
      <Location>Foster City, CA (Hybrid; in office Mon, Wed, Fri)</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>fc4a0972-622</externalid>
      <Title>Principal Product Manager, AI Model Security</Title>
      <Description><![CDATA[<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>This role is part of Microsoft AI’s Superintelligence Team (MAIST), a startup-like team inside Microsoft AI created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</p>
<p>Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society, advancing science, education, and global well-being.</p>
<p>We are hiring a Product Manager to own AI model security: the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.</p>
<p>This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats (prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation), and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface, including prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows</li>
<li>Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors</li>
<li>Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs, both automated and human-driven, to continuously probe model vulnerabilities. Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time</li>
<li>Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities. Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning</li>
<li>Define security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness, not just academic performance. Drive training data strategy to improve domain-specific model quality for security practitioners</li>
<li>Shape security policy and launch readiness: Establish clear security criteria for model launches. Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context</li>
<li>Stay at the frontier: Track the rapidly evolving LLM security landscape, including new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research. Translate what you learn into actionable product priorities</li>
<li>Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor’s degree AND 5+ years of experience in product management, security engineering, or software development, OR equivalent experience</li>
<li>Demonstrated hands-on experience with AI/ML systems: you have personally built, evaluated, or shipped ML-powered products or security tools</li>
<li>Deep familiarity with LLM security threats (prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models) through professional experience, red-teaming, or security research</li>
<li>Experience defining product requirements and driving decisions in partnership with researchers or ML engineers</li>
<li>Track record of building evaluation systems, security benchmarks, or adversarial testing frameworks, not just consuming them</li>
<li>Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Technical background in computer science, security, or AI/ML; a postgraduate degree is a plus but not required</li>
<li>Experience in offensive security, penetration testing, or red teaming, ideally applied to AI/ML systems</li>
<li>Familiarity with security workflows and tooling (SIEM, SOAR, EDR, threat intelligence platforms) and how practitioners use them in production</li>
<li>Understanding of the model lifecycle (pre-training, fine-tuning, RLHF, deployment, monitoring) and where security interventions are most effective</li>
<li>Experience working with or within enterprise security organizations (e.g., Microsoft Security, CrowdStrike, Palo Alto Networks, or similar)</li>
<li>Published research, blog posts, or public contributions in AI security, adversarial ML, or LLM red teaming</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel></Experiencelevel>
      <Workarrangement></Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI/ML systems, LLM security threats, prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, product requirements, security engineering, software development, evaluation systems, security benchmarks, adversarial testing frameworks, autonomous decision-making, project management, offensive security, penetration testing, red teaming, security workflows, tooling, SIEM, SOAR, EDR, threat intelligence platforms, model lifecycle, pre-training, fine-tuning, RLHF, deployment, monitoring</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a technology company that develops and markets artificial intelligence and machine learning products and services.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-product-manager-ai-model-security/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>25c868eb-c32</externalid>
      <Title>Principal Product Manager, AI Model Security</Title>
      <Description><![CDATA[<p>The Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.</p>
<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>This role is part of Microsoft AI’s Superintelligence Team (MAIST), a startup-like team inside Microsoft AI created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</p>
<p>Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society, advancing science, education, and global well-being.</p>
<p>We are hiring a Product Manager to own AI model security: the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.</p>
<p>This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats (prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation), and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface, including prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows</li>
<li>Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors</li>
<li>Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs, both automated and human-driven, to continuously probe model vulnerabilities. Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time</li>
<li>Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities. Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning</li>
<li>Define security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness, not just academic performance. Drive training data strategy to improve domain-specific model quality for security practitioners</li>
<li>Shape security policy and launch readiness: Establish clear security criteria for model launches. Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context</li>
<li>Stay at the frontier: Track the rapidly evolving LLM security landscape, including new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research. Translate what you learn into actionable product priorities</li>
<li>Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Bachelor’s degree AND 5+ years of experience in product management, security engineering, or software development, OR equivalent experience</li>
<li>Demonstrated hands-on experience with AI/ML systems: you have personally built, evaluated, or shipped ML-powered products or security tools</li>
<li>Deep familiarity with LLM security threats (prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models) through professional experience, red-teaming, or security research</li>
<li>Experience defining product requirements and driving decisions in partnership with researchers or ML engineers</li>
<li>Track record of building evaluation systems, security benchmarks, or adversarial testing frameworks, not just consuming them</li>
<li>Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>AI/ML systems, LLM security threats, prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, product management, security engineering, software development, model training, fine-tuning, RLHF, post-training safeguards</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-product-manager-ai-model-security-2/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>138b24e2-2bd</externalid>
      <Title>Senior Software Engineer, Anti-Abuse &amp; Security</Title>
      <Description><![CDATA[
<p><strong>About the role</strong></p>
<p>The Anti-Abuse team is the front line defending Replit&#39;s platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users. This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.</p>
<p>What makes this role unique is the AI-native nature of Replit&#39;s platform. You&#39;ll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse. If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers. You&#39;ll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.</p>
<p><strong>In this role you will…</strong></p>
<ul>
<li>Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions</li>
<li>Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions</li>
<li>Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions</li>
<li>Design automated response mechanisms that enforce platform policies without manual intervention</li>
<li>Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal</li>
<li>Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules</li>
<li>Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity</li>
<li>Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs</li>
<li>Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve</li>
</ul>
<p><strong>Required skills and experience:</strong></p>
<ul>
<li>4+ years of experience in security engineering, anti-abuse, trust &amp; safety, or fraud detection</li>
<li>Strong programming skills in Python and/or TypeScript for building detection systems and automation</li>
<li>Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)</li>
<li>Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection</li>
<li>Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors</li>
<li>Ability to investigate complex abuse patterns and translate findings into automated defenses</li>
<li>Familiarity with common attack patterns: phishing infrastructure, account takeover, credential stuffing, resource abuse</li>
<li>Clear communication skills for working across Security, Support, Legal, and Engineering teams</li>
</ul>
<p><strong>Nice to have:</strong></p>
<ul>
<li>Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)</li>
<li>Background in fraud detection, payment abuse, or financial crime</li>
<li>Familiarity with device fingerprinting, IP reputation, and email validation services</li>
<li>Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)</li>
<li>Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)</li>
<li>Prior work with abuse reporting pipelines, trust &amp; safety tooling, or content moderation systems</li>
</ul>
<p><strong>Tools + Tech Stack for this role</strong></p>
<ul>
<li><strong>Languages:</strong> Python, TypeScript, Go, SQL</li>
<li><strong>Data:</strong> BigQuery, Hex</li>
<li><strong>Detection tools:</strong> Slurper, Netwatch, Stytch (device fingerprint); ClearOut (email reputation)</li>
<li><strong>CI/CD Security:</strong> Dependabot, Snyk, SAST/SCA scanners</li>
<li><strong>Infrastructure:</strong> GCP, Kubernetes</li>
<li><strong>Collaboration:</strong> Linear, Slack, Zendesk (for abuse reports)</li>
</ul>
<p><strong>This role may <em>not</em> be a fit if</strong></p>
<ul>
<li>You prefer deep security research over building operational detection systems</li>
<li>You want to focus on vulnerability management, pentesting, or bug bounty triage (that&#39;s our Security team)</li>
<li>You&#39;re looking for a role with predictable, well-defined problems rather than constantly adapting to adversarial behavior</li>
<li>You prefer working in isolation rather than partnering closely with Support, Legal, and cross-functional teams</li>
<li>You&#39;re uncomfortable making enforcement decisions that affect real users</li>
</ul>
<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>
<p><strong>Full-Time Employee Benefits Include:</strong></p>
<ul>
<li>💰 Competitive Salary &amp; Equity</li>
<li>💹 401(k) Program with a 4% match</li>
<li>⚕️ Health, Dental, Vision and Life Insurance</li>
<li>🩼 Short Term and Long Term Disability</li>
<li>🚼 Paid Parental, Medical, Caregiver Leave</li>
<li>🚗 Commuter Benefits</li>
<li>📱 Monthly Wellness Stipend</li>
<li>🧑‍💻 Autonomous Work Environment</li>
<li>🖥 In Office Set-Up Reimbursement</li>
<li>🏝 Flexible Time Off (FTO) + Holidays</li>
<li>🚀 Quarterly Team Gatherings</li>
<li>☕ In Office Amenities</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$190K – $240K</Salaryrange>
      <Skills>security engineering, anti-abuse, trust &amp; safety, fraud detection, Python, TypeScript, SQL, BigQuery, Hex, ML/LLM-based classifiers, prompt injection, jailbreaking, common attack patterns, phishing infrastructure, account takeover, credential stuffing, resource abuse, experience at a platform company, fraud detection, payment abuse, financial crime, device fingerprinting, IP reputation, email validation services, CI/CD security tooling, container security, Linux internals, cloud infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Replit</Employername>
      <Employerlogo>https://logos.yubhub.co/replit.com.png</Employerlogo>
      <Employerdescription>Replit is a software creation platform that enables anyone to build applications using natural language. With millions of users worldwide, Replit is democratizing software development by removing traditional barriers to application creation.</Employerdescription>
      <Employerwebsite>https://replit.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/replit/5bdadf61-7955-46e8-8fdf-bd69818358b7</Applyto>
      <Location>Foster City, CA</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
  </jobs>
</source>