{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/jailbreaking"},"x-facet":{"type":"skill","slug":"jailbreaking","display":"Jailbreaking","count":4},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f2e93c37-5a0"},"title":"Staff Software Engineer, Anti-Abuse & Security","description":"<p>The Anti-Abuse team is the front line defending Replit&#39;s platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users.</p>\n<p>This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.</p>\n<p>What makes this role unique is the AI-native nature of Replit&#39;s platform. You&#39;ll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse.</p>\n<p>If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers. 
You&#39;ll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions</li>\n<li>Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions</li>\n<li>Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions</li>\n<li>Design automated response mechanisms that enforce platform policies without manual intervention</li>\n<li>Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal</li>\n<li>Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules</li>\n<li>Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity</li>\n<li>Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs</li>\n<li>Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>8+ years of experience in security engineering, anti-abuse, trust &amp; safety, or fraud detection</li>\n<li>Strong programming skills in Python and/or TypeScript for building detection systems and automation</li>\n<li>Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)</li>\n<li>Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection</li>\n<li>Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors</li>\n<li>Ability to investigate complex abuse patterns and translate findings into automated defenses</li>\n<li>Familiarity with common attack patterns: 
phishing infrastructure, account takeover, credential stuffing, resource abuse</li>\n<li>Clear communication skills for working across Security, Support, Legal, and Engineering teams</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)</li>\n<li>Background in fraud detection, payment abuse, or financial crime</li>\n<li>Familiarity with device fingerprinting, IP reputation, and email validation services</li>\n<li>Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)</li>\n<li>Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)</li>\n<li>Prior work with abuse reporting pipelines, trust &amp; safety tooling, or content moderation systems</li>\n</ul>\n<p><strong>Tools + Tech Stack for this role</strong></p>\n<ul>\n<li>Languages: Python, TypeScript, Go, SQL</li>\n<li>Data: BigQuery, Hex</li>\n<li>Detection tools: Slurper, Netwatch, Stytch (device fingerprint); ClearOut (email reputation)</li>\n<li>CI/CD Security: Dependabot, Snyk, SAST/SCA scanners</li>\n<li>Infrastructure: GCP, Kubernetes</li>\n<li>Collaboration: Linear, Slack, Zendesk (for abuse reports)</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f2e93c37-5a0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://replit.com/","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/75e69146-a092-43a1-b1d6-023d433d3ae7","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$190K - $240K","x-skills-required":["security engineering","anti-abuse","trust & safety","fraud detection","Python","TypeScript","SQL","BigQuery","Hex","ML/LLM-based classifiers","prompt 
injection","jailbreaking","common attack patterns","phishing infrastructure","account takeover","credential stuffing","resource abuse"],"x-skills-preferred":["payment abuse","financial crime","device fingerprinting","IP reputation","email validation services","CI/CD security tooling","container security","Linux internals","cloud infrastructure","abuse reporting pipelines","trust & safety tooling","content moderation systems"],"datePosted":"2026-04-24T13:13:20.268Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security engineering, anti-abuse, trust & safety, fraud detection, Python, TypeScript, SQL, BigQuery, Hex, ML/LLM-based classifiers, prompt injection, jailbreaking, common attack patterns, phishing infrastructure, account takeover, credential stuffing, resource abuse, payment abuse, financial crime, device fingerprinting, IP reputation, email validation services, CI/CD security tooling, container security, Linux internals, cloud infrastructure, abuse reporting pipelines, trust & safety tooling, content moderation systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fc4a0972-622"},"title":"Principal Product Manager, AI Model Security","description":"<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</p>\n<p>Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society, advancing science, education, and global well-being.</p>\n<p>We are hiring a Product Manager to own AI model security: the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.</p>\n<p>This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats (prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation), and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.</p>\n<p>Responsibilities:</p>\n<p>Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface, including prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows.</p>\n<p>Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors.</p>\n<p>Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs, both automated and human-driven, to continuously probe model vulnerabilities.</p>\n<p>Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time.</p>\n<p>Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities.</p>\n<p>Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning.</p>\n<p>Define security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness, not just academic performance.</p>\n<p>Drive training data strategy to improve domain-specific model quality for security practitioners.</p>\n<p>Shape security policy and launch readiness: Establish clear security criteria for model launches.</p>\n<p>Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context.</p>\n<p>Stay at the frontier: Track the rapidly evolving LLM security landscape, including new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research.</p>\n<p>Translate what you learn into actionable product priorities.</p>\n<p>Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards.</p>\n<p>Qualifications:</p>\n<p>Bachelor’s Degree AND 5+ years experience in product management, security engineering, or software development OR equivalent experience</p>\n<p>Demonstrated hands-on experience with AI/ML systems: you have personally built, evaluated, or shipped ML-powered products or security tools</p>\n<p>Deep familiarity with LLM security threats: prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, gained 
through professional experience, red-teaming, or security research</p>\n<p>Experience defining product requirements and driving decisions in partnership with researchers or ML engineers</p>\n<p>Track record of building evaluation systems, security benchmarks, or adversarial testing frameworks, not just consuming them</p>\n<p>Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes</p>\n<p>Preferred Qualifications:</p>\n<p>Technical background in computer science, security, or AI/ML; a postgraduate degree is a plus but not required</p>\n<p>Experience in offensive security, penetration testing, or red teaming, ideally applied to AI/ML systems</p>\n<p>Familiarity with security workflows and tooling (SIEM, SOAR, EDR, threat intelligence platforms) and how practitioners use them in production</p>\n<p>Understanding of the model lifecycle (pre-training, fine-tuning, RLHF, deployment, monitoring) and where security interventions are most effective</p>\n<p>Experience working with or within enterprise security organizations (e.g., Microsoft Security, CrowdStrike, Palo Alto Networks, or similar)</p>\n<p>Published research, blog posts, or public contributions in AI security, adversarial ML, or LLM red teaming</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fc4a0972-622","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-product-manager-ai-model-security/","x-work-arrangement":null,"x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI/ML systems","LLM security threats","prompt injection","jailbreaking","data exfiltration","adversarial attacks on generative models","product 
requirements","security engineering","software development","evaluation systems","security benchmarks","adversarial testing frameworks","autonomous decision-making","project management","offensive security","penetration testing","red teaming","security workflows","tooling","SIEM","SOAR","EDR","threat intelligence platforms","model lifecycle","pre-training","fine-tuning","RLHF","deployment","monitoring"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:26.485Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI/ML systems, LLM security threats, prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, product requirements, security engineering, software development, evaluation systems, security benchmarks, adversarial testing frameworks, autonomous decision-making, project management, offensive security, penetration testing, red teaming, security workflows, tooling, SIEM, SOAR, EDR, threat intelligence platforms, model lifecycle, pre-training, fine-tuning, RLHF, deployment, monitoring"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25c868eb-c32"},"title":"Principal Product Manager, AI Model Security","description":"<p>Microsoft Superintelligence team’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>This role is part of Microsoft AI’s Superintelligence Team. 
The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values.</p>\n<p>Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society, advancing science, education, and global well-being.</p>\n<p>We are hiring a Product Manager to own AI model security: the discipline of making our frontier models resilient against adversarial attack and purpose-built for security practitioners.</p>\n<p>This role has a dual mandate: (1) harden our models against the full spectrum of LLM security threats (prompt injection, data exfiltration, jailbreaking, training data extraction, zero-day exploit generation, model poisoning, and agentic workflow exploitation), and (2) partner closely with Microsoft Security product teams (Azure Security, Security Copilot) to ensure our models deliver best-in-class capabilities for real-world security workflows.</p>\n<p>Responsibilities:</p>\n<p>Own the model security roadmap: Define and prioritize the security hardening strategy for our frontier models across the full OWASP LLM threat surface, including prompt injection (direct and indirect), data exfiltration, jailbreak resistance, system prompt leakage, training data extraction, and adversarial manipulation of agentic workflows.</p>\n<p>Drive zero-day and exploit defense: Work with researchers to evaluate and mitigate the risk of models being used to generate zero-day exploits, malware, or novel attack vectors.</p>\n<p>Build and scale red-teaming frameworks: Design, run, and iterate adversarial testing programs, both automated and human-driven, to continuously probe model vulnerabilities.</p>\n<p>Establish metrics (e.g., jailbreak success rate, injection bypass rate, exfiltration resistance) and drive measurable improvement over time.</p>\n<p>Partner with Microsoft Security product teams: Work closely with Azure Security and Security Copilot teams to translate their product requirements into model training priorities.</p>\n<p>Ensure our models are purpose-built for threat detection, incident triage, vulnerability assessment, log analysis, and compliance reasoning.</p>\n<p>Define security-specific model evaluations: Build benchmark suites and evaluation frameworks that measure real-world security usefulness, not just academic performance.</p>\n<p>Drive training data strategy to improve domain-specific model quality for security practitioners.</p>\n<p>Shape security policy and launch readiness: Establish clear security criteria for model launches.</p>\n<p>Own the security dimension of go/no-go decisions, with frameworks that balance capability, risk, and deployment context.</p>\n<p>Stay at the frontier: Track the rapidly evolving LLM security landscape, including new attack techniques, emerging standards (OWASP, NIST AI RMF), regulatory requirements (EU AI Act), and academic research.</p>\n<p>Translate what you learn into actionable product priorities.</p>\n<p>Influence model training and architecture: Partner with researchers and engineers to embed security considerations into model training, fine-tuning, RLHF, and post-training safeguards.</p>\n<p>Qualifications:</p>\n<p>Bachelor’s Degree AND 5+ years experience in product management, security engineering, or software development OR equivalent experience</p>\n<p>Demonstrated hands-on experience with AI/ML systems: you have personally built, evaluated, or shipped ML-powered products or security tools</p>\n<p>Deep familiarity with LLM security threats: prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, gained through professional experience, red-teaming, or security research</p>\n<p>Experience defining product requirements and driving decisions in partnership with researchers or ML engineers</p>\n<p>Track record 
of building evaluation systems, security benchmarks, or adversarial testing frameworks, not just consuming them</p>\n<p>Ability to operate autonomously, make decisions with incomplete information, and drive projects from ambiguity to shipped outcomes</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_25c868eb-c32","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-product-manager-ai-model-security-2/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AI/ML systems","LLM security threats","prompt injection","jailbreaking","data exfiltration","adversarial attacks on generative models","product management","security engineering","software development","model training","fine-tuning","RLHF","post-training safeguards"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:01.831Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI/ML systems, LLM security threats, prompt injection, jailbreaking, data exfiltration, adversarial attacks on generative models, product management, security engineering, software development, model training, fine-tuning, RLHF, post-training safeguards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_138b24e2-2bd"},"title":"Senior Software Engineer, Anti-Abuse & Security","description":"<p><strong>About the role</strong> The Anti-Abuse team is the front line defending Replit&#39;s platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users. This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.</p>\n<p>What makes this role unique is the AI-native nature of Replit&#39;s platform. You&#39;ll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse. If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers. 
You&#39;ll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.</p>\n<p><strong>In this role you will…</strong></p>\n<ul>\n<li>Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions</li>\n<li>Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions</li>\n<li>Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions</li>\n<li>Design automated response mechanisms that enforce platform policies without manual intervention</li>\n<li>Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal</li>\n<li>Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules</li>\n<li>Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity</li>\n<li>Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs</li>\n<li>Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve</li>\n</ul>\n<p><strong>Required skills and experience:</strong></p>\n<ul>\n<li>4+ years of experience in security engineering, anti-abuse, trust &amp; safety, or fraud detection</li>\n<li>Strong programming skills in Python and/or TypeScript for building detection systems and automation</li>\n<li>Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)</li>\n<li>Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection</li>\n<li>Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors</li>\n<li>Ability to investigate complex abuse patterns and translate findings into automated defenses</li>\n<li>Familiarity with 
common attack patterns: phishing infrastructure, account takeover, credential stuffing, resource abuse</li>\n<li>Clear communication skills for working across Security, Support, Legal, and Engineering teams</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)</li>\n<li>Background in fraud detection, payment abuse, or financial crime</li>\n<li>Familiarity with device fingerprinting, IP reputation, and email validation services</li>\n<li>Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)</li>\n<li>Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)</li>\n<li>Prior work with abuse reporting pipelines, trust &amp; safety tooling, or content moderation systems</li>\n</ul>\n<p><strong>Tools + Tech Stack for this role</strong></p>\n<ul>\n<li><strong>Languages:</strong> Python, TypeScript, Go, SQL</li>\n<li><strong>Data:</strong> BigQuery, Hex</li>\n<li><strong>Detection tools:</strong> Slurper, Netwatch, Stytch (device fingerprint); ClearOut (email reputation)</li>\n<li><strong>CI/CD Security:</strong> Dependabot, Snyk, SAST/SCA scanners</li>\n<li><strong>Infrastructure:</strong> GCP, Kubernetes</li>\n<li><strong>Collaboration:</strong> Linear, Slack, Zendesk (for abuse reports)</li>\n</ul>\n<p><strong>This role may <em>not</em> be a fit if</strong></p>\n<ul>\n<li>You prefer deep security research over building operational detection systems</li>\n<li>You want to focus on vulnerability management, pentesting, or bug bounty triage (that&#39;s our Security team)</li>\n<li>You&#39;re looking for a role with predictable, well-defined problems rather than constantly adapting to adversarial behavior</li>\n<li>You prefer working in isolation rather than partnering closely with Support, Legal, and cross-functional teams</li>\n<li>You&#39;re uncomfortable making enforcement decisions that affect real users</li>\n</ul>\n<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>\n<p><strong>Full-Time Employee Benefits Include:</strong></p>\n<ul>\n<li>💰 Competitive Salary &amp; Equity</li>\n<li>💹 401(k) Program with a 4% match</li>\n<li>⚕️ Health, Dental, Vision and Life Insurance</li>\n<li>🩼 Short Term and Long Term Disability</li>\n<li>🚼 Paid Parental, Medical, Caregiver Leave</li>\n<li>🚗 Commuter Benefits</li>\n<li>📱 Monthly Wellness Stipend</li>\n<li>🧑‍💻 Autonomous Work Environment</li>\n<li>🖥 In Office Set-Up Reimbursement</li>\n<li>🏝 Flexible Time Off (FTO) + Holidays</li>\n<li>🚀 Quarterly Team Gatherings</li>\n<li>☕ In Office Amenities</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_138b24e2-2bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://replit.com/","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/5bdadf61-7955-46e8-8fdf-bd69818358b7","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190K – $240K","x-skills-required":["security engineering","anti-abuse","trust & safety","fraud detection","Python","TypeScript","SQL","BigQuery","Hex","ML/LLM-based classifiers","prompt injection","jailbreaking","common attack patterns","phishing infrastructure","account takeover","credential stuffing","resource abuse"],"x-skills-preferred":["experience at a platform company","fraud detection","payment abuse","financial crime","device fingerprinting","IP reputation","email validation services","CI/CD security tooling","container security","Linux internals","cloud infrastructure"],"datePosted":"2026-03-07T15:19:04.069Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security engineering, anti-abuse, trust & safety, fraud detection, Python, TypeScript, SQL, BigQuery, Hex, ML/LLM-based classifiers, prompt injection, jailbreaking, common attack patterns, phishing infrastructure, account takeover, credential stuffing, resource abuse, experience at a platform company, fraud detection, payment abuse, financial crime, device fingerprinting, IP reputation, email validation services, CI/CD security tooling, container security, Linux internals, cloud infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":240000,"unitText":"YEAR"}}}]}