{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/prompt-injection"},"x-facet":{"type":"skill","slug":"prompt-injection","display":"Prompt Injection","count":7},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_769c0070-5b2"},"title":"Research Scientist, Agent Robustness","description":"<p>As a Research Scientist working on Agent Robustness, you will work on the fundamental challenges of building AI agents that are safe and aligned with humans.</p>\n<p>For example, you might:</p>\n<ul>\n<li>Research the science of AI agent capabilities with a focus on how they relate to safety, risk factors, and methodologies for benchmarking them;</li>\n<li>Design and build harnesses to test AI agents&#39; tendency to take harmful actions when pressured to do so by users or tricked into doing so by elements of their environment;</li>\n<li>Design and build exploits and mitigations for new and unique failure modes that arise as AI agents gain affordances like coding, web browsing, and computer use;</li>\n<li>Characterize and design mitigations for potential failure modes or broader risks of systems involving multiple interacting AI agents.</li>\n</ul>\n<p>Ideally you&#39;d have:</p>\n<ul>\n<li>Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance;</li>\n<li>Practical experience conducting technical 
research collaboratively;</li>\n<li>Experience with post-training and RL techniques such as RLHF, DPO, GRPO, and similar approaches;</li>\n<li>A track record of published research in machine learning, particularly in generative AI;</li>\n<li>At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development;</li>\n<li>Strong written and verbal communication skills to operate in a cross-functional team.</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Hands-on experience with agent evaluation frameworks such as SWE-bench, WebArena, OSWorld, Inspect, or similar tools;</li>\n<li>Experience with red-teaming, prompt injection, or adversarial testing of AI systems.</li>\n</ul>\n<p>Our research interviews are crafted to assess candidates&#39; skills in practical ML prototyping and debugging, their grasp of research concepts, and their alignment with our organisational culture. We will not ask any LeetCode-style questions. If you&#39;re excited about advancing AI safety and contributing to our mission, we encourage you to apply, even if your experience doesn&#39;t perfectly align with every requirement.</p>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Directors approval. Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. 
You&#39;ll also receive benefits including, but not limited to: Comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. Additionally, this role may be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_769c0070-5b2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4675684005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance","Practical experience conducting technical research collaboratively","Experience with post-training and RL techniques such as RLHF, DPO, GRPO, and similar approaches","A track record of published research in machine learning, particularly in generative AI","At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development"],"x-skills-preferred":["Hands-on experience with agent evaluation frameworks such as SWE-bench, WebArena, OSWorld, Inspect, or similar tools","Experience with red-teaming, prompt injection, or adversarial testing of AI systems"],"datePosted":"2026-04-18T15:57:29.447Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities 
continue to advance, Practical experience conducting technical research collaboratively, Experience with post-training and RL techniques such as RLHF, DPO, GRPO, and similar approaches, A track record of published research in machine learning, particularly in generative AI, At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development, Hands-on experience with agent evaluation frameworks such as SWE-bench, WebArena, OSWorld, Inspect, or similar tools, Experience with red-teaming, prompt injection, or adversarial testing of AI systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ef348b50-2ac"},"title":"Product Security Engineer","description":"<p>Join Airtable as a Product Security Engineer and play a pivotal role in shaping the security of our rapidly evolving platform. You will partner closely with product engineering teams to build paved roads, frameworks, and automated controls that make the secure path the easy path for our engineering teams.</p>\n<p>Your responsibilities will include developing self-service security frameworks and &#39;paved roads&#39; that allow engineering teams to ship secure code by default. You will focus on automated guardrails for common vulnerabilities, while prioritising deep-dive design reviews into complex business logic and data isolation issues. You will also partner with product and engineering teams to review designs early, contribute to threat modelling for new features and complex initiatives, and provide clear, actionable security guidance.</p>\n<p>You will research emerging threats and evolving best practices, specifically regarding AI and LLM safety, and implement controls to secure these workflows. 
You will manage and evolve our approach to external penetration testing and bug bounties, driving remediation for findings and treating vulnerability management as an engineering problem.</p>\n<p>You will contribute to the long-term roadmaps, metrics, and strategic planning for the security team. As a senior member of the team, you will lead complex threat modelling sessions for major product launches and define secure coding standards, and actively mentor other engineers to raise the technical security bar across the organisation.</p>\n<p>We are looking for a highly experienced Product Security Engineer with a strong background in computer science or a related field, and proficiency in writing clean, maintainable code. You should have deep familiarity with JavaScript or TypeScript, Node.js, and modern web application frameworks, and be able to reason about the security implications of systems built on them. You should also have hands-on experience securing LLM integrations and identifying prompt injection or data leakage risks.</p>\n<p>You will excel at communicating complex security risks to non-security stakeholders and enjoy collaborating cross-functionally to find solutions that balance security with engineering velocity. 
You will be comfortable working in a fast-paced environment, navigating ambiguity, continuously learning about emerging threats and technologies, and contributing to long-term security strategy.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ef348b50-2ac","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Airtable","sameAs":"https://airtable.com/","logo":"https://logos.yubhub.co/airtable.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/airtable/jobs/8194662002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["JavaScript","TypeScript","Node.js","Modern web application frameworks","LLM integrations","Prompt injection","Data leakage risks","Threat modelling","Secure coding standards"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:55:21.514Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY; Remote (Seattle, WA only)"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"JavaScript, TypeScript, Node.js, Modern web application frameworks, LLM integrations, Prompt injection, Data leakage risks, Threat modelling, Secure coding standards"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0ae6f8dc-4fd"},"title":"Staff Engineer, AI Security","description":"<p>Join the team as Twilio&#39;s next Staff Engineer, AI Security.</p>\n<p>As a Staff Engineer, AI Security on the AppSec team, you&#39;ll lead autonomous defense for the AI lifecycle. Build multi-agent frameworks and secure gateways while integrating real-time security gates and identity standards. 
By mentoring Security and R&amp;D to define the MLSecOps roadmap, you&#39;ll ensure a &#39;secure-by-default&#39; future for agentic workflows and resilient AI innovation.</p>\n<p>Responsibilities:</p>\n<p>Serve as the primary subject matter expert for all AI and machine learning security initiatives across security and R&amp;D.</p>\n<p>Design and manage AI gateways to provide a centralized control plane for authentication, authorization, and rate limiting across all model and tool interactions.</p>\n<p>Build and maintain an autonomous security agentic framework that utilizes multi-agent orchestration for end-to-end investigation, alert triage, and remediation.</p>\n<p>Develop agentic identity models using OAuth 2.1 to propagate identity across trust boundaries and prevent the confused deputy problem.</p>\n<p>Help govern the AI-augmented software development lifecycle by integrating real-time security gates into the developer environment and CI/CD pipeline.</p>\n<p>Manage Agentic Security Solutions that secure the AI lifecycle and manage AI workloads at runtime.</p>\n<p>Author company-wide AI security standards and implement these security checks across Twilio&#39;s stack.</p>\n<p>Implement human-in-the-loop checkpoints and transactional safety protocols for high-impact or destructive agentic actions.</p>\n<p>Partner with engineering leadership to set the long-term roadmap for identity-centric security and automated posture management.</p>\n<p>Act as a knowledge multiplier by mentoring security engineers and developing secure-by-default paved-road templates for R&amp;D teams.</p>\n<p>Qualifications:</p>\n<p>8+ years of experience in security engineering with at least 3 years focused on AI or machine learning security operations (MLSecOps).</p>\n<p>Expertise in orchestrating multi-agent systems with AWS Strands, LangGraph, and CrewAI, specializing in runtime isolation, PII redaction, and defending against indirect prompt injection in agentic 
environments.</p>\n<p>Hands-on experience with AI-specific frameworks (e.g., MITRE ATLAS, MAESTRO, OWASP Top 10 for LLMs/Agents/MCP) to threat model and defend against a wide spectrum of risks, including direct/indirect prompt injection, training data poisoning, tool poisoning, and data exfiltration within agentic workflows.</p>\n<p>Proficiency in securing end-to-end AI pipelines, from data ingestion and training to model deployment and monitoring.</p>\n<p>Strong communication skills to translate complex AI risks into actionable business logic for stakeholders.</p>\n<p>Desired:</p>\n<p>Hands-on experience in modern application security tooling, including SAST, SCA, and DAST, with experience adapting these tools to catch AI-specific vulnerabilities like indirect prompt injection.</p>\n<p>Expertise in identity standards including OAuth 2.1 and PKCE.</p>\n<p>Experience with AI Red Teaming and conducting adversarial simulations against Large Language Models (LLMs) and agentic systems.</p>\n<p>Proficiency in at least one general programming language (Python, Go, etc.) with experience in container security and workload isolation.</p>\n<p>Proven ability to operate with autonomy and drive high-impact outcomes in ambiguous environments by identifying and executing on critical projects without predefined roadmaps or direct supervision.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0ae6f8dc-4fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Twilio","sameAs":"https://www.twilio.com/","logo":"https://logos.yubhub.co/twilio.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/twilio/jobs/7821462","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["security engineering","AI and machine learning security","multi-agent systems","AWS 
Strands","LangGraph","CrewAI","runtime isolation","PII redaction","indirect prompt injection","AI-specific frameworks","MITRE ATLAS","MAESTRO","OWASP Top 10 for LLMs/Agents/MCP","end-to-end AI pipelines","data ingestion","training","model deployment","monitoring","strong communication skills"],"x-skills-preferred":["modern application security tooling","SAST and SCA and DAST","identity standards","OAuth 2.1","PKCE","AI Red Teaming","adversarial simulations","Large Language Models","container security","workload isolation"],"datePosted":"2026-04-18T15:44:10.579Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - Ireland"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security engineering, AI and machine learning security, multi-agent systems, AWS Strands, LangGraph, CrewAI, runtime isolation, PII redaction, indirect prompt injection, AI-specific frameworks, MITRE ATLAS, MAESTRO, OWASP Top 10 for LLMs/Agents/MCP, end-to-end AI pipelines, data ingestion, training, model deployment, monitoring, strong communication skills, modern application security tooling, SAST and SCA and DAST, identity standards, OAuth 2.1, PKCE, AI Red Teaming, adversarial simulations, Large Language Models, container security, workload isolation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8bf116df-95e"},"title":"Application Security Engineer","description":"<p>Job Title: Application Security Engineer</p>\n<p>About the Role: The Application Security team at Anthropic is at the forefront of building security into every phase of the software development lifecycle. As an Application Security Engineer, you will partner closely with software engineers and researchers to ensure that security is a core consideration from initial design through implementation. 
You will lead threat modeling and secure design reviews to proactively identify and mitigate risks early, and help with continuous risk assessment. You will build tools and systems to support developers shipping code securely, adhering to secure coding best practices.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Help secure AI products and internal tools that are introducing industry-novel security risks and pushing established security boundaries</li>\n<li>Lead “shift left” security efforts to build security into the software development lifecycle</li>\n<li>Conduct secure design reviews and threat modeling. Identify and prioritize risks, attack surfaces, and vulnerabilities</li>\n<li>Develop tooling to scale security code reviews and respond to developer questions, including advising developers on remediating vulnerabilities and following secure coding practices</li>\n<li>Manage Anthropic&#39;s vulnerability management program, including integrating data ingestion pipelines, coding logic to prioritize vulnerability fixes, supporting teams remediating vulnerabilities and developing automated systems at scale</li>\n<li>Oversee Anthropic&#39;s bug bounty program. Set scope, validate submissions, perform root cause analysis, coordinate remediation with engineering teams, and award bounties. Cultivate relationships with the ethical hacker community</li>\n<li>Collaborate closely with product engineers and researchers to instill security best practices. Advocate for secure architecture, design, and development</li>\n<li>Develop and document security policies, standards, and playbooks. 
Conduct security awareness training for engineers</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of hands-on experience in application and infrastructure security, including securing cloud-based and containerized environments</li>\n<li>Strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java)</li>\n<li>Lead with empathy, a collaborative spirit, and a learning mindset to work cross-functionally with engineers of all levels to build security into the software development life cycle</li>\n<li>Leverage creative and strategic thinking to reduce risk through secure design and simplicity, not just controls</li>\n<li>Possess broad security knowledge to connect the dots across domains and identify holistic ways to decrease the overall threat surface</li>\n<li>Are keen to distill complex security concepts into clear actions and drive consensus without direct authority</li>\n<li>Embody a proactive mindset to thread security throughout the product lifecycle through activities like threat modeling, secure code review, and education</li>\n<li>Have a strong grasp of offensive security to anticipate risks from an adversary&#39;s perspective, not just check compliance boxes</li>\n<li>Bring experience with modern application stacks, infrastructure, and security tools to implement pragmatic defenses</li>\n<li>Are practiced at collaborating cross-functionally and effectively balancing security requirements with business objectives</li>\n<li>Advocate for security fundamentals like least privilege, defense-in-depth, and eliminating complexity that could sub-linearly scale security through smart design</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Hands-on technical expertise securing complex cloud environments and microservices architectures leveraging technologies like Kubernetes, Docker, and AWS / GCP</li>\n<li>Exposure to offensive security techniques like vulnerability testing, bug bounty, pen testing, and red team 
exercises</li>\n<li>Familiarity with AI/ML security risks such as prompt injection, data poisoning, model extraction, etc. and mitigations</li>\n<li>Experience building security tools, applications, and automated tools</li>\n<li>Solid foundational knowledge of both software and security engineering principles and are keen to continue learning</li>\n<li>Excellent communication skills, able to distill complex security topics for broad audiences</li>\n<li>Worked and thrived in fast-paced environments, and comfortable navigating ambiguity</li>\n</ul>\n<p>Annual Compensation Range:</p>\n<p>$300,000-$405,000 USD</p>\n<p>Logistics:</p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p>How to Apply:</p>\n<p>If you&#39;re interested in this role, please submit your application through our website. We look forward to reviewing your application!</p>\n<p>Note:</p>\n<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. 
Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8bf116df-95e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4502508008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["application security","infrastructure security","cloud-based security","containerized environments","programming languages","Python","Rust","Go","Java","threat modeling","secure design reviews","vulnerability management","bug bounty program","security policies","standards","playbooks","security awareness training"],"x-skills-preferred":["hands-on technical expertise","complex cloud environments","microservices architectures","Kubernetes","Docker","AWS","GCP","offensive security techniques","vulnerability testing","pen testing","red team exercises","AI/ML security risks","prompt injection","data poisoning","model extraction","security tools","applications","automated tools","software engineering principles","communication skills"],"datePosted":"2026-04-18T15:35:09.635Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"application security, 
infrastructure security, cloud-based security, containerized environments, programming languages, Python, Rust, Go, Java, threat modeling, secure design reviews, vulnerability management, bug bounty program, security policies, standards, playbooks, security awareness training, hands-on technical expertise, complex cloud environments, microservices architectures, Kubernetes, Docker, AWS, GCP, offensive security techniques, vulnerability testing, pen testing, red team exercises, AI/ML security risks, prompt injection, data poisoning, model extraction, security tools, applications, automated tools, software engineering principles, communication skills","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68e291fb-412"},"title":"Senior Security Engineer","description":"<p>Talent Wanted. For hazardous journey. Small wages, bitter cold, long months of complete darkness, constant danger, safe return doubtful. Honour and recognition in case of success.</p>\n<p>Fridtjof Nansen crossed the Arctic, going places no human had ever been. Together with our users, we&#39;re doing the same onchain, and someone needs to make sure we don&#39;t get killed on the way there.</p>\n<p>We&#39;re building the single best platform for onchain investing (agentic trading, staking infrastructure, AI-powered analytics), and we&#39;re scaling fast. Fast enough that security can&#39;t be an afterthought bolted on later. It has to be built in, from the start, by someone who knows what they&#39;re doing.</p>\n<p><strong>Our mission:</strong></p>\n<p>Surface the signal and create winners.</p>\n<p><strong>What you&#39;ll do at Nansen</strong></p>\n<p>You&#39;ll be the person who makes sure we can move fast without breaking things that matter. 
That means embedding security into everything we build (cloud infrastructure, applications, CI/CD pipelines, AI systems, staking operations) across a generalist role that spans the full surface area.</p>\n<ul>\n<li>Run security assessments across systems, architectures, and code: find the vulnerabilities before someone else does</li>\n<li>Advise engineering teams on secure design decisions. You&#39;re a partner, not a blocker</li>\n<li>Deploy and maintain security infrastructure: SIEM, vulnerability scanning, endpoint protection, logging, the things that let us sleep at night</li>\n<li>Secure our CI/CD pipelines and deployment workflows end-to-end</li>\n<li>Own secrets management, key management, and access controls. No shortcuts</li>\n<li>Address LLM security head-on: API key management, prompt injection prevention, and the risks that come with shipping AI-powered products at speed</li>\n<li>Coordinate penetration tests and security audits with external vendors</li>\n<li>Create and maintain secure coding guidelines and code review processes that engineers actually follow</li>\n<li>Represent the Security Team in the incident response process</li>\n<li>Drive compliance readiness (SOC 2, ISO 27001) pragmatically, not bureaucratically</li>\n</ul>\n<p><strong>What we&#39;re looking for</strong></p>\n<ul>\n<li>You&#39;ve built and hardened production security at scale; you know the difference between a policy document and an actually secure system</li>\n<li>Strong cloud security knowledge (AWS, GCP or equivalent). Container security and network security fundamentals</li>\n<li>Hands-on experience implementing security tooling, not just evaluating it</li>\n<li>Secrets and key management expertise; you&#39;ve managed this at a company where it actually mattered</li>\n<li>You understand the security implications of AI/LLM and agent-based systems. 
This is new territory and we need someone thinking about it seriously</li>\n<li>CI/CD pipeline security is second nature</li>\n<li>Pragmatic about compliance; you can get us to SOC 2 without drowning the engineering team in process</li>\n<li>You don&#39;t just use AI as a tool. You think with it. AI-first isn&#39;t a checkbox; it&#39;s how you work</li>\n<li>Strong async communication skills; we&#39;re remote-first, Slack-and-docs-heavy, and EMEA hours are preferred for team overlap</li>\n<li>Bonus: blockchain, smart contract, or staking infrastructure security experience. Kubernetes and Terraform security. Incident response or security operations background</li>\n</ul>\n<p><strong>What we offer our crew</strong></p>\n<ul>\n<li>Competitive salary. Meaningful equity. Real ownership in what you build</li>\n<li>Fully remote with two no-meeting days a week, because deep work doesn&#39;t happen in a Google Meet</li>\n<li>Annual company retreat and team off-sites in one of our offices: Singapore, Bangkok, London, and Oslo, with flights and accommodation covered</li>\n<li>Unlimited AI tokens: Claude, OpenAI, whatever helps you move fast</li>\n<li>Your own OpenClaw for work</li>\n<li>Nansen Pro account, giving you full access to the most detailed onchain data in the market</li>\n<li>A team that started as data engineers and data scientists and has grown to over 80 builders. Your craft is respected here.</li>\n<li>Speed, ownership, curiosity, courage. 
These aren&#39;t values on a wall; they&#39;re how we run.</li>\n<li>A front-row seat to (and a hand in building) the next chapter of finance</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_68e291fb-412","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Nansen","sameAs":"https://nansen.ai/","logo":"https://logos.yubhub.co/nansen.ai.png"},"x-apply-url":"https://job-boards.greenhouse.io/nansen/jobs/5811520004","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["cloud security","container security","network security","security tooling","secrets management","key management","access controls","API key management","prompt injection prevention","LLM security","CI/CD pipeline security","compliance","SOC 2","ISO 27001"],"x-skills-preferred":["blockchain security","smart contract security","staking infrastructure security","Kubernetes security","Terraform security","incident response","security operations"],"datePosted":"2026-04-17T12:47:56.366Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote Europe | Remote Asia"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud security, container security, network security, security tooling, secrets management, key management, access controls, API key management, prompt injection prevention, LLM security, CI/CD pipeline security, compliance, SOC 2, ISO 27001, blockchain security, smart contract security, staking infrastructure security, Kubernetes security, Terraform security, incident response, security operations"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f73f108d-30a"},"title":"Senior Security 
Engineer, Agentic Red Team","description":"<p>Job Title: Senior Security Engineer, Agentic Red Team</p>\n<p>We&#39;re a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence.</p>\n<p><strong>About Us</strong> The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the &#39;Agentic Launch Gap&#39;: the critical window where novel AI capabilities outpace traditional security reviews.</p>\n<p><strong>The Role</strong> As a Senior Security Engineer on the Agentic Red Team, you will be the primary technical executor of our adversarial engagements. You will work &#39;in the room&#39; with product builders, identifying architectural flaws during the design phase long before formal reviews begin.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Execute Agile Red Teaming: Conduct rapid, high-impact security assessments on agentic services, focusing on vulnerabilities unique to GenAI such as prompt injection, tool-use escalation, and autonomous lateral movement.</li>\n<li>Develop Advanced Exploits: Engineer and execute complex attack sequences that exploit non-deterministic model behaviors, agentic logic errors, and data poisoning vectors.</li>\n<li>Build Automated Defenses: Write code to transform manual vulnerability discoveries into automated regression testing frameworks (&#39;Auto Red Teaming&#39;) that prevent regression in future model versions.</li>\n<li>Embed with Product Teams: Partner directly with developers during the design and build phases to provide immediate feedback, effectively shortening the feedback loop between offensive findings and defensive engineering.</li>\n<li>Curate Threat Intelligence: Maintain and expand a library of agent-specific attack patterns and exploit primitives to establish robust release criteria for new models.</li>\n</ul>\n<p><strong>About You</strong> In order to set you up 
for success as a Security Engineer at Google DeepMind, we look for the following skills and experience:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Security, or equivalent practical experience.</li>\n<li>Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.</li>\n<li>Strong coding skills in Python, Go, or C++ with experience building security tools or automation.</li>\n<li>Technical understanding of LLM architectures, agentic workflows (e.g., chain-of-thought reasoning), and common AI vulnerability classes.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).</li>\n<li>Experience working in a consulting capacity with product teams or in a fast-paced &#39;startup-like&#39; environment.</li>\n<li>Familiarity with AI safety benchmarks, evaluation frameworks, and fuzzing techniques.</li>\n<li>Ability to translate complex probabilistic risks into actionable engineering fixes for developers.</li>\n</ul>\n<p><strong>Salary &amp; Benefits</strong> The US base salary range for this full-time position is between $166,000 and $244,000 + bonus + equity + benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f73f108d-30a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Google DeepMind","sameAs":"https://deepmind.com/","logo":"https://logos.yubhub.co/deepmind.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/deepmind/jobs/7596438","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000 - $244,000 + bonus + equity + benefits","x-skills-required":["Python","Go","C++","Red Teaming","Offensive Security","Adversarial Machine Learning","LLM architectures","agentic 
workflows","chain-of-thought reasoning","AI vulnerability classes"],"x-skills-preferred":["prompt injection","adversarial examples","training data extraction","AI safety benchmarks","evaluation frameworks","fuzzing techniques"],"datePosted":"2026-03-16T14:39:43.939Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View, California, US; New York City, New York, US; Zurich, Switzerland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Go, C++, Red Teaming, Offensive Security, Adversarial Machine Learning, LLM architectures, agentic workflows, chain-of-thought reasoning, AI vulnerability classes, prompt injection, adversarial examples, training data extraction, AI safety benchmarks, evaluation frameworks, fuzzing techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":244000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_138b24e2-2bd"},"title":"Senior Software Engineer, Anti-Abuse & Security","description":"<p><strong>About the role</strong> The Anti-Abuse team is the front line defending Replit&#39;s platform from exploitation. We detect and shut down phishing deployments, prevent cryptomining on free-tier infrastructure, stop LLM token farming, and keep bad actors from weaponizing the platform against our users. This is adversarial work: attackers adapt constantly, and we build the detection systems, heuristics, and automated responses that stay ahead of them.</p>\n<p>What makes this role unique is the AI-native nature of Replit&#39;s platform. You&#39;ll work on problems that barely exist elsewhere: building guardrails for AI-generated code, detecting prompt injection attacks at scale, and using LLMs as a defensive tool against abuse. If you want hands-on experience applying AI to security problems, this is one of the few places you can do it in production with real attackers. 
You&#39;ll own problems end-to-end, from identifying emerging abuse patterns to shipping the systems that stop them at scale.</p>\n<p><strong>In this role you will…</strong></p>\n<ul>\n<li>Design and implement LLM guardrails that detect abuse scenarios in AI-generated code and agent interactions</li>\n<li>Build AI-powered detection systems that use LLMs to identify malicious patterns, classify threats, and automate response decisions</li>\n<li>Build and operate abuse detection systems that identify phishing, cryptomining, account takeover, and financial fraud across millions of daily user actions</li>\n<li>Design automated response mechanisms that enforce platform policies without manual intervention</li>\n<li>Own the full abuse response lifecycle: detection, investigation, enforcement, and handling appeals alongside Support and Legal</li>\n<li>Analyze attack patterns using BigQuery and Hex, turning investigation findings into new detection rules</li>\n<li>Maintain and extend internal detection tools (Slurper, Netwatch) that continuously monitor user activity</li>\n<li>Integrate and tune security scanners (SAST, SCA) in CI pipelines with tight performance SLAs</li>\n<li>Track abuse trends, measure detection effectiveness, and adapt defenses as attack patterns evolve</li>\n</ul>\n<p><strong>Required skills and experience:</strong></p>\n<ul>\n<li>4+ years of experience in security engineering, anti-abuse, trust &amp; safety, or fraud detection</li>\n<li>Strong programming skills in Python and/or TypeScript for building detection systems and automation</li>\n<li>Experience with SQL and data analysis at scale (BigQuery, Snowflake, or similar)</li>\n<li>Experience building or fine-tuning ML/LLM-based classifiers for security or abuse detection</li>\n<li>Familiarity with prompt injection, jailbreaking, and other LLM-specific attack vectors</li>\n<li>Ability to investigate complex abuse patterns and translate findings into automated defenses</li>\n<li>Familiarity with 
common attack patterns: phishing infrastructure, account takeover, credential stuffing, resource abuse</li>\n<li>Clear communication skills for working across Security, Support, Legal, and Engineering teams.</li>\n</ul>\n<p><strong>Nice to have:</strong></p>\n<ul>\n<li>Experience at a platform company dealing with user-generated content or compute abuse (hosting providers, cloud platforms, developer tools)</li>\n<li>Background in fraud detection, payment abuse, or financial crime</li>\n<li>Familiarity with device fingerprinting, IP reputation, and email validation services</li>\n<li>Experience with CI/CD security tooling (SAST, SCA, Dependabot, Snyk)</li>\n<li>Knowledge of container security, Linux internals, or cloud infrastructure (GCP preferred)</li>\n<li>Prior work with abuse reporting pipelines, trust &amp; safety tooling, or content moderation systems</li>\n</ul>\n<p><strong>Tools + Tech Stack for this role</strong></p>\n<ul>\n<li><strong>Languages:</strong> Python, TypeScript, Go, SQL</li>\n<li><strong>Data:</strong> BigQuery, Hex</li>\n<li><strong>Detection tools:</strong> Slurper, Netwatch, Stytch (device fingerprint); ClearOut (email reputation)</li>\n<li><strong>CI/CD Security:</strong> Dependabot, Snyk, SAST/SCA scanners</li>\n<li><strong>Infrastructure:</strong> GCP, Kubernetes</li>\n<li><strong>Collaboration:</strong> Linear, Slack, Zendesk (for abuse reports)</li>\n</ul>\n<p><strong>This role may <em>not</em> be a fit if</strong></p>\n<ul>\n<li>You prefer deep security research over building operational detection systems</li>\n<li>You want to focus on vulnerability management, pentesting, or bug bounty triage (that&#39;s our Security team)</li>\n<li>You&#39;re looking for a role with predictable, well-defined problems rather than constantly adapting to adversarial behavior</li>\n<li>You prefer working in isolation rather than partnering closely with Support, Legal, and cross-functional teams</li>\n<li>You&#39;re 
uncomfortable making enforcement decisions that affect real users</li>\n</ul>\n<p><em>This is a full-time role that can be held from our Foster City, CA office. The role has an in-office requirement of Monday, Wednesday, and Friday.</em></p>\n<p><strong>Full-Time Employee Benefits Include:</strong> 💰 Competitive Salary &amp; Equity 💹 401(k) Program with a 4% match ⚕️ Health, Dental, Vision and Life Insurance 🩼 Short Term and Long Term Disability 🚼 Paid Parental, Medical, Caregiver Leave 🚗 Commuter Benefits 📱 Monthly Wellness Stipend 🧑‍💻 Autonomous Work Environment 🖥 In Office Set-Up Reimbursement 🏝 Flexible Time Off (FTO) + Holidays 🚀 Quarterly Team Gatherings ☕ In Office Amenities</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_138b24e2-2bd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://jobs.ashbyhq.com","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/5bdadf61-7955-46e8-8fdf-bd69818358b7","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$190K – $240K","x-skills-required":["security engineering","anti-abuse","trust & safety","fraud detection","Python","TypeScript","SQL","BigQuery","Hex","ML/LLM-based classifiers","prompt injection","jailbreaking","common attack patterns","phishing infrastructure","account takeover","credential stuffing","resource abuse"],"x-skills-preferred":["experience at a platform company","fraud detection","payment abuse","financial crime","device fingerprinting","IP reputation","email validation services","CI/CD security tooling","container security","Linux internals","cloud infrastructure"],"datePosted":"2026-03-07T15:19:04.069Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, 
CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"security engineering, anti-abuse, trust & safety, fraud detection, Python, TypeScript, SQL, BigQuery, Hex, ML/LLM-based classifiers, prompt injection, jailbreaking, common attack patterns, phishing infrastructure, account takeover, credential stuffing, resource abuse, experience at a platform company, fraud detection, payment abuse, financial crime, device fingerprinting, IP reputation, email validation services, CI/CD security tooling, container security, Linux internals, cloud infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190000,"maxValue":240000,"unitText":"YEAR"}}}]}