<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>82adee54-ef0</externalid>
      <Title>Strategic Account Executive, Retail &amp; Commercial Banking</Title>
      <Description><![CDATA[<p>JOB DESCRIPTION:</p>
<p>As an Account Executive focused on Retail &amp; Commercial Banking at Anthropic, you&#39;ll be part of the foundational team bringing frontier AI to the institutions that serve millions of consumers and businesses every day.</p>
<p>You&#39;ll drive adoption of Claude across regional and national banks, credit unions, and commercial lenders, helping them transform workflows in customer service, lending operations, risk management, and branch productivity.</p>
<p>You&#39;ll leverage consultative sales expertise and sector knowledge to secure strategic enterprise deals while becoming a trusted partner to stakeholders navigating AI deployment in highly regulated, customer-facing environments.</p>
<p>Responsibilities</p>
<ul>
<li>Own the full sales cycle from prospecting through close, winning new business and driving revenue within retail and commercial banking accounts. Navigate organisational structures to reach decision-makers across lines of business, operations, technology, and innovation teams.</li>
</ul>
<ul>
<li>Design and execute sales strategies tailored to the unique procurement dynamics, budget cycles, and regulatory considerations of depository institutions. Translate market intelligence into targeted account plans and campaigns.</li>
</ul>
<ul>
<li>Identify and develop new use cases across banking workflows - customer support and contact centres, loan origination and underwriting, fraud detection, compliance documentation, and relationship manager enablement - collaborating cross-functionally to differentiate our offerings.</li>
</ul>
<ul>
<li>Build consensus across complex stakeholder ecosystems including business line leaders, Chief Digital Officers, risk and compliance teams, and procurement.</li>
</ul>
<ul>
<li>Serve as the voice of the customer internally, gathering feedback from users and conveying market needs to inform product roadmaps, security requirements, and go-to-market positioning.</li>
</ul>
<ul>
<li>Contribute to the evolution of our financial services sales methodology by documenting learnings, refining playbooks, and identifying process improvements that drive productivity and consistency.</li>
</ul>
<p>You may be a good fit if you have</p>
<ul>
<li>5+ years of enterprise B2B sales experience, with significant time selling into retail banks, commercial banks, or credit unions</li>
</ul>
<ul>
<li>A track record of closing complex, multi-stakeholder deals within depository institutions by navigating both technical requirements and business use cases</li>
</ul>
<ul>
<li>Deep familiarity with how banks buy technology, including vendor risk management, regulatory compliance reviews, and enterprise procurement processes</li>
</ul>
<ul>
<li>Experience negotiating enterprise agreements within banking procurement frameworks, including navigating legal, compliance, and infosec requirements</li>
</ul>
<ul>
<li>Proven history of exceeding revenue targets by effectively managing pipeline and executing a disciplined sales process</li>
</ul>
<ul>
<li>Strong communication skills and the ability to present confidently to audiences ranging from branch operations leaders to C-suite executives</li>
</ul>
<ul>
<li>Understanding of retail and commercial banking operations, customer experience priorities, and competitive dynamics in the sector</li>
</ul>
<ul>
<li>A strategic, analytical mindset combined with creative tactical execution</li>
</ul>
<ul>
<li>Genuine enthusiasm for AI and its potential to transform banking, paired with appreciation for the importance of safe, responsible, and compliant deployment</li>
</ul>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (“OTE”) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $290,000-$435,000 USD</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000-$435,000 USD</Salaryrange>
      <Skills>Enterprise B2B sales experience, Retail banks, Commercial banks, Credit unions, Vendor risk management, Regulatory compliance reviews, Enterprise procurement processes, Negotiating enterprise agreements, Legal, Compliance, Infosec requirements, Pipeline management, Disciplined sales process, Communication skills, Presentation skills, Retail and commercial banking operations, Customer experience priorities, Competitive dynamics in the sector, Strategic mindset, Analytical mindset, Creative tactical execution, AI enthusiasm, Safe and responsible deployment</Skills>
      <Category>Sales</Category>
      <Industry>Finance</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5041299008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e850d882-42f</externalid>
      <Title>Research Engineer, Production Model Post-Training</Title>
      <Description><![CDATA[<p>As a Research Engineer on our Post-Training team, you&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies.</p>
<p>You&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>
<p>Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p>We conduct all interviews in Python, and this role may require responding to incidents on short notice, including on weekends.</p>
<p>Responsibilities:</p>
<ul>
<li>Implement and optimize post-training techniques at scale on frontier models</li>
<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>
<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>
<li>Develop tools to measure and improve model performance across various dimensions</li>
<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>
<li>Debug complex issues in training pipelines and model behavior</li>
<li>Help establish best practices for reliable, reproducible model post-training</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Thrive in controlled chaos and are energized, rather than overwhelmed, when juggling multiple urgent priorities</li>
<li>Adapt quickly to changing priorities</li>
<li>Maintain clarity when debugging complex, time-sensitive issues</li>
<li>Have strong software engineering skills with experience building complex ML systems</li>
<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>
<li>Have experience with training, fine-tuning, or evaluating large language models</li>
<li>Can balance research exploration with engineering rigor and operational reliability</li>
<li>Are adept at analyzing and debugging model training processes</li>
<li>Enjoy collaborating across research and engineering disciplines</li>
<li>Can navigate ambiguity and make progress in fast-moving research environments</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with LLMs</li>
<li>Have a keen interest in AI safety and responsible deployment</li>
</ul>
<p>We welcome candidates at various experience levels, with a preference for senior engineers who have hands-on experience with frontier AI systems.</p>
<p>Regardless of level, proficiency in Python, deep learning frameworks, and distributed computing is required for this role.</p>
<p>The annual compensation range for this role is $350,000-$500,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$500,000 USD</Salaryrange>
      <Skills>Python, Deep learning frameworks, Distributed computing, ML systems, Large-scale distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, LLMs, AI safety and responsible deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4613592008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b0c17b4f-3f4</externalid>
      <Title>Research Engineer, Production Model Post-Training</Title>
      <Description><![CDATA[<p>About Anthropic</p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p>About the role</p>
<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>
<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>
<p>Responsibilities</p>
<ul>
<li>Implement and optimize post-training techniques at scale on frontier models</li>
<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>
<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>
<li>Develop tools to measure and improve model performance across various dimensions</li>
<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>
<li>Debug complex issues in training pipelines and model behavior</li>
<li>Help establish best practices for reliable, reproducible model post-training</li>
</ul>
<p>You may be a good fit if you:</p>
<ul>
<li>Thrive in controlled chaos and are energised, rather than overwhelmed, when juggling multiple urgent priorities</li>
<li>Adapt quickly to changing priorities</li>
<li>Maintain clarity when debugging complex, time-sensitive issues</li>
<li>Have strong software engineering skills with experience building complex ML systems</li>
<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>
<li>Have experience with training, fine-tuning, or evaluating large language models</li>
<li>Can balance research exploration with engineering rigor and operational reliability</li>
<li>Are adept at analyzing and debugging model training processes</li>
<li>Enjoy collaborating across research and engineering disciplines</li>
<li>Can navigate ambiguity and make progress in fast-moving research environments</li>
</ul>
<p>Strong candidates may also:</p>
<ul>
<li>Have experience with LLMs</li>
<li>Have a keen interest in AI safety and responsible deployment</li>
</ul>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Deep learning frameworks, Distributed computing, Large-scale distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, Software engineering, Complex ML systems, LLMs, AI safety and responsible deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that aims to create reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5112018008</Applyto>
      <Location>Zürich, CH</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9e31d415-b27</externalid>
      <Title>Partner Commercial Programs Lead</Title>
<Description><![CDATA[<p>As our Partner Commercial Programs Lead, you&#39;ll design and govern the commercial programs that partner channels run on - the incentive structures, eligibility criteria, and deal protection frameworks that determine how partners go to market with us - and manage the governance cadence that keeps them current as the business scales.</p>
<p>This is a foundational, global role sitting at the intersection of Partnerships, Finance, and Legal. You&#39;ll serve as the point of resolution for questions of deal attribution, incentive eligibility, and channel conflict, and codify decision criteria so the program scales consistently.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and own the commercial framework spanning all partner channels and geographies: incentive structures, discount and margin frameworks, eligibility criteria, rules of engagement, and deal registration protection</li>
</ul>
<ul>
<li>Run quarterly incentive qualification and governance: validate threshold attainment, resolve attribution disputes, and determine payout eligibility before handoff to Finance</li>
</ul>
<ul>
<li>Adjudicate channel conflict, contested deal registrations, and dual-partner claims against documented criteria - and evolve those criteria as new edge cases surface</li>
</ul>
<ul>
<li>Govern partner investment programs (market development funds, co-investment funds, and similar), deciding eligible activity, matching ratios, and at-risk vs. opportunistic deployment</li>
</ul>
<ul>
<li>Track bilateral obligations with strategic partners - both what Anthropic has committed and what partners owe back - and trigger cure, renegotiation, or escalation when commitments slip</li>
</ul>
<ul>
<li>Act as the primary commercial interface to Legal and Finance, translating partner business needs into contract terms, rebate structures, and localised commercial instruments</li>
</ul>
<ul>
<li>Maintain the authoritative source document for partner commercial terms, holding the line against ad-hoc carve-outs while knowing when the framework genuinely needs to evolve</li>
</ul>
<ul>
<li>Codify decision criteria so that exception handling scales beyond you as the partner program grows</li>
</ul>
<p>You may be a good fit if you have</p>
<ul>
<li>12+ years in partner operations, channel operations, commercial strategy, deal desk, or pricing - with direct ownership of multi-channel commercial frameworks across system integrators, resellers, and/or marketplace partners</li>
</ul>
<ul>
<li>Hands-on experience managing market development funds, partner co-investment programs, or equivalent partner funding vehicles</li>
</ul>
<ul>
<li>A track record of running incentive governance at a regular cadence, where your calls held up under pushback from both sales teams and partners</li>
</ul>
<ul>
<li>Commercial and legal fluency: you&#39;ve drafted commercial terms, rebate structures, and discount frameworks, and worked as the translator between business needs and Legal/Finance constraints</li>
</ul>
<ul>
<li>Demonstrated ability to scale exception handling, turning one-off judgment calls into documented criteria that others can apply consistently</li>
</ul>
<ul>
<li>Comfort operating as a global individual contributor across time zones, with occasional travel to partner and regional team locations</li>
</ul>
<ul>
<li>Sound judgment and a calm, decisive approach when two parties both have a reasonable case and someone has to make the call</li>
</ul>
<p>Strong candidates may also have</p>
<ul>
<li>Experience standing up partner commercial frameworks from zero at a high-growth technology company</li>
</ul>
<ul>
<li>Background in consumption-based or usage-based business models and the incentive design challenges they create</li>
</ul>
<ul>
<li>Exposure to international partner programs requiring localised pricing, legal instruments, and currency/payment terms</li>
</ul>
<ul>
<li>Familiarity with partner relationship management (PRM) systems and how commercial rules get operationalised in tooling</li>
</ul>
<ul>
<li>Interest in the responsible deployment of frontier AI and how partner ecosystems shape who gets access to it</li>
</ul>
<p>The annual compensation range for this role is £195,000-£225,000 GBP.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£195,000-£225,000 GBP</Salaryrange>
      <Skills>Commercial strategy, Deal desk, Pricing, Market development funds, Partner co-investment programs, Incentive governance, Commercial and legal fluency, Exception handling, Consumption-based business models, Usage-based business models, International partner programs, Partner relationship management (PRM) systems, Responsible deployment of frontier AI</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that focuses on creating reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5171200008</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>368082f3-20f</externalid>
      <Title>Account Executive, Mid Market - UKI</Title>
<Description><![CDATA[<p>As a Mid Market Account Executive at Anthropic, you&#39;ll drive adoption of safe, frontier AI across EMEA, selling into companies of roughly 500 to 2,500 employees, some already building with AI and others just beginning to adopt it.</p>
<p>You&#39;ll bring a consultative sales approach to a wide range of buyers, from engineering and product leaders evaluating the technology to operations and commercial leaders focused on measurable ROI. In close partnership with GTM, product, and marketing, you&#39;ll help sharpen our value proposition, sales motion, and positioning for the mid-market.</p>
<p>The ideal candidate is energised by meeting customers wherever they are on the AI adoption curve - across industries, company types, and levels of technical maturity. You&#39;ll build consensus among diverse stakeholders and execute strategies that drive sustainable, responsible adoption of Anthropic&#39;s technology.</p>
<p>Responsibilities:</p>
<ul>
<li>Drive new business revenue by navigating complex organisations to reach decision-makers and educate them on practical AI applications</li>
<li>Execute across a range of buying motions, from fast, product-led technical evaluations to multi-stakeholder procurement, to exceed revenue quota</li>
<li>Identify use cases across product, engineering, and operational functions, and collaborate cross-functionally to position Claude as a practical solution</li>
<li>Build consensus among engineering and product leaders, C-suite executives, IT, operations, and procurement teams around AI adoption</li>
<li>Gather customer feedback to inform product roadmaps and sharpen value propositions for mid-market organisations</li>
<li>Refine our mid-market sales methodology by feeding learnings into playbooks and optimising processes across a range of cycle lengths and buyer types</li>
</ul>
<p>You may be a good fit if you have:</p>
<ul>
<li>8+ years of B2B software sales experience, with 5+ years closing in mid-market or enterprise accounts</li>
<li>Experience selling into the mid-market across any sector - SaaS, infrastructure, vertical software, financial services, healthcare, manufacturing, or otherwise. We care about the selling muscle and the buyer complexity you&#39;ve handled, not the specific industry</li>
<li>A track record of closing $100K–$5M deals across cycle lengths ranging from weeks (product-led, technical buyers) to quarters (consensus-driven procurement)</li>
<li>Proven ability to navigate complex procurement processes and build consensus among diverse stakeholder groups</li>
<li>A consultative selling approach that meets buyers where they are, going deep with technical evaluators and translating to business outcomes with commercial stakeholders</li>
<li>A history of exceeding quota while managing a mixed book of fast-moving and complex accounts</li>
<li>Strong communication skills, with range to engage audiences from technical teams to C-level executives</li>
<li>Credibility with technical stakeholders: you&#39;ve sold to engineering or IT leaders, held your own in a technical evaluation, and partnered closely with solutions engineering without hiding behind them</li>
<li>The ability to articulate ROI frameworks and demonstrate measurable business outcomes</li>
<li>A passion for AI and commitment to its safe, responsible deployment</li>
<li>Comfort building in ambiguity: this is an early GTM team in EMEA and the motion is still being shaped. You&#39;ll help shape it</li>
</ul>
<p>Annual compensation range for this role is €155,000-€205,000 EUR.</p>
<p>Logistics</p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different</p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p>Come work with us!</p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>€155,000-€205,000 EUR</Salaryrange>
      <Skills>B2B software sales experience, Mid-market sales, Complex procurement processes, Consultative selling approach, Technical stakeholders, ROI frameworks, Measurable business outcomes, AI safety and responsible deployment</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4948535008</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>453f53c5-e0d</externalid>
      <Title>Research Engineer, AI Observability</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>As AI training and deployments scale, the volume of data we need to monitor and understand is exploding. Our team uses Claude itself to make sense of this data. We own an integrated set of tools enabling Anthropic to ask open-ended questions, surface unexpected patterns, and maintain meaningful human oversight over massive datasets.</p>
<p>Our tools are widely adopted internally — powering ongoing enforcement, threat intelligence investigations, model audits, and more — and we’re looking for experienced engineers and researchers to both scale up existing applications and go zero-to-one on new ones.</p>
<p><strong>About the Role</strong></p>
<p>As a Research Engineer on our team, you&#39;ll design and build systems that let AI analyse large, unstructured datasets — think tens or hundreds of thousands of conversations or documents — and produce structured, trustworthy insights. You&#39;ll work across the full stack, from core analysis frameworks through user-facing apps and interfaces.</p>
<p>This is a high-leverage role. The tools you build will be used by dozens of researchers and investigators, and directly shape our ability to measure and mitigate both misuse and misalignment.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and implement AI-based monitoring systems for AI training and deployment</li>
<li>Extend and improve core frameworks for processing large volumes of unstructured text</li>
<li>Partner with researchers and safety teams across Anthropic to understand their analytical needs and build solutions</li>
<li>Develop agentic integrations that allow AI systems to autonomously investigate and act on analytical findings</li>
<li>Contribute to the strategic direction of the team, including decisions about what to build, what to partner on, and where to invest</li>
</ul>
<p><strong>You May Be a Good Fit If You:</strong></p>
<ul>
<li>Have 5+ years of software engineering experience, with meaningful exposure to ML systems</li>
<li>Are excited about the problem of scaling human oversight of AI systems</li>
<li>Are familiar with LLM application development (context engineering, evaluation, orchestration)</li>
<li>Enjoy building tools that other people use — you care about UX, reliability, and documentation</li>
<li>Can context-switch between deep infrastructure work and user-facing product thinking</li>
<li>Thrive in collaborative, cross-functional environments</li>
</ul>
<p><strong>Strong Candidates May Also Have:</strong></p>
<ul>
<li>Research experience in AI safety, alignment, or responsible deployment</li>
<li>Practical experience with both data science and engineering, including developing and using large-scale data processing frameworks</li>
<li>Experience with productionizing internal tools or building developer-facing platforms</li>
<li>Background in building monitoring or observability systems</li>
<li>Comfort with ambiguity — our team is small and growing, and you&#39;ll help define what we become</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>software engineering, ML systems, LLM application development, context engineering, evaluation, orchestration, UX, reliability, documentation, data science, large-scale data processing frameworks, productionizing internal tools, developer-facing platforms, monitoring and observability systems, AI safety, alignment, responsible deployment, comfort with ambiguity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. Our team is a group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5125083008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>617efd60-cc2</externalid>
      <Title>Strategic Account Executive, Investment Banking &amp; Capital Markets</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>Responsibilities</strong></p>
<p>As an Account Executive focused on Investment Banking &amp; Capital Markets at Anthropic, you&#39;ll be part of the foundational team bringing frontier AI to one of the most complex and high-stakes sectors in finance. You&#39;ll drive adoption of Claude across investment banks, capital markets firms, asset managers, and sell-side research institutions—helping them transform workflows in deal execution, research production, trading operations, and client advisory.</p>
<p>You&#39;ll leverage deep consultative sales expertise and sector knowledge to secure strategic enterprise deals while becoming a trusted partner to stakeholders navigating AI deployment in highly regulated environments. In collaboration with GTM, Product, Policy, and Marketing teams, you&#39;ll shape our approach to this critical vertical and help define how AI transforms capital markets.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Own the full sales cycle from prospecting through close, winning new business and driving revenue within investment banking and capital markets accounts. Navigate complex organisational structures to reach decision-makers across front office, middle office, and technology functions.</li>
<li>Design and execute sales strategies tailored to the unique procurement dynamics, budget cycles, and risk considerations of capital markets institutions. Translate market intelligence into targeted account plans and campaigns.</li>
<li>Identify and develop new use cases across investment banking workflows—M&amp;A analysis, equity research, fixed income trading, compliance, and client reporting—collaborating cross-functionally to differentiate our offerings.</li>
<li>Build consensus across complex stakeholder ecosystems including Managing Directors, technology leadership, risk and compliance officers, and procurement teams.</li>
<li>Serve as the voice of the customer internally, gathering feedback from users and conveying market needs to inform product roadmaps, security requirements, and go-to-market positioning.</li>
<li>Contribute to the evolution of our financial services sales methodology by documenting learnings, refining playbooks, and identifying process improvements that drive productivity and consistency.</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>7+ years of enterprise B2B sales experience, with significant time selling into investment banks, capital markets firms, or asset managers</li>
<li>A track record of closing complex, six- and seven-figure deals within financial institutions by navigating both technical requirements and business use cases</li>
<li>Deep familiarity with how investment banks and capital markets firms buy technology—including vendor risk assessments, security reviews, and multi-stakeholder approval processes</li>
<li>Experience negotiating enterprise agreements within financial services procurement frameworks, including navigating legal, compliance, and infosec requirements</li>
<li>Proven history of exceeding revenue targets by effectively managing pipeline and executing a disciplined sales process</li>
<li>Strong executive presence and the ability to present confidently to audiences ranging from analysts and associates to C-suite executives</li>
<li>Understanding of investment banking and capital markets workflows, pain points, and competitive dynamics</li>
<li>A strategic, analytical mindset combined with creative tactical execution</li>
<li>Genuine enthusiasm for AI and its potential to transform financial services, paired with appreciation for the importance of safe and responsible deployment</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t hesitate to reach out to us directly.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$290,000 - $435,000 USD</Salaryrange>
      <Skills>Enterprise B2B sales experience, Investment banking and capital markets knowledge, Vendor risk assessments, Security reviews, Multi-stakeholder approval processes, Enterprise agreements, Financial services procurement frameworks, Legal, Compliance, Infosec requirements, Revenue targets, Pipeline management, Disciplined sales process, Executive presence, Investment banking and capital markets workflows, Pain points, Competitive dynamics, Strategic mindset, Analytical mindset, Creative tactical execution, Enthusiasm for AI, Appreciation for safe and responsible deployment</Skills>
      <Category>Sales</Category>
      <Industry>Finance</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. The company&apos;s team includes researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5041290008</Applyto>
      <Location>New York City, NY; San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>a97094d0-e90</externalid>
      <Title>Research Engineer, Production Model Post-Training</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>
<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>
<p><em>Note: For this role, we conduct all interviews in Python. This role may require responding to incidents on short notice, including on weekends.</em></p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Implement and optimize post-training techniques at scale on frontier models</li>
<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>
<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>
<li>Develop tools to measure and improve model performance across various dimensions</li>
<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>
<li>Debug complex issues in training pipelines and model behavior</li>
<li>Help establish best practices for reliable, reproducible model post-training</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Thrive in controlled chaos and are energized, rather than overwhelmed, when juggling multiple urgent priorities</li>
<li>Adapt quickly to changing priorities</li>
<li>Maintain clarity when debugging complex, time-sensitive issues</li>
<li>Have strong software engineering skills with experience building complex ML systems</li>
<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>
<li>Have experience with training, fine-tuning, or evaluating large language models</li>
<li>Can balance research exploration with engineering rigor and operational reliability</li>
<li>Are adept at analyzing and debugging model training processes</li>
<li>Enjoy collaborating across research and engineering disciplines</li>
<li>Can navigate ambiguity and make progress in fast-moving research environments</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Have experience with LLMs</li>
<li>Have a keen interest in AI safety and responsible deployment</li>
</ul>
<p>We welcome candidates at various experience levels, with a preference for senior engineers who have hands-on experience with frontier AI systems. However, proficiency in Python, deep learning frameworks, and distributed computing is required for this role.</p>
<p>The annual compensation range for this role is listed below.</p>
<p>For sales roles, the range provided is the role’s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary:</p>
<p>$350,000 - $500,000 USD</p>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000 - $500,000 USD</Salaryrange>
      <Skills>Python, Deep learning frameworks, Distributed computing, Large-scale distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, Experience with LLMs, AI safety and responsible deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation with a mission to create reliable, interpretable, and steerable AI systems. It aims to build AI systems that are safe and beneficial for users and society as a whole.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4613592008</Applyto>
      <Location>San Francisco, CA; New York City, NY; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>ca30dbae-0f6</externalid>
      <Title>Research Engineer, Production Model Post-Training</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic&#39;s production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. As a Research Engineer on our Post-Training team, you&#39;ll train our base models through the complete post-training stack to deliver the production Claude models that users interact with.</p>
<p>You&#39;ll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models.</p>
<p><em>Note: For this role, we conduct all interviews in Python. This role may require responding to incidents on short notice, including on weekends.</em></p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Implement and optimize post-training techniques at scale on frontier models</li>
<li>Conduct research to develop and optimize post-training recipes that directly improve production model quality</li>
<li>Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation</li>
<li>Develop tools to measure and improve model performance across various dimensions</li>
<li>Collaborate with research teams to translate emerging techniques into production-ready implementations</li>
<li>Debug complex issues in training pipelines and model behavior</li>
<li>Help establish best practices for reliable, reproducible model post-training</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Thrive in controlled chaos and are energized, rather than overwhelmed, when juggling multiple urgent priorities</li>
<li>Adapt quickly to changing priorities</li>
<li>Maintain clarity when debugging complex, time-sensitive issues</li>
<li>Have strong software engineering skills with experience building complex ML systems</li>
<li>Are comfortable working with large-scale distributed systems and high-performance computing</li>
<li>Have experience with training, fine-tuning, or evaluating large language models</li>
<li>Can balance research exploration with engineering rigor and operational reliability</li>
<li>Are adept at analyzing and debugging model training processes</li>
<li>Enjoy collaborating across research and engineering disciplines</li>
<li>Can navigate ambiguity and make progress in fast-moving research environments</li>
</ul>
<p><strong>Strong candidates may also:</strong></p>
<ul>
<li>Have experience with LLMs</li>
<li>Have a keen interest in AI safety and responsible deployment</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience. <strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>
<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Deep learning frameworks, Distributed computing, Large language models, ML systems, High-performance computing, LLMs, AI safety, Responsible deployment</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5112018008</Applyto>
      <Location>Zürich</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
  </jobs>
</source>