{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/model-training-processes"},"x-facet":{"type":"skill","slug":"model-training-processes","display":"Model Training Processes","count":3},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_faee4fa2-887"},"title":"Prompt Engineer, Claude Code","description":"<p>As a Prompt Engineer on the Claude Code team, you&#39;ll own Claude&#39;s behaviours specifically within Claude Code, ensuring users get a consistent, safe, and high-quality experience as we ship new models and evolve the product.</p>\n<p>This is a highly specialized role sitting at the intersection of model behaviour and product quality. You&#39;ll be the expert on how Claude behaves inside Claude Code, owning and maintaining the system prompts that ship with each new model snapshot. When a new model drops, you&#39;re the person making sure Claude Code feels right within days, not weeks.</p>\n<p>You&#39;ll work closely with Model Quality and Research to understand emergent behaviours and behavioural regressions, and with product and safeguards teams to respond quickly when something goes wrong.</p>\n<p>This role requires someone who can move fast on behavioural tuning while maintaining rigour, and who cares deeply about the end-to-end developer experience Claude Code delivers. 
You&#39;ll need strong prompting skills, excellent judgment about model behaviours, and the collaborative skills to work across product, safeguards, and research teams.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own Claude Code&#39;s system prompts for each new model snapshot, ensuring behaviours feel consistent and well-tuned</li>\n<li>Review production prompt changes and serve as a resource for particularly challenging prompting problems involving alignment and reputational risks</li>\n<li>Lead incident response for behavioural and policy concerns, coordinating with product and safeguards teams</li>\n<li>Scale prompting and evaluation best practices across Claude Code and product teams</li>\n<li>Deliver product evaluations focused on model behaviours</li>\n<li>Define and streamline processes for rolling out prompt changes, including launch criteria and review practices</li>\n<li>Create model-specific prompt guides that document quirks and optimal prompting strategies for each release</li>\n<li>Collaborate with product teams to translate feature requirements into effective prompts</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Are a power user of agentic coding tools and have strong intuition about model capabilities and limitations</li>\n<li>Thrive in high-intensity environments with fast iteration cycles</li>\n<li>Take full ownership of problems and drive them to completion independently</li>\n<li>Are skilled at creating and maintaining behavioural evaluations</li>\n<li>Have strong technical understanding, including comprehension of agent scaffold architectures and model training processes</li>\n<li>Are an experienced coder comfortable working in Python and TypeScript</li>\n<li>Have independently driven changes through production systems with strong execution and responsiveness</li>\n<li>Have experience translating user feedback and product needs into coherent prompts and behavioural specifications</li>\n<li>Excel at working across organisational 
boundaries, collaborating effectively with teams that have differing goals and perspectives</li>\n<li>Care deeply about AI safety and making Claude a healthy alternative in the AI landscape</li>\n</ul>\n<p>Annual compensation range for this role is $300,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_faee4fa2-887","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5159669008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["prompt engineering","model behaviour","product quality","agentic coding tools","Python","TypeScript","collaboration","incident response","process definition","evaluation best practices"],"x-skills-preferred":["AI safety","model training processes","agent scaffold architectures","behavioural evaluations","user feedback analysis"],"datePosted":"2026-04-18T15:59:03.936Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"prompt engineering, model behaviour, product quality, agentic coding tools, Python, TypeScript, collaboration, incident response, process definition, evaluation best practices, AI safety, model training processes, agent scaffold architectures, behavioural evaluations, user feedback 
analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cc85960d-49e"},"title":"Prompt Engineer, Claude Code","description":"<p>As a Prompt Engineer on the Claude Code team, you&#39;ll own Claude&#39;s behaviours specifically within Claude Code, ensuring users get a consistent, safe, and high-quality experience as we ship new models and evolve the product.</p>\n<p>This is a highly specialized role sitting at the intersection of model behaviour and product quality. You&#39;ll be the expert on how Claude behaves inside Claude Code, owning and maintaining the system prompts that ship with each new model snapshot. When a new model drops, you&#39;re the person making sure Claude Code feels right within days, not weeks.</p>\n<p>You&#39;ll work closely with Model Quality and Research to understand emergent behaviours and behavioural regressions, and with product and safeguards teams to respond quickly when something goes wrong.</p>\n<p>This role requires someone who can move fast on behavioural tuning while maintaining rigour, and who cares deeply about the end-to-end developer experience Claude Code delivers. 
You&#39;ll need strong prompting skills, excellent judgment about model behaviours, and the collaborative skills to work across product, safeguards, and research teams.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own Claude Code&#39;s system prompts for each new model snapshot, ensuring behaviours feel consistent and well-tuned</li>\n<li>Review production prompt changes and serve as a resource for particularly challenging prompting problems involving alignment and reputational risks</li>\n<li>Lead incident response for behavioural and policy concerns, coordinating with product and safeguards teams</li>\n<li>Scale prompting and evaluation best practices across Claude Code and product teams</li>\n<li>Deliver product evaluations focused on model behaviours</li>\n<li>Define and streamline processes for rolling out prompt changes, including launch criteria and review practices</li>\n<li>Create model-specific prompt guides that document quirks and optimal prompting strategies for each release</li>\n<li>Collaborate with product teams to translate feature requirements into effective prompts</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Are a power user of agentic coding tools and have strong intuition about model capabilities and limitations</li>\n<li>Thrive in high-intensity environments with fast iteration cycles</li>\n<li>Take full ownership of problems and drive them to completion independently</li>\n<li>Are skilled at creating and maintaining behavioural evaluations</li>\n<li>Have strong technical understanding, including comprehension of agent scaffold architectures and model training processes</li>\n<li>Are an experienced coder comfortable working in Python and TypeScript</li>\n<li>Have independently driven changes through production systems with strong execution and responsiveness</li>\n<li>Have experience translating user feedback and product needs into coherent prompts and behavioural specifications</li>\n<li>Excel at working across organisational 
boundaries, collaborating effectively with teams that have differing goals and perspectives</li>\n<li>Care deeply about AI safety and making Claude a healthy alternative in the AI landscape</li>\n</ul>\n<p>Annual compensation range for this role is $300,000-$405,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cc85960d-49e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5159669008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["Python","TypeScript","Agentic coding tools","Model training processes","Agent scaffold architectures"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:43:37.877Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, TypeScript, Agentic coding tools, Model training processes, Agent scaffold architectures","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b47fc91b-597"},"title":"Anthropic Fellows Program — ML Systems & Performance","description":"<p>The Anthropic Fellows Program is a 4-month full-time research opportunity designed to foster AI research and engineering talent. 
We provide funding and mentorship to promising technical talent, regardless of previous experience. Fellows will primarily use external infrastructure to work on an empirical project aligned with our research priorities, with the goal of producing a public output. In one of our earlier cohorts, over 80% of fellows produced papers.</p>\n<p>We run multiple cohorts of Fellows each year and review applications on a rolling basis. This application is for cohorts starting in July 2026 and beyond.</p>\n<p>As a Fellow, you will receive:</p>\n<ul>\n<li>Direct mentorship from Anthropic researchers</li>\n<li>Access to a shared workspace in either Berkeley, California or London, UK</li>\n<li>Connection to the broader AI safety and security research community</li>\n<li>A weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, plus benefits</li>\n<li>Funding for compute and other research expenses</li>\n</ul>\n<p>The interview process will include an initial application and reference check, technical assessments and interviews, and a research discussion.</p>\n<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>\n<p>The expected base stipend for this role is $3,850 USD / £2,310 GBP / $4,300 CAD per week, with an expectation of 40 hours per week for 4 months (with possible extension).</p>\n<p>Fellows will undergo a project selection and mentor matching process. Potential mentors include Alwin Peng and Zygi Straznickas. 
For a past example of an engineering-heavy project, see &#39;AI agents find $4.6M in blockchain smart contract exploits&#39;.</p>\n<p>Projects in this workstream may include building a CPU simulator for accelerator workloads, adding backends for different accelerators on an open-source project, building on-demand infrastructure for other infrastructure-heavy Fellows projects, and building complex synthetic data or environment pipelines.</p>\n<p>To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program. Workspace locations are in London and Berkeley, and we are open to remote fellows in the UK, US, or Canada.</p>\n<p>We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit for full-time roles at Anthropic. In previous cohorts, 25-50% of fellows received a full-time offer, and we&#39;ve supported many more to go on to do great work on AI safety and security at other organisations.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b47fc91b-597","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5183051008","x-work-arrangement":"remote","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python programming","Software engineering","Complex ML systems","Distributed systems","High-performance computing","Training, fine-tuning, or evaluating large language models","Analyzing and debugging model training processes"],"x-skills-preferred":["Experience with training, fine-tuning, or evaluating large language models","Adept at analyzing and 
debugging model training processes","Strong background in a discipline relevant to a specific Fellows workstream","Experience in areas of research or engineering related to their workstream"],"datePosted":"2026-04-18T15:34:47.218Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK; Ontario, CAN; Remote-Friendly, United States; San Francisco, CA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python programming, Software engineering, Complex ML systems, Distributed systems, High-performance computing, Training, fine-tuning, or evaluating large language models, Analyzing and debugging model training processes, Experience with training, fine-tuning, or evaluating large language models, Adept at analyzing and debugging model training processes, Strong background in a discipline relevant to a specific Fellows workstream, Experience in areas of research or engineering related to their workstream"}]}