{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/research-paper-authoring"},"x-facet":{"type":"skill","slug":"research-paper-authoring","display":"Research Paper Authoring","count":1},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fab21c7e-6bf"},"title":"Research Engineer / Scientist, Alignment Science - London","description":"<p>About the role:</p>\n<p>You will contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems. As a Research Engineer on Alignment Science, you&#39;ll work on creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Conduct research on AI control and alignment stress-testing</li>\n<li>Develop and implement new techniques for ensuring AI safety</li>\n<li>Collaborate with other teams, including Interpretability, Fine-Tuning, and the Frontier Red Team</li>\n<li>Test and evaluate the effectiveness of AI safety techniques</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Significant software, ML, or research engineering experience</li>\n<li>Familiarity with technical AI safety research</li>\n<li>Experience contributing to empirical AI research projects</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience authoring research papers in machine learning, NLP, or AI safety</li>\n<li>Experience with LLMs</li>\n<li>Experience with reinforcement learning</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive compensation and benefits</li>\n<li>Optional equity donation matching</li>\n<li>Generous vacation and parental leave</li>\n<li>Flexible working hours</li>\n</ul>\n<p>Note:</p>\n<p>This role requires all candidates to be based at least 25% in London and travel to San Francisco occasionally.</p>","url":"https://yubhub.co/jobs/job_fab21c7e-6bf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4610158008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"£260,000-£370,000 GBP","x-skills-required":["software engineering","machine learning","research engineering","AI safety","technical AI safety research"],"x-skills-preferred":["research paper authoring","LLMs","reinforcement learning"],"datePosted":"2026-04-18T15:55:40.617Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, UK"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, machine learning, research engineering, AI safety, technical AI safety research, research paper authoring, LLMs, reinforcement learning","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":260000,"maxValue":370000,"unitText":"YEAR"}}}]}