{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/policy-frameworks"},"x-facet":{"type":"skill","slug":"policy-frameworks","display":"Policy Frameworks","count":2},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f931591c-87a"},"title":"Research Scientist, Frontier Risk Evaluations","description":"<p>As a Research Scientist focused on Frontier Risk Evaluations, you will design and create evaluation measures, harnesses and datasets for measuring the risks posed by frontier AI systems.</p>\n<p>For example, you might do any or all of the following:</p>\n<ul>\n<li>Design and build harnesses to test AI models and systems (including agents) for dangerous capabilities such as security vulnerability exploitation, CBRN uplift, and other high-risk activities;</li>\n<li>Work with government agencies or other labs to collectively scope and design evaluations to measure and mitigate risks posed by advanced AI systems;</li>\n<li>Publish evaluation methodologies and write technical reports for policymakers.</li>\n</ul>\n<p>We are seeking talented researchers to join us in shaping this vision.</p>\n<p>Ideally you&#39;d have:</p>\n<ul>\n<li>Commitment to our mission of promoting safe, secure, and trustworthy AI deployments in the industry as frontier AI capabilities continue to advance;</li>\n<li>Practical experience conducting technical research collaboratively. You should be comfortable building and instrumenting ML pipelines, writing evaluation harnesses, and quickly turning new ideas from the research literature into working prototypes;</li>\n<li>A track record of published research in machine learning, particularly in generative AI;</li>\n<li>At least three years of experience addressing sophisticated ML problems, whether in a research setting or in product development;</li>\n<li>Strong written and verbal communication skills to operate in a cross-functional team.</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience in crafting evaluations and benchmarks, or a background in data science roles related to LLM technologies;</li>\n<li>Experience with red-teaming or adversarial testing of AI systems;</li>\n<li>Familiarity with AI safety policy frameworks (e.g., NIST AI RMF, EU AI Act, Korea AI Basic Act).</li>\n</ul>\n<p>Our research interviews are crafted to assess candidates&#39; skills in practical ML prototyping and debugging, their grasp of research concepts, and their alignment with our organisational culture. We will not ask any LeetCode-style questions. If you’re excited about advancing AI safety and contributing to our mission, we encourage you to apply, even if your experience doesn’t perfectly align with every requirement.</p>\n<p>Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval.
Your recruiter can share more about the specific salary range for your preferred location during the hiring process, and confirm whether the hired role will be eligible for an equity grant. You’ll also receive benefits including, but not limited to: comprehensive health, dental and vision coverage, retirement benefits, a learning and development stipend, and generous PTO. This role may also be eligible for additional benefits such as a commuter stipend.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f931591c-87a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Scale","sameAs":"https://scale.com/","logo":"https://logos.yubhub.co/scale.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/scaleai/jobs/4677657005","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$216,000-$270,000 USD","x-skills-required":["machine learning","generative AI","ML pipelines","evaluation harnesses","AI safety policy frameworks"],"x-skills-preferred":["crafting evaluations and benchmarks","data science roles related to LLM technologies","red-teaming or adversarial testing of AI systems"],"datePosted":"2026-04-18T15:58:57.212Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; New York, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, generative AI, ML pipelines, evaluation harnesses, AI safety policy frameworks, crafting evaluations and benchmarks, data science roles related to LLM technologies, red-teaming or adversarial testing of AI
systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":270000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c76d0c6d-ec7"},"title":"Technical Policy Manager, Cyber Harms","description":"<p><strong>About the Role:</strong></p>\n<p>We are looking for a cybersecurity expert to lead our efforts to prevent AI misuse in the cyber domain. As a Cyber Harms Technical Policy Manager, you will lead a team applying deep technical expertise to inform the design of safety systems that detect harmful cyber behaviours and prevent misuse by sophisticated threat actors.</p>\n<p><strong>In this role, you will:</strong></p>\n<ul>\n<li>Lead and grow a team of technical specialists focused on cyber threat modelling and evaluation frameworks</li>\n<li>Design and oversee execution of capability evaluations (&#39;evals&#39;) to assess the cyber-relevant capabilities of new models</li>\n<li>Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques</li>\n<li>Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms</li>\n<li>Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies</li>\n<li>Collaborate closely with internal and external threat modelling experts to develop training data for safety systems, and with ML engineers to train these systems, optimising for both robustness against adversarial attacks and low false-positive rates for legitimate security researchers</li>\n<li>Analyse safety system performance in traffic, identifying gaps and proposing improvements</li>\n<li>Conduct regular reviews of existing policies and enforcement systems to identify and address gaps and 
ambiguities related to cybersecurity risks</li>\n<li>Develop rigorous stress-testing of safeguards against evolving cyber threats and product surfaces</li>\n<li>Partner with Research, Product, Policy, Security Team, and Frontier Red Team to ensure cybersecurity safety is embedded throughout the model development lifecycle</li>\n<li>Translate cybersecurity domain knowledge into actionable safety requirements and clearly articulated policies</li>\n<li>Contribute to external communications, including model cards, blog posts, and policy documents related to cybersecurity safety</li>\n<li>Monitor emerging technologies and evolving threat landscapes for new risks and mitigation opportunities, and address them strategically</li>\n<li>Mentor and develop team members, fostering a culture of technical excellence and responsible AI development</li>\n</ul>\n<p><strong>You may be a good fit if you have:</strong></p>\n<ul>\n<li>An M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity</li>\n<li>5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing</li>\n<li>2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders</li>\n<li>Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)</li>\n<li>Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)</li>\n<li>Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks</li>\n<li>Familiarity with responsible disclosure practices, vulnerability coordination, and
cybersecurity frameworks (e.g., MITRE ATT&amp;CK, NIST Cybersecurity Framework, CWE/CVE systems)</li>\n<li>Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders</li>\n<li>Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases</li>\n<li>A passion for learning new skills and an ability to rapidly adapt to changing techniques and technologies</li>\n<li>Comfort working in a fast-paced environment where priorities may shift as AI capabilities evolve</li>\n<li>Track record of translating specialised technical knowledge into actionable safety policies or enforcement guidelines</li>\n</ul>\n<p><strong>Preferred Qualifications:</strong></p>\n<ul>\n<li>Background in AI/ML systems, particularly experience with large language models</li>\n<li>Experience developing ML-based security systems or adversarial ML research</li>\n<li>Experience working with defence, intelligence, or security organisations (e.g., NSA, CISA, national labs, security contractors)</li>\n<li>Published security research, disclosed vulnerabilities, or participation in bug bounty programs</li>\n<li>Understanding of Trust &amp; Safety operations and content moderation at scale</li>\n<li>Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth</li>\n<li>Understanding of dual-use security research concerns and ethical considerations in AI safety</li>\n</ul>","url":"https://yubhub.co/jobs/job_c76d0c6d-ec7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5066981008","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Not specified","x-skills-required":["cybersecurity","vulnerability research","exploit development","network security","malware analysis","penetration testing","scientific computing","data analysis","programming (Python)","threat modelling","policy frameworks","responsible disclosure practices","vulnerability coordination","cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems)"],"x-skills-preferred":["AI/ML systems","large language models","ML-based security systems","adversarial ML research","defence, intelligence, or security organisations","NSA, CISA, national labs, security contractors","published security research","disclosed vulnerabilities","bug bounty programs","Trust & Safety operations","content moderation at scale","OSCP, OSCE, GXPN, or equivalent certifications","dual-use security research concerns","ethical considerations in AI safety"],"datePosted":"2026-03-08T13:50:25.823Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA; Washington, DC"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cybersecurity, vulnerability research, exploit development, network security, malware analysis, penetration testing, scientific computing, data analysis, programming (Python), threat modelling, policy frameworks, responsible disclosure practices, vulnerability coordination, cybersecurity
frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems), AI/ML systems, large language models, ML-based security systems, adversarial ML research, defence, intelligence, or security organisations, NSA, CISA, national labs, security contractors, published security research, disclosed vulnerabilities, bug bounty programs, Trust & Safety operations, content moderation at scale, OSCP, OSCE, GXPN, or equivalent certifications, dual-use security research concerns, ethical considerations in AI safety"}]}