{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/research-workflows"},"x-facet":{"type":"skill","slug":"research-workflows","display":"Research Workflows","count":5},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_905816f2-aaa"},"title":"Fraud Researcher","description":"<p>We believe that the way people interact with their finances will drastically improve in the next few years. As a Senior Fraud Researcher, you will sit at the intersection of live fraud investigation, applied data science, and product innovation. 
You will lead complex investigations, translate findings into detection improvements, and collaborate tightly with Data Science, ML, and Product teams to shape the next generation of Plaid&#39;s fraud capabilities.</p>\n<p><strong>Responsibilities:</strong></p>\n<p><strong>Live Fraud Investigation &amp; Reconstruction</strong></p>\n<ul>\n<li>Lead investigations into complex fraud cases across identities, accounts, devices, and transaction surfaces</li>\n<li>Provide support to day-to-day fraud operations including SEVs and alert triage</li>\n<li>Reconstruct attacker sequences and hypothesize actor intent and tooling</li>\n<li>Distill patterns from noisy signals into clear narratives and actionable insights</li>\n<li>Bridge investigation outcomes to product and model improvements</li>\n</ul>\n<p><strong>Signal &amp; Tool Utilization at Scale</strong></p>\n<ul>\n<li>Operate across Plaid&#39;s fraud tooling (dashboards, alerting systems, network signals, and analytics platforms) to detect and validate anomalies</li>\n<li>Stress-test existing capabilities, identify systemic gaps, and define new detection primitives</li>\n<li>Proactively identify gaps in internal fraud tooling and automation, driving enhancements to improve efficiency and scale</li>\n</ul>\n<p><strong>Product &amp; Model Partnership</strong></p>\n<ul>\n<li>Collaborate with Data Science, ML/AI, and Product teams to improve labeling, feature sets, evaluation frameworks, and model decay monitoring</li>\n<li>Surface data quality limitations and systematically formalize missing features</li>\n<li>Translate exploratory research into reusable feature pipelines, model inputs, or rule augmentations</li>\n<li>Participate in product discovery, roadmap planning, and post-launch evaluation to ensure fraud-awareness by design</li>\n</ul>\n<p><strong>Deep Applied Fraud Research</strong></p>\n<ul>\n<li>Conduct longitudinal and structural analysis of how fraud types manifest in Plaid network data (entity linkages, temporal patterns, attack rotations, tool chains)</li>\n<li>Experiment with network/graph analysis, sequence mining, anomaly detection, and custom heuristics where off-the-shelf approaches fail</li>\n</ul>\n<p><strong>Ecosystem Monitoring &amp; Knowledge Leadership</strong></p>\n<ul>\n<li>Continuously survey external fraud trends, adversary techniques, tooling, and emerging threat vectors</li>\n<li>Proactively perform threat modeling of abuse surfaces and initiate research proposals when patterns emerge</li>\n</ul>\n<p><strong>Case Studies &amp; Reporting</strong></p>\n<ul>\n<li>Produce clear, evidence-backed technical reports and case studies for product, engineering, operations, legal, and executive stakeholders</li>\n<li>Document investigation workflows, attack classifications, and proof-of-concept detection logic</li>\n<li>Drive post-incident learning by capturing lessons from fraud incidents and feeding them back into defenses</li>\n</ul>\n<p><strong>Qualifications:</strong></p>\n<p><strong>Required</strong></p>\n<ul>\n<li>3+ years of applied fraud experience in a high-velocity environment (fintech, consumer payments, banking, SaaS, marketplace risk, or security research)</li>\n<li>Investigator mindset: pattern synthesis, hypothesis testing, and skilled triage between signal and noise</li>\n<li>End-to-end investigation experience reconstructing attacker intent and behavior in multi-step attack sequences across accounts, devices, and identities</li>\n<li>Post-containment incident response experience with a deep emphasis on post-mortems and root cause analysis</li>\n<li>Dark- and grey-web navigation and investigation experience; ability to assess source credibility and translate external intelligence into actionable insights</li>\n<li>Strong communication: ability to explain complex, ambiguous behavior to technical and non-technical audiences</li>\n<li>Tool fluency with data environments and investigative toolchains (BI tools, anomaly detection, case trackers)</li>\n</ul>\n<p><strong>Preferred</strong></p>\n<ul>\n<li>SQL for deep data querying and exploratory analysis</li>\n<li>Python for scripting, rapid prototyping, and analytical workflows</li>\n<li>Graph/network analysis experience to detect linked behavioral structures or actor networks</li>\n<li>Familiarity with rule engines, signal gating, and large-scale monitoring systems</li>\n<li>Experience applying AI tools and agents to accelerate investigations and research workflows</li>\n<li>Ability to translate fraud research into actionable signals, rules, or labeled datasets that improve model performance</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Fraud domain certifications (e.g., CFE)</li>\n<li>Prior work on consumer identity, payments, or risk platform development</li>\n<li>Exposure to production ML model lifecycles and metrics for drift/decay</li>\n<li>Experience improving internal fraud tooling, automation, or case management systems</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>Our mission at Plaid is to unlock financial freedom for everyone. To support that mission, we seek to build a diverse team of driven individuals who care deeply about making the financial ecosystem more equitable. We recognize that strong qualifications can come from both prior work experiences and lived experiences. We encourage you to apply to a role even if your experience doesn&#39;t fully match the job description. We are always looking for team members who will bring something unique to Plaid! Plaid is proud to be an equal opportunity employer and values diversity at our company. We do not discriminate based on race, color, national origin, ethnicity, religion or religious belief, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, military or veteran status, disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local laws. Plaid is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. 
If you need any assistance with your application or interviews due to a disability, please let us know.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_905816f2-aaa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Plaid","sameAs":"https://plaid.com/","logo":"https://logos.yubhub.co/plaid.com.png"},"x-apply-url":"https://jobs.lever.co/plaid/bc5f0459-e9cc-4b1d-b141-a33c60df5f17","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"USD 190,800-286,800 per-year-salary","x-skills-required":["fraud research","data science","machine learning","investigation","pattern synthesis","hypothesis testing","signal and noise","tool fluency","data environments","investigative toolchains","SQL","Python","graph/network analysis","rule engines","signal gating","large-scale monitoring systems"],"x-skills-preferred":["SQL for deep data querying and exploratory analysis","Python for scripting, rapid prototyping, and analytical workflows","graph/network analysis experience to detect linked behavioral structures or actor networks","familiarity with rule engines, signal gating, and large-scale monitoring systems","experience applying AI tools and agents to accelerate investigations and research workflows","ability to translate fraud research into actionable signals, rules, or labeled datasets that improve model performance"],"datePosted":"2026-04-24T16:08:17.361Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"fraud research, data science, machine learning, investigation, pattern synthesis, hypothesis testing, signal and noise, tool fluency, data environments, investigative toolchains, SQL, Python, graph/network analysis, rule engines, signal 
gating, large-scale monitoring systems, SQL for deep data querying and exploratory analysis, Python for scripting, rapid prototyping, and analytical workflows, graph/network analysis experience to detect linked behavioral structures or actor networks, familiarity with rule engines, signal gating, and large-scale monitoring systems, experience applying AI tools and agents to accelerate investigations and research workflows, ability to translate fraud research into actionable signals, rules, or labeled datasets that improve model performance","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":190800,"maxValue":286800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_4fde2d89-11c"},"title":"Research Engineer, Economic Research","description":"<p>As a Research Engineer on the Economic Research team, you will design, build, and maintain critical infrastructure that powers Anthropic&#39;s research on AI&#39;s economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis.</p>\n<p>The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI.</p>\n<p>In this role, you will work closely with teams across Anthropic, including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy, to build scalable and robust data systems that support high-leverage, high-impact research. 
Strong candidates will have a track record of building data processing pipelines, architecting &amp; implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research &amp; technical skills.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build and maintain data pipelines that process large-scale Claude usage logs into canonical, reusable datasets while maintaining user privacy</li>\n<li>Expand privacy-preserving tools to enable new analytic functionality to support research needs</li>\n<li>Design and implement novel data systems leveraging language models (e.g., CLIO) where traditional software engineering patterns don&#39;t yet exist</li>\n<li>Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis</li>\n<li>Contribute to the strategic development of the economic research data foundations roadmap</li>\n<li>Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure</li>\n<li>Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions</li>\n<li>Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards</li>\n<li>Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic’s safety mission</li>\n</ul>\n<p>You might be a good fit if you:</p>\n<ul>\n<li>Have experience working with Research Scientists and Economists on ambiguous AI and economic projects</li>\n<li>Have experience building and maintaining data infrastructure, large datasets, and internal tools in production environments</li>\n<li>Have experience with cloud infrastructure platforms such as AWS or GCP</li>\n<li>Take pride in writing clean, well-documented code in Python that others can build upon</li>\n<li>Are comfortable making technical decisions with incomplete information while maintaining high engineering standards</li>\n<li>Are comfortable getting up to speed quickly on unfamiliar codebases, and can work well with other engineers with different backgrounds across the organization</li>\n<li>Have a track record of using technical infrastructure to interface effectively with machine learning models</li>\n<li>Have experience deriving insights from imperfect data streams</li>\n<li>Have experience building systems and products on top of LLMs</li>\n<li>Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders</li>\n<li>Have a passion for Anthropic&#39;s mission of building helpful, honest, and harmless AI and understanding its economic implications</li>\n<li>Bring a “full-stack mindset”, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description</li>\n<li>Have strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise</li>\n</ul>\n<p>Strong candidates may have:</p>\n<ul>\n<li>Background in econometrics, statistics, or quantitative social science research</li>\n<li>Experience building data infrastructure and data foundations for research</li>\n<li>Familiarity with large language models, AI systems, or ML research workflows</li>\n<li>Prior work on projects related to labor economics, technology adoption, or economic measurement</li>\n</ul>","url":"https://yubhub.co/jobs/job_4fde2d89-11c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5071132008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$300,000-$405,000 USD","x-skills-required":["Python","Data infrastructure","Cloud infrastructure platforms","Machine learning models","Language models","Econometrics","Statistics","Quantitative social science research"],"x-skills-preferred":["LLMs","AI systems","ML research workflows","Labor economics","Technology adoption","Economic measurement"],"datePosted":"2026-04-24T11:26:39.190Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Data infrastructure, Cloud infrastructure platforms, Machine learning models, Language models, Econometrics, Statistics, Quantitative social science research, LLMs, AI systems, ML research workflows, Labor economics, Technology adoption, Economic measurement","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":300000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e4c483d6-b25"},"title":"Research Operations, Discovery","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re seeking a Science Research Operations team member to build and own the operational infrastructure that keeps our research organization running at full speed.</p>\n<p>Our science teams are working on some of the hardest and most consequential problems in AI: training large-scale models, running 
complex experiments, and building novel products at the frontier. What makes that possible isn&#39;t just talent: it&#39;s the coordination, systems, and programs that let researchers spend their time on the science rather than the overhead around it.</p>\n<p>This role sits at the intersection of research operations, technical program management, and product strategy. You&#39;ll work directly with research scientists and research engineers, doing a mix of tasks including running research partnerships, managing complex internal programs, and helping run the team’s day-to-day operations.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Build and manage custom expert contractor networks, sourcing domain specialists for eval and training data work that requires expertise beyond standard channels</li>\n<li>Run research partnerships with external partners, from scoping through delivery</li>\n<li>Provide end-to-end TPM support for major research pushes, coordinating across teams, tracking dependencies, and keeping stakeholders aligned</li>\n<li>Ensure that our research progress is complemented by products that enable scientists to make maximal use of model capabilities</li>\n<li>Support recruiting efforts</li>\n<li>Coordinate external communications for the team, including supporting blog posts and preparing public-facing materials</li>\n<li>Partner with product teams to contribute to science product strategy, product design, and novel product integrations where research and product intersect</li>\n<li>Own team logistics including onboarding coordination, team events, and operational programs that improve team efficiency</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have experience in research operations, technical program management, or a related role in a fast-moving technical environment</li>\n<li>Can context-switch fluidly between operational work (logistics, tracking, coordination) and higher-order work (strategy, 
partnerships, product thinking)</li>\n<li>Have a technical background, with experience in software development, machine learning, or biology R&amp;D</li>\n<li>Are comfortable working directly with research scientists and engineers; you ask good questions, you don&#39;t need things explained twice, and you know when to escalate vs. when to handle it yourself</li>\n<li>Have a track record of building systems and processes from scratch rather than inheriting them</li>\n<li>Bring strong written communication skills and can represent the team accurately in external-facing materials</li>\n<li>Have managed contractors or external partners before, including scoping work, tracking delivery, and ensuring quality</li>\n<li>Are results-oriented, with a bias toward flexibility and impact</li>\n<li>Thrive in ambiguous, fast-moving environments where priorities shift and no two weeks look the same</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Direct experience sourcing and managing expert contractor networks, particularly in technical or scientific domains</li>\n<li>Familiarity with ML research workflows (training runs, evaluations, data pipelines) and what makes them succeed or stall</li>\n<li>Experience contributing to product development or product strategy, not just operations</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the research operations role</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time.</li>\n<li>Visa sponsorship: We do sponsor visas!</li>\n</ul>","url":"https://yubhub.co/jobs/job_e4c483d6-b25","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5188237008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$210,000-$310,000 USD","x-skills-required":["research operations","technical program management","software development","machine learning","biology R&D","contractor management","external partner management","written communication","project coordination","team logistics"],"x-skills-preferred":["expert contractor network management","ML research workflows","product development","product strategy"],"datePosted":"2026-04-18T15:56:46.902Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"research operations, technical program management, software development, machine learning, biology R&D, contractor management, external partner management, written communication, project coordination, team logistics, expert contractor network management, ML research workflows, product development, product strategy","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":210000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_22ff82ac-40b"},"title":"Software Engineer, Research Data Platform","description":"<p>We&#39;re looking for engineers who love working directly with users and who excel at 
building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>\n<p>As a software engineer on this team, you will:</p>\n<ul>\n<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>\n<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>\n<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>\n<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>\n<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>\n</ul>\n<p>You may be a good fit if you have significant software engineering experience, particularly building data-intensive applications or internal tooling. You should enjoy working directly with users, gathering requirements iteratively, and shipping things that get adopted. 
You should also be results-oriented, with a bias towards flexibility and impact.</p>\n<p>Strong candidates may also have experience with: large-scale ETL; columnar storage formats and query engines; high-volume time series data; data cataloging, lineage, or metadata management systems; ML experiment tracking or metrics platforms; complex data visualization; and full-stack web application development.</p>","url":"https://yubhub.co/jobs/job_22ff82ac-40b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5191226008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["software engineering","data-intensive applications","internal tooling","data pipelines","storage systems","APIs","libraries","web interfaces","dataset management","data cataloging","provenance tooling","research workflows","adjacent teams"],"x-skills-preferred":["large-scale ETL","columnar storage formats","query engines","high-volume time series data","lineage","metadata management systems","ML experiment tracking","metrics platforms","complex data visualization","full-stack web application development"],"datePosted":"2026-04-18T15:51:29.293Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"software engineering, data-intensive applications, internal tooling, data pipelines, storage systems, APIs, libraries, web interfaces, dataset management, data cataloging, provenance tooling, research workflows, adjacent teams, large-scale 
ETL, columnar storage formats, query engines, high-volume time series data, lineage, metadata management systems, ML experiment tracking, metrics platforms, complex data visualization, full-stack web application development","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d951cdda-dc5"},"title":"Research Operations, Discovery","description":"<p><strong>About the Role</strong></p>\n<p>We&#39;re seeking a Science Research Operations team member to build and own the operational infrastructure that keeps our research organization running at full speed.</p>\n<p>Our science teams are working on some of the hardest and most consequential problems in AI: training large-scale models, running complex experiments, and building novel products at the frontier. What makes that possible isn&#39;t just talent: it&#39;s the coordination, systems, and programs that let researchers spend their time on the science rather than the overhead around it.</p>\n<p>This role sits at the intersection of research operations, technical program management, and product strategy. 
You&#39;ll work directly with research scientists and research engineers, doing a mix of tasks including running research partnerships, managing complex internal programs, and helping run the team’s day-to-day operations.</p>\n<p><strong>Responsibilities:</strong></p>\n<ul>\n<li>Build and manage custom expert contractor networks, sourcing domain specialists for eval and training data work that requires expertise beyond standard channels</li>\n<li>Run research partnerships with external partners, from scoping through delivery</li>\n<li>Provide end-to-end TPM support for major research pushes, coordinating across teams, tracking dependencies, and keeping stakeholders aligned</li>\n<li>Ensure that our research progress is complemented by products that enable scientists to make maximal use of model capabilities</li>\n<li>Support recruiting efforts</li>\n<li>Coordinate external communications for the team, including supporting blog posts and preparing public-facing materials</li>\n<li>Partner with product teams to contribute to science product strategy, product design, and novel product integrations where research and product intersect</li>\n<li>Own team logistics including onboarding coordination, team events, and operational programs that improve team efficiency</li>\n</ul>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have experience in research operations, technical program management, or a related role in a fast-moving technical environment</li>\n<li>Can context-switch fluidly between operational work (logistics, tracking, coordination) and higher-order work (strategy, partnerships, product thinking)</li>\n<li>Have a technical background, with experience in software development, machine learning, or biology R&amp;D</li>\n<li>Are comfortable working directly with research scientists and engineers; you ask good questions, you don&#39;t need things explained twice, and you know when to escalate vs. 
when to handle it yourself</li>\n<li>Have a track record of building systems and processes from scratch rather than inheriting them</li>\n<li>Bring strong written communication skills and can represent the team accurately in external-facing materials</li>\n<li>Have managed contractors or external partners before, including scoping work, tracking delivery, and ensuring quality</li>\n<li>Are results-oriented, with a bias toward flexibility and impact</li>\n<li>Thrive in ambiguous, fast-moving environments where priorities shift and no two weeks look the same</li>\n</ul>\n<p><strong>Strong candidates may also have:</strong></p>\n<ul>\n<li>Direct experience sourcing and managing expert contractor networks, particularly in technical or scientific domains</li>\n<li>Familiarity with ML research workflows (training runs, evaluations, data pipelines) and what makes them succeed or stall</li>\n<li>Experience contributing to product development or product strategy, not just operations</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>\n<li>Required field of study: A field relevant to the research operations role</li>\n<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas!</li>\n</ul>","url":"https://yubhub.co/jobs/job_d951cdda-dc5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5188237008","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$210,000-$310,000 USD","x-skills-required":["research operations","technical program management","software development","machine learning","biology R&D","contractor management","external partner management","written communication","team logistics"],"x-skills-preferred":["expert contractor network management","ML research workflows","product development","product strategy"],"datePosted":"2026-04-18T15:45:31.775Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"research operations, technical program management, software development, machine learning, biology R&D, contractor management, external partner management, written communication, team logistics, expert contractor network management, ML research workflows, product development, product strategy","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":210000,"maxValue":310000,"unitText":"YEAR"}}}]}