<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>2da35265-f56</externalid>
      <Title>Machine Learning Research Engineer - Robotics</Title>
<Description><![CDATA[<p>We&#39;re seeking a Machine Learning Research Engineer to join our Robotics business unit. As a key contributor, you&#39;ll conduct applied research in Robotics and develop ML pipelines for training and fine-tuning on data collected by Scale. In this role, you&#39;ll advance Robotics research, shape Scale&#39;s robotics offerings, and expand the frontier of Robotics data and model evaluation.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Collaborating closely with Robotics customers to drive the industry forward in using VLA data</li>
<li>Developing ML pipelines to train/fine-tune models using Scale&#39;s data</li>
<li>Conducting research on robotics data collection, cross-embodiment training, and policy fine-tuning</li>
<li>Developing novel methods for evaluating VLA models, including new robotics industry benchmarks</li>
<li>Partnering with cross-functional stakeholders and Scale&#39;s customers to improve data collection</li>
<li>Collaborating with product teams to bring ML outcomes to Scale&#39;s platform</li>
</ul>
<p>You&#39;ll have:</p>
<ul>
<li>Practical experience training VLA models and/or building robotics data</li>
<li>3+ years of relevant industry experience in areas relating to robotics, computer vision, embodied AI, sim-to-real, imitation learning, reinforcement learning, and vision-language-action models</li>
<li>A PhD or equivalent experience in Machine Learning or Robotics</li>
<li>A track record of published research in robotics</li>
<li>Experience conducting data collection and performing evaluations</li>
<li>Strong written and verbal communication skills and the ability to work with cross-functional teams and customers</li>
<li>Intellectual curiosity, empathy, and the ability to operate with a high degree of autonomy</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience working with robotics hardware platforms (robotic arms, perception systems, etc.)</li>
<li>Experience deploying machine learning models on robotic systems in the field</li>
<li>Experience with teleoperated or human-driven data for robotics (ALOHA, UMI, hand tracking, etc.)</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$248,800-$311,000 USD</Salaryrange>
      <Skills>Machine Learning, Robotics, Computer Vision, Embodied AI, Sim-to-Real, Imitation Learning, Reinforcement Learning, Vision-Language-Action Models, Robotics Hardware Platforms, Deploying Machine Learning Models, Teleoperated or Human-Driven Data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Scale</Employername>
      <Employerlogo>https://logos.yubhub.co/scale.com.png</Employerlogo>
      <Employerdescription>Scale develops reliable AI systems for the world&apos;s most important decisions.</Employerdescription>
      <Employerwebsite>https://www.scale.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/scaleai/jobs/4600908005</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>10bf8d86-b30</externalid>
      <Title>Research Engineer, Safeguards Labs</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We&#39;re hiring research engineers to define and execute the Labs research agenda. You&#39;ll scope your own projects, run experiments end-to-end, and decide when an idea is ready to hand off to a production team, or when to kill it and move on.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead and contribute to research projects investigating new methods for detecting misuse of Claude, identifying malicious organisations and accounts, strengthening model safeguards, and other safety needs.</li>
<li>Design and run offline analyses over model usage data to surface abuse patterns, build classifiers and detection systems, and evaluate their effectiveness.</li>
<li>Develop and iterate on prototypes that could eventually feed signals into the real-time safeguards path, partnering with engineers on tech transfer.</li>
<li>Contribute to a broader research portfolio investigating methods for detecting abusive behaviour in chat-based or agentive workflows, and for training the model to robustly refrain from dangerous responses or behaviours without over-refusing.</li>
<li>Build evaluations and methodologies for measuring whether safeguards actually work, including in agentic settings.</li>
<li>Write up findings clearly so they inform decisions across Trust &amp; Safety, research, and product teams.</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have a track record of independently driving research projects from ambiguous problem statements to concrete results, ideally in AI, ML, security, integrity, or a related technical field.</li>
<li>Are comfortable scoping your own work and switching between research, engineering, and analysis as a project demands.</li>
<li>Have working familiarity with how large language models operate (sampling, prompting, training), even if LLMs aren&#39;t your primary background.</li>
<li>Are proficient in Python and comfortable working with large datasets.</li>
<li>Care about the societal impacts of AI and want your work to directly reduce real-world harm.</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Experience building and training machine learning models, including classifiers for abuse, fraud, integrity, or security applications.</li>
<li>Knowledge of evaluation methodologies for language models and experience designing evals.</li>
<li>Experience with agentic environments and evaluating model behaviour in them.</li>
<li>Background in trust and safety, integrity, fraud detection, threat intelligence, or adversarial ML.</li>
<li>Experience with red teaming, jailbreak research, or interpretability methods like steering vectors.</li>
<li>A history of taking research prototypes and transferring them into production systems.</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive compensation and benefits</li>
<li>Optional equity donation matching</li>
<li>Generous vacation and parental leave</li>
<li>Flexible working hours</li>
<li>Lovely office space in which to collaborate with colleagues</li>
</ul>
<p><strong>Visa Sponsorship</strong></p>
<ul>
<li>We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$350,000-$850,000 USD</Salaryrange>
      <Skills>Python, Machine learning, Large language models, Security, Integrity, Experience building and training machine learning models, Knowledge of evaluation methodologies for language models, Experience with agentic environments, Background in trust and safety, Experience with red teaming</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191785008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>4fde2d89-11c</externalid>
      <Title>Research Engineer, Economic Research</Title>
      <Description><![CDATA[<p>As a Research Engineer on the Economic Research team, you will design, build, and maintain critical infrastructure that powers Anthropic&#39;s research on AI&#39;s economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis.</p>
<p>The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI.</p>
<p>In this role, you will work closely with teams across Anthropic, including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy, to build scalable and robust data systems that support high-leverage, high-impact research. Strong candidates will have a track record of building data processing pipelines, architecting &amp; implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research &amp; technical skills.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build and maintain data pipelines that process large-scale Claude usage logs into canonical, reusable datasets while maintaining user privacy.</li>
<li>Expand privacy-preserving tools to enable new analytic functionality to support research needs.</li>
<li>Design and implement novel data systems leveraging language models (e.g., CLIO) where traditional software engineering patterns don&#39;t yet exist.</li>
<li>Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis.</li>
<li>Contribute to the strategic development of the economic research data foundations roadmap.</li>
<li>Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure.</li>
<li>Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions.</li>
<li>Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards.</li>
<li>Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic&#39;s safety mission.</li>
</ul>
<p><strong>You might be a good fit if you:</strong></p>
<ul>
<li>Have experience working with Research Scientists and Economists on ambiguous AI and economic projects.</li>
<li>Have experience building and maintaining data infrastructure, large datasets, and internal tools in production environments.</li>
<li>Have experience with cloud infrastructure platforms such as AWS or GCP.</li>
<li>Take pride in writing clean, well-documented code in Python that others can build upon.</li>
<li>Are comfortable making technical decisions with incomplete information while maintaining high engineering standards.</li>
<li>Are comfortable getting up to speed quickly on unfamiliar codebases, and can work well with other engineers with different backgrounds across the organization.</li>
<li>Have a track record of using technical infrastructure to interface effectively with machine learning models.</li>
<li>Have experience deriving insights from imperfect data streams.</li>
<li>Have experience building systems and products on top of LLMs.</li>
<li>Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders.</li>
<li>Have a passion for Anthropic&#39;s mission of building helpful, honest, and harmless AI and understanding its economic implications.</li>
<li>Have a &quot;full-stack mindset&quot;, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description.</li>
<li>Have strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise.</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Background in econometrics, statistics, or quantitative social science research.</li>
<li>Experience building data infrastructure and data foundations for research.</li>
<li>Familiarity with large language models, AI systems, or ML research workflows.</li>
<li>Prior work on projects related to labor economics, technology adoption, or economic measurement.</li>
</ul>
<p><strong>Some Examples of Our Recent Work</strong></p>
<ul>
<li>Anthropic Economic Index Report: Economic Primitives</li>
<li>Anthropic Economic Index Report: Uneven Geographic and Enterprise AI Adoption</li>
<li>Estimating AI productivity gains from Claude conversations</li>
<li>The Anthropic Economic Index</li>
</ul>
<p>Deadline to apply: None. Applications are reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual Salary: $300,000-$405,000 USD</p>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience</li>
<li>Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience</li>
<li>Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position</li>
<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact (advancing our long-term goals of steerable, trustworthy AI) rather than work on smaller, more specific puzzles.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Cloud infrastructure platforms (AWS or GCP), Data infrastructure, Large datasets, Internal tools, Machine learning models, Econometrics, Statistics, Quantitative social science research, Large language models, AI systems, ML research workflows, Full-stack mindset, Strong communication skills, Ambiguity tolerance, Problem-solving skills, Collaboration skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5071132008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>059293a1-afa</externalid>
      <Title>Systems Engineer, Data</Title>
      <Description><![CDATA[<p>About Us</p>
<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>
<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>
<p>We were named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>
<p>About the Team</p>
<p>The Core Data team’s mission is building a centralized data platform for Cloudflare that provides secure, democratized access to data for internal customers throughout the company. We operate infrastructure and craft tools to empower both technical and non-technical users to answer their most important questions. We facilitate access to data from federated sources across the company for dashboarding, ad-hoc querying and in-product use cases. We power data pipelines and data products, secure and monitor data, and drive data governance at Cloudflare.</p>
<p>Our work enables every individual at the company to act with greater information and make more informed decisions.</p>
<p>About the Role</p>
<p>We are looking for a systems engineer with a strong background in data to help us expand and maintain our data infrastructure. You’ll contribute to the technical implementation of our scaling data platform, manage access while accounting for privacy and security, build data pipelines, and develop tools to automate accessibility and usefulness of data. You’ll collaborate with teams including Product Growth, Marketing, and Billing to help them make informed decisions and power usage-based invoicing platforms, as well as work with product teams to bring new data-driven solutions to Cloudflare customers.</p>
<p>Responsibilities</p>
<ul>
<li>Contribute to the design and execution of technical architecture for highly visible data infrastructure at the company.</li>
<li>Design and develop tools and infrastructure to improve and scale our data systems at Cloudflare.</li>
<li>Build and maintain data pipelines and data products to serve customers throughout the company, including tools to automate delivery of those services.</li>
<li>Gain deep knowledge of our data platforms and tools to guide and enable stakeholders with their data needs.</li>
<li>Work across our tech stack, which includes Kubernetes, Trino, Iceberg, Clickhouse, and PostgreSQL, with software built using Go, Javascript/Typescript, Python, and others.</li>
<li>Collaborate with peers to reinforce a culture of exceptional delivery and accountability on the team.</li>
</ul>
<p>Requirements</p>
<ul>
<li>3-5+ years of experience as a software engineer with a focus on building and maintaining data infrastructure.</li>
<li>Experience participating in technical initiatives in a cross-functional context, working with stakeholders to deliver value.</li>
<li>Practical experience with data infrastructure components, such as Trino, Spark, Iceberg/Delta Lake, Kafka, Clickhouse, or PostgreSQL.</li>
<li>Hands-on experience building and debugging data pipelines.</li>
<li>Proficient using backend languages like Go, Python, or Typescript, along with strong SQL skills.</li>
<li>Strong analytical skills, with a focus on understanding how data is used to drive business value.</li>
<li>Solid communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.</li>
</ul>
<p>Desirable Skills</p>
<ul>
<li>Experience with data orchestration and infrastructure platforms like Airflow and DBT.</li>
<li>Experience deploying and managing services in Kubernetes.</li>
<li>Familiarity with data governance processes, privacy requirements, or auditability.</li>
<li>Interest in or knowledge of machine learning models and MLOps.</li>
</ul>
<p>What Makes Cloudflare Special?</p>
<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>
<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>
<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>
<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>
<p>Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>
<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>
<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>
<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data infrastructure, data pipelines, data products, Kubernetes, Trino, Iceberg, Clickhouse, PostgreSQL, Go, Javascript/Typescript, Python, SQL, data orchestration, infrastructure platforms, Airflow, DBT, machine learning models, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Cloudflare</Employername>
      <Employerlogo>https://logos.yubhub.co/cloudflare.com.png</Employerlogo>
      <Employerdescription>Cloudflare is a technology company that helps build a better Internet by powering millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</Employerdescription>
      <Employerwebsite>https://www.cloudflare.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/cloudflare/jobs/7527453</Applyto>
      <Location>Hybrid</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d4a6ec69-e81</externalid>
      <Title>Staff Machine Learning Engineer, Dev Platform Data and Discovery</Title>
      <Description><![CDATA[<p>We&#39;re looking for a highly skilled Staff Machine Learning Engineer to join our Developer Platform team. As a Staff Machine Learning Engineer, you will own projects from ideation to production, working with a cross-functional team to solve hard problems and create engaging interactive experiences for Reddit communities.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead projects from concept, design, implementation, to rollout, ensuring the highest quality and performance.</li>
<li>Identify opportunities to enhance ranking capabilities by diving deep into our platform and understanding the needs of our customers.</li>
<li>Design and develop applied machine learning models for modeled personalization, game taxonomy, and more, from ideation to production deployment.</li>
<li>Collaborate with data scientists, product managers, and backend software engineers.</li>
<li>Mentor junior team members, share knowledge, and contribute to the technical growth of the team.</li>
<li>Provide guidance on machine learning best practices and methodologies.</li>
<li>Conduct A/B tests and experiments to iterate and fine-tune algorithms and models.</li>
<li>Stay updated on state-of-the-art algorithmic techniques and recognize promising innovations, adapting them to Reddit&#39;s unique platform and community.</li>
</ul>
<p>Minimum Qualifications:</p>
<ul>
<li>7+ years of experience in a relevant industry or academic background, preferably in a quantitative/modeling or highly scalable computing environment.</li>
<li>Prior experience with personalized feed ranking.</li>
<li>Proven track record of delivering complex machine learning projects from conception to deployment, preferably in real-world applications.</li>
<li>Ability to lead and mentor machine learning engineers or data scientists.</li>
<li>Strong communication skills to collaborate effectively with cross-functional teams and stakeholders.</li>
<li>Demonstrated ability to innovate and stay updated with the latest advancements in machine learning and AI.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Experience orchestrating complicated data pipelines and performing systems engineering on large-scale datasets.</li>
<li>Proficiency with programming languages and statistical analysis.</li>
<li>Prior experience with Sequence Modeling, Reinforcement Learning, or Transformer Architecture.</li>
<li>Experience in Bayesian methodology and experimentation.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs.</li>
<li>401k with Employer Match.</li>
<li>Family Planning Support.</li>
<li>Gender-Affirming Care.</li>
<li>Mental Health &amp; Coaching Benefits.</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off.</li>
<li>Generous Paid Parental Leave.</li>
</ul>
<p>Pay Transparency:</p>
<p>This job posting may span more than one career level. In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission.</p>
<p>To provide greater transparency to candidates, we share base salary ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar stage growth companies.</p>
<p>Final offer amounts are determined by multiple factors including, skills, depth of work experience and relevant licenses/credentials, and may vary from the amounts listed below.</p>
<p>The base salary range for this position is $230,000-$322,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000-$322,000 USD</Salaryrange>
      <Skills>Machine Learning, Personalized Feed Ranking, Applied Machine Learning Models, Programming Languages, Statistical Analysis, Sequence Modeling, Reinforcement Learning, Transformer Architecture, Bayesian Methodology, Experimentation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7377109</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>41c3ee08-08e</externalid>
      <Title>Optimization Software Engineer</Title>
      <Description><![CDATA[<p>We are looking for a talented mid-level Software Engineer with a strong background in optimization to join our growing team at Anduril Labs. In this role, you will be instrumental in developing advanced algorithms and software solutions to tackle complex, multi-domain optimization problems critical to national defense and Anduril&#39;s autonomous systems.</p>
<p>The ideal candidate possesses deep expertise in classical optimization algorithms, robust Python programming skills, and a solid foundation in data modeling. Experience with developing hybrid quantum optimization solutions is a plus.</p>
<p>You will leverage state-of-the-art, GenAI-powered development tools such as Claude Code to accelerate solution development and enhance our optimization software. This role demands creative problem-solving, a self-starter mentality, and the ability to rapidly apply algorithmic theory and mathematical modeling to practical, real-world optimization challenges.</p>
<p>You will be designing, implementing, and deploying optimization algorithms and services that integrate seamlessly into larger defense systems, working across various platforms (on-prem, cloud, and hybrid quantum computing environments).</p>
<p>Familiarity with modeling linear and non-linear optimization problems, rapid prototyping, integrating optimization solutions into existing architectures, leveraging APIs, and utilizing open-source tools will be crucial.</p>
<p>If you thrive in a dynamic environment that values creative problem-solving, love writing code, excel as both an individual contributor and team player, are eager to learn, and bring a can-do attitude, this role is for you.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, develop, and implement highly efficient optimization algorithms and software solutions to solve challenging problems in areas such as resource allocation, scheduling, routing, mission planning, control systems, and supply chain logistics.</li>
<li>Apply classical optimization techniques (e.g., linear programming, mixed-integer linear programming, combinatorial optimization, network flow, dynamic programming, heuristics, metaheuristics) to model and explore novel approaches.</li>
<li>Utilize GenAI tools (e.g., OpenAI Codex, Claude Code, GitHub Copilot) to rapidly prototype, refine, and test algorithmic solutions, improving development velocity and code quality.</li>
<li>Develop robust data models and efficient data pipelines to support complex optimization problems, ensuring data integrity and efficient processing for algorithmic inputs and outputs.</li>
<li>Collaborate with multidisciplinary teams (software engineers, data scientists, domain experts, product managers) to integrate optimization engines and services into larger defense systems and platforms.</li>
<li>Perform rigorous testing, validation, and performance analysis of optimization solutions, ensuring scalability, reliability, and accuracy under diverse operational conditions.</li>
<li>Participate actively in the entire Software Development Lifecycle (SDLC), from requirements gathering and design to deployment, monitoring, and maintenance.</li>
<li>Support Anduril- and customer-funded R&amp;D efforts, contributing to technical documentation, presentations, and patent applications.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Software Engineering, Applied Mathematics, Operations Research, or a related quantitative field.</li>
<li>3+ years of professional experience in software development with a dedicated focus on optimization, algorithmic problem-solving, or operations research.</li>
<li>Experience solving optimization problems in defense, transportation, supply chain, logistics, network optimization, smart grids, or similar domains.</li>
<li>Expert proficiency in Python for scientific computing and robust software development.</li>
<li>Strong theoretical and practical understanding of classical optimization algorithms (e.g., linear programming, mixed-integer linear programming, constraint programming, network flow, dynamic programming, heuristics, metaheuristics).</li>
<li>Hands-on experience with optimization libraries and commercial/open-source solvers (e.g., SciPy Optimize, PuLP, CVXPY, Gurobi, CPLEX, OR-Tools, GEKKO).</li>
<li>Solid experience with data modeling, data structures, and algorithms to efficiently prepare, process, and manage data for optimization problems.</li>
<li>Demonstrable hands-on experience using GenAI tools (e.g., OpenAI Codex, Claude Code, Gemini Code Assist, GitHub Copilot, Amazon CodeWhisperer, or similar) for software development, code generation, debugging, and algorithmic exploration.</li>
<li>Proficiency in numerical computing libraries such as NumPy, SciPy, and Pandas.</li>
<li>Demonstrated understanding and application of software testing principles and practices, including unit testing, integration testing, and end-to-end testing.</li>
<li>Ability to develop, test, and deploy software effectively on Linux-based systems.</li>
<li>Eligibility to obtain and maintain an active U.S. Top Secret/SCI security clearance.</li>
<li>Experience with Git version control, build tools, and CI/CD pipelines.</li>
<li>Strong problem-solving skills, meticulous attention to detail, and the ability to work effectively in a collaborative team environment.</li>
<li>Excellent communication and interpersonal skills, with the ability to articulate complex technical concepts to diverse audiences.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s or Ph.D. in Computer Science, Applied Mathematics, Operations Research, or a closely related quantitative field.</li>
<li>Familiarity with or a strong interest in quantum optimization algorithms, quantum computing concepts, or quantum-inspired heuristic approaches.</li>
<li>Experience with D-Wave&#39;s quantum annealing platform is a plus.</li>
<li>Experience with performance-critical programming languages such as C++ or Java.</li>
<li>Experience with cloud platforms (e.g., AWS, Azure, GCP) for deploying scalable optimization solutions, or with high-performance computing (HPC) environments.</li>
<li>Prior experience in defense, aerospace, logistics, supply chain management, robotics, or manufacturing optimization domains.</li>
<li>Familiarity with integrating machine learning models with optimization techniques (e.g., prescriptive analytics, reinforcement learning for optimization).</li>
<li>Excellent communication skills, with the ability to articulate complex technical concepts, present findings, and influence technical direction across diverse teams.</li>
<li>Willingness to travel up to approximately 10%.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$132,000-$198,000 USD</Salaryrange>
      <Skills>Python, Classical optimization algorithms, Data modeling, GenAI tools, Optimization libraries, Commercial/open-source solvers, Numerical computing libraries, Software testing principles, Linux-based systems, Git version control, Build tools, CI/CD pipelines, Quantum optimization algorithms, Quantum computing concepts, Quantum-inspired heuristic approaches, Performance-critical programming languages, Cloud platforms, High-performance computing environments, Machine learning models, Prescriptive analytics, Reinforcement learning for optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril Industries</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril Industries is a defense technology company that develops advanced technology for the U.S. and allied military.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/5089067007</Applyto>
      <Location>Washington, District of Columbia, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2bf29bb5-f9d</externalid>
      <Title>Research Engineer, Economic Research</Title>
<Description><![CDATA[<p>As a Research Engineer on the Economic Research team, you will design, build, and maintain critical infrastructure that powers Anthropic&#39;s research on AI&#39;s economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis.</p>
<p>The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI.</p>
<p>In this role, you will work closely with teams across Anthropic, including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy, to build scalable and robust data systems that support high-leverage, high-impact research. Strong candidates will have a track record of building data processing pipelines, architecting and implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research and technical skills.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and maintain data pipelines that process large-scale Claude usage logs into canonical, reusable datasets while maintaining user privacy.</li>
<li>Expand privacy-preserving tools to enable new analytic functionality to support research needs.</li>
<li>Design and implement novel data systems leveraging language models (e.g., CLIO) where traditional software engineering patterns don&#39;t yet exist.</li>
<li>Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis.</li>
<li>Contribute to the strategic development of the economic research data foundations roadmap.</li>
<li>Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure.</li>
<li>Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions.</li>
<li>Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards.</li>
<li>Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic&#39;s safety mission.</li>
</ul>
<p>You might be a good fit if you:</p>
<ul>
<li>Have experience working with research scientists and economists on ambiguous AI and economics projects.</li>
<li>Have experience building and maintaining data infrastructure, large datasets, and internal tools in production environments.</li>
<li>Have experience with cloud infrastructure platforms such as AWS or GCP.</li>
<li>Take pride in writing clean, well-documented Python code that others can build upon.</li>
<li>Are comfortable making technical decisions with incomplete information while maintaining high engineering standards.</li>
<li>Are comfortable getting up to speed quickly on unfamiliar codebases and work well with engineers from different backgrounds across the organization.</li>
<li>Have a track record of using technical infrastructure to interface effectively with machine learning models.</li>
<li>Have experience deriving insights from imperfect data streams.</li>
<li>Have experience building systems and products on top of LLMs.</li>
<li>Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders.</li>
<li>Have a passion for Anthropic&#39;s mission of building helpful, honest, and harmless AI and understanding its economic implications.</li>
<li>Bring a &quot;full-stack mindset&quot;, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description.</li>
<li>Have strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise.</li>
</ul>
<p>Strong candidates may have:</p>
<ul>
<li>A background in econometrics, statistics, or quantitative social science research.</li>
<li>Experience building data infrastructure and data foundations for research.</li>
<li>Familiarity with large language models, AI systems, or ML research workflows.</li>
<li>Prior work on projects related to labor economics, technology adoption, or economic measurement.</li>
</ul>
<p>Some examples of our recent work:</p>
<ul>
<li>Anthropic Economic Index Report: Economic Primitives</li>
<li>Anthropic Economic Index Report: Uneven Geographic and Enterprise AI Adoption</li>
<li>Estimating AI productivity gains from Claude conversations</li>
<li>The Anthropic Economic Index</li>
</ul>
<p>Deadline to apply: None. Applications are reviewed on a rolling basis.</p>
<p>The annual compensation range for this role is listed below. For sales roles, the range provided is the role&#39;s On Target Earnings (&quot;OTE&quot;) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.</p>
<p>Annual salary: $300,000-$405,000 USD</p>
<p>Logistics:</p>
<ul>
<li>Minimum education: Bachelor&#39;s degree or an equivalent combination of education, training, and/or experience.</li>
<li>Required field of study: a field relevant to the role as demonstrated through coursework, training, or professional experience.</li>
<li>Minimum years of experience: years of experience required will correlate with the internal job level requirements for the position.</li>
<li>Location-based hybrid policy: currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>
<li>Visa sponsorship: we do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work. We think AI systems like the ones we&#39;re building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.</p>
<p>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links; visit anthropic.com/careers directly for confirmed position openings.</p>
<p>How we&#39;re different: we believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller, specific puzzles.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$300,000-$405,000 USD</Salaryrange>
      <Skills>Python, Cloud infrastructure platforms (AWS or GCP), Data infrastructure, Large datasets, Internal tools, Machine learning models, Language models (LLMs), Econometrics, Statistics, Quantitative social science research, Full-stack mindset, Strong communication skills, Ambiguity tolerance, Research and development, Incubating and maturing tooling platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5071132008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>306a6a6f-98c</externalid>
      <Title>AI Tutor - Crypto</Title>
      <Description><![CDATA[<p>As a Crypto Expert, you will be vital in enhancing xAI&#39;s frontier AI models by supplying high-quality annotations, evaluations, and expert reasoning using proprietary labeling tools. You will work closely with technical teams to support the creation and refinement of new AI tasks, focusing especially on cryptocurrency and digital asset markets.</p>
<p>Your deep domain knowledge will guide the selection and rigorous solving of complex problems in quantitative crypto strategies, including on-chain analysis, DeFi protocols, perpetual futures &amp; derivatives trading, cross-exchange arbitrage, market microstructure in fragmented venues, MEV-aware execution, machine learning for crypto alpha signals, and portfolio/risk management in high-volatility 24/7 markets.</p>
<p>This role demands sharp quantitative thinking, quick adaptation to evolving instructions, and the ability to deliver precise, technically robust critiques and solutions in a dynamic environment.</p>
<p>Responsibilities:</p>
<ul>
<li>Utilize proprietary software to deliver accurate labels, rankings, critiques, and in-depth solutions on assigned projects</li>
<li>Consistently produce high-quality, curated data adhering to rigorous technical and domain standards</li>
<li>Partner with engineers and researchers to iterate on new training tasks, evaluation frameworks, and crypto-specific benchmarks</li>
<li>Offer actionable feedback to enhance the efficiency, accuracy, and usability of annotation and data-collection interfaces</li>
<li>Identify and solve challenging problems from crypto &amp; digital asset domains where you have strong expertise. Examples include:</li>
</ul>
<ul>
<li>On-chain metrics analysis and wallet/flow clustering for alpha generation</li>
<li>DeFi yield farming, liquidity provision, and impermanent loss modeling</li>
<li>Cross-exchange / CEX-DEX arbitrage and triangular opportunities</li>
<li>Perpetual futures funding rate strategies and basis trading</li>
<li>Market microstructure in crypto order books (fragmented liquidity, MEV, sandwich attacks)</li>
<li>Machine learning models for price prediction, sentiment from social/on-chain, volatility forecasting</li>
<li>Tokenomics evaluation, airdrop/IDO quantitative assessment, and risk premia in altcoins</li>
<li>Portfolio optimization and risk management in 24/7 high-volatility environments</li>
</ul>
<ul>
<li>Provide rigorous critiques of model outputs, alternative quantitative approaches, mathematical derivations, code snippets, and step-by-step crypto reasoning</li>
<li>Efficiently interpret, analyze, and complete tasks based on detailed (and evolving) guidelines</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Master&#39;s or PhD in a quantitative discipline: Quantitative Finance, Financial Engineering, Computer Science (with crypto/blockchain focus), Statistics, Applied Mathematics, Economics (quantitative), Physics, Operations Research, Data Science, or a closely related field; or equivalent professional experience as a quantitative crypto trader, systematic strategist, or on-chain analyst</li>
<li>Superior written and verbal English communication (technical papers, explanatory breakdowns, professional correspondence)</li>
<li>Extensive hands-on familiarity with crypto data sources and tools (CoinGecko, CoinMarketCap, Dune Analytics, Glassnode, Nansen, Chainalysis, Messari, DefiLlama, The Graph, blockchain explorers, CEX APIs, on-chain datasets, etc.)</li>
<li>Outstanding analytical skills, attention to detail, and sound judgment under partial information</li>
</ul>
<p>Preferred Skills and Experience:</p>
<ul>
<li>Professional experience in quantitative crypto trading, systematic strategies, or on-chain research at a crypto hedge fund, prop desk, market-making firm, DeFi protocol, or digital asset investment firm</li>
<li>Publications or public analyses in crypto quant topics (e.g., journals, conferences, reputable blogs, GitHub repos with notable traction)</li>
<li>Teaching, mentoring, or content-creation experience in crypto/quant finance (university, bootcamps, Twitter threads, newsletters)</li>
<li>Proficiency in Python for crypto analysis (pandas, NumPy, ccxt, web3.py, etherscan APIs, polars, scikit-learn, PyTorch/TensorFlow for ML models, etc.) and/or Rust/Solidity familiarity</li>
<li>Experience with backtesting crypto strategies, handling tick-level or on-chain data, managing API rate limits, and dealing with 24/7 market quirks</li>
<li>Knowledge of MEV, flash loans, oracle manipulation risks, liquidation cascades, or other crypto-native phenomena</li>
<li>CFA, FRM, CQF, or blockchain-specific certifications (e.g., Certified Blockchain Expert)</li>
<li>Prior involvement with LLMs, reinforcement learning, or AI evaluation in financial/crypto contexts (strong plus)</li>
</ul>
<p>Location and Other Expectations:</p>
<ul>
<li>Tutor roles may be offered as full-time, part-time, or contractor positions, depending on role needs and candidate fit.</li>
<li>For contractor positions, hours will vary widely based on project scope and contractor availability, with no fixed commitments required. On average, most projects involve at least 10 hours per week to achieve deliverables effectively, though this is not a fixed commitment and depends on the scope of work.</li>
<li>Tutor roles may be performed remotely from any location worldwide, subject to legal eligibility, time-zone compatibility, and role-specific needs.</li>
<li>For US-based candidates, please note we are unable to hire in the states of Wyoming and Illinois at this time.</li>
<li>We are unable to provide visa sponsorship.</li>
<li>For those who will be working from a personal device, your computer must be a Chromebook, Mac with MacOS 11.0 or later, or Windows 10 or later.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>US based candidates: $45/hour - $100/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications. International candidates: $25/hour - $75/hour depending on factors including relevant experience, skills, education, geographic location, and qualifications.</p>
]]></Description>
      <Jobtype>full-time|part-time|contract|temporary|internship</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$45/hour - $100/hour</Salaryrange>
      <Skills>Proprietary software, Python for crypto analysis, Rust/Solidity familiarity, Machine learning models, Quantitative finance, Financial engineering, Computer science, Statistics, Applied mathematics, Economics, Physics, Operations research, Data science, Professional experience in quantitative crypto trading, Publications or public analyses in crypto quant topics, Teaching, mentoring, or content-creation experience in crypto/quant finance, Proficiency in Python for crypto analysis, Experience with backtesting crypto strategies, Knowledge of MEV, flash loans, oracle manipulation risks, liquidation cascades, or other crypto-native phenomena, CFA, FRM, CQF, or blockchain-specific certifications, Prior involvement with LLMs, reinforcement learning, or AI evaluation in financial/crypto contexts</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI is a small organisation focused on engineering excellence, aiming to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5040344007</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>2e513a92-ec5</externalid>
      <Title>Research Scientist (Generative Modeling)</Title>
      <Description><![CDATA[<p>We are seeking a talented Research Scientist with a strong background in generative modeling, particularly diffusion models, to join our modeling team. This role is ideal for candidates with deep expertise in diffusion models applied to images, videos, or 3D assets and scenes.</p>
<p>Experience in one or more of the following areas is a strong plus: large-scale model training and research in 3D computer vision.</p>
<p>You will collaborate closely with researchers, engineers, and product teams to bring advanced 3D modeling and machine learning techniques into real-world applications, ensuring that our technology remains at the forefront of visual innovation. This role involves significant hands-on research and engineering work, driving projects from conceptualization through to production deployment.</p>
<p>Key responsibilities:</p>
<ul>
<li>Design, implement, and train large-scale diffusion models for generating 3D worlds.</li>
<li>Develop and experiment with large-scale diffusion models to add novel control signals, adapt to target aesthetic preferences, or distill for efficient inference.</li>
<li>Collaborate closely with research and product teams to understand and translate product requirements into effective technical roadmaps.</li>
<li>Contribute hands-on to all stages of model development, including data curation, experimentation, evaluation, and deployment.</li>
<li>Continuously explore and integrate cutting-edge research in diffusion and generative AI more broadly.</li>
<li>Act as a key technical resource within the team, mentoring colleagues and driving best practices in generative modeling and ML engineering.</li>
</ul>
<p>Ideal candidate profile:</p>
<ul>
<li>3+ years of experience in generative modeling or applied ML roles.</li>
<li>Extensive experience with machine learning frameworks such as PyTorch or TensorFlow, especially in the context of diffusion models and other generative models.</li>
<li>Deep expertise in at least one area of generative modeling.</li>
<li>Strong history of publications or open-source contributions involving large-scale diffusion models.</li>
<li>Strong coding proficiency in Python and experience with GPU-accelerated computing.</li>
<li>Ability to engage effectively with researchers and cross-functional teams, clearly translating complex technical ideas into actionable tasks and outcomes.</li>
<li>Comfort operating within a dynamic startup environment with high levels of ambiguity, ownership, and innovation.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Contributions to open-source projects in computer vision, graphics, or ML.</li>
<li>Familiarity with large-scale training infrastructure.</li>
<li>Experience integrating machine learning models into production environments.</li>
<li>Experience leading or contributing to the development or training of large-scale, state-of-the-art generative models.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$250,000 - $325,000 base salary (good-faith estimate for San Francisco Bay Area upon hire; actual offer based on experience, skills, and qualifications)</Salaryrange>
      <Skills>generative modeling, diffusion models, PyTorch, TensorFlow, machine learning frameworks, large-scale model training, research in 3D computer vision, data curation, experimentation, evaluation, deployment, GPU-accelerated computing, Python, open-source contributions, large-scale training infrastructure, integrating machine learning models into production environments, leading or being involved with the development or training of large-scale, state-of-the-art generative models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>World Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/worldlabs.ai.png</Employerlogo>
      <Employerdescription>World Labs builds foundational world models that can perceive, generate, reason, and interact with the 3D world.</Employerdescription>
      <Employerwebsite>https://worldlabs.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/worldlabs/jobs/4089324009</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>c01f9dc6-b17</externalid>
      <Title>Data Scientist - Staff or Senior (United Kingdom)</Title>
      <Description><![CDATA[<p>In this role, you will build predictive models and apply scientific computing, statistical, and physics-based methods to find places with evidence of ore-forming processes and predict locations of ore-grade mineralization in 2D and 3D.</p>
<p>You will help build a worldwide dataset for our exploration program, with careful attention to identifying and quantifying uncertainty in the data and predictions.</p>
<p>You will create models and develop software to accelerate discovery of critical battery metals.</p>
<p>You will join an outstanding team of data scientists and engineers and work closely with our world-renowned geoscientists to incorporate our best understanding of the chemical and physical processes that create ore deposits.</p>
<p>Working with your geoscience colleagues, you will create 2D and 3D geologic predictions, identify exploration targets, design field programs to collect data, and use that data to reduce uncertainty in our predictions and guide the next phase of field work.</p>
<p>Ultimately, your role is to help KoBold make valuable discoveries by building data tools to solve scientific problems.</p>
<p>As one of the early members of this team, you will help build these tools from the ground up.</p>
<p>Responsibilities:</p>
<ul>
<li>Help develop KoBold&#39;s proprietary software exploration tools.</li>
<li>Find and curate geophysical, geochemical, geologic, and geographic data and integrate it into KoBold&#39;s proprietary data system.</li>
<li>Build models to make statistically valid predictions about the locations of compositional anomalies within the Earth&#39;s crust.</li>
<li>Create effective visualizations for evaluating model performance and enabling rapid interaction with the underlying data and key features.</li>
<li>Develop and apply data processing, statistical, and physics-based techniques to geoscientific data, from computer vision to geophysical inversions, and use the results to guide our targeting efforts and inform our acquisition and exploration decisions.</li>
<li>Present to and collaborate with our external partners and stakeholders.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Technical skills, including extensive experience with Python&#39;s data science packages and general software engineering practices.</li>
<li>Collaborative software development (git) and familiarity with software engineering best practices such as unit/integration test suites and CI/CD pipelines.</li>
<li>Experience with cloud computing resources.</li>
<li>Experience building predictive models, applying them to different problems, and evaluating and interpreting the results.</li>
<li>Experience with data from a variety of physical systems.</li>
<li>Experience with geospatial analyses and visualizations.</li>
</ul>
<p>Technical knowledge:</p>
<ul>
<li>Broad skills in and knowledge of applied statistics and Bayesian inference.</li>
<li>Substantial understanding of machine learning algorithms.</li>
</ul>
<p>Training and work experience:</p>
<ul>
<li>An advanced degree in the physical sciences, engineering, computer science, or mathematics.</li>
<li>A minimum of 4 years of work experience post-PhD or 8 years post-MS, ideally as a data scientist or data engineer.</li>
<li>Experience leading technical teams to apply novel scientific approaches to core business problems.</li>
</ul>
<p>Work practices and motivation:</p>
<ul>
<li>Ability to take ownership and responsibility of large projects.</li>
<li>Ability to explain technical problems to and collaborate on solutions with domain experts.</li>
<li>Communicates well on a collaborative, cross-functional team.</li>
<li>Excitement about joining a fast-growing early-stage company, comfort with a dynamic work environment, and eagerness to take on a range of responsibilities.</li>
<li>Ability to independently prioritize multiple tasks effectively.</li>
<li>Intellectual curiosity and eagerness to learn about all aspects of mineral exploration, particularly in the geology domain.</li>
<li>Enjoys constantly learning, drives insights by using our tools in exploration, and is willing to work directly with geologists in the field.</li>
<li>Keen not just to build cool technology, but to figure out what technical product to build to best achieve the business objectives of the company.</li>
<li>A valid passport and willingness to travel to observe our work at Mingomba or at an exploration site around the world.</li>
</ul>
<p>Preferred skills include creating machine learning models on geospatial data, geostatistics, image processing or computer vision, and distributed computing applications for machine learning and other computations.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$140,000 - $240,000 (USD) plus equity and benefits</Salaryrange>
      <Skills>Python&apos;s data science packages, General software engineering practices, Collaborative software development (git), Software engineering best practices, Cloud computing resources, Building predictive models, Applying models to different problems, Evaluating and interpreting results, Data from a variety of physical systems, Geospatial analyses and visualizations, Applied statistics and Bayesian inference, Machine learning algorithms, Creating machine learning models on geospatial data, Geostatistics, Image processing or computer vision, Distributed computing applications for machine learning and other computations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold Metals</Employername>
      <Employerlogo>https://logos.yubhub.co/koboldmetals.com.png</Employerlogo>
      <Employerdescription>KoBold Metals is a mineral exploration company using AI to explore for metals needed for a low-carbon economy, with a global portfolio of over 50 exploration properties.</Employerdescription>
      <Employerwebsite>https://www.koboldmetals.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4677631005</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>ccc99db2-dd4</externalid>
      <Title>Data Scientist - Staff or Senior (Australia)</Title>
      <Description><![CDATA[<p>We are hiring a Data Scientist to help accelerate our mission. In this role, you will build predictive models and apply scientific computing, statistical, and physics-based methods to find places where there is evidence of ore-forming processes at work and to predict the locations of ore-grade mineralization in 2D and 3D. You will help build a worldwide dataset that underlies our exploration program, with careful attention to identifying and quantifying uncertainty in the data and in our predictions. You will create models and develop software to accelerate discovery of critical battery metals.</p>
<p>You will join an outstanding team of data scientists and engineers and will work closely with KoBold&#39;s world-renowned geoscientists to incorporate our best understanding of the chemical and physical processes that create ore deposits. Working with your geoscience colleagues, you will create 2D and 3D geologic predictions, identify exploration targets, design field programs to collect data, and use that data to reduce the uncertainty in our predictions and guide the next phase of field work.</p>
<p>Ultimately, your role is to help KoBold make valuable discoveries by building data tools to solve scientific problems. As one of the early members of this team, you will help build these tools from the ground up.</p>
<p>Responsibilities:</p>
<ul>
<li>Help develop KoBold&#39;s proprietary software exploration tools.</li>
<li>Find and curate geophysical, geochemical, geologic, and geographic data and integrate it into KoBold&#39;s proprietary data system.</li>
<li>Build models to make statistically valid predictions about the locations of compositional anomalies within the Earth&#39;s crust.</li>
<li>Create effective visualizations for evaluating model performance and enabling rapid interaction with the underlying data and key features.</li>
<li>Develop and apply data processing, statistical, and physics-based techniques to geoscientific data, from computer vision to geophysical inversions, and use the results to guide our targeting efforts and inform our acquisition and exploration decisions.</li>
<li>Present to and collaborate with our external partners and stakeholders.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$140,000 - $240,000 (USD) plus equity and benefits</Salaryrange>
      <Skills>Python&apos;s data science packages, General software engineering practices, Collaborative software development (git), Cloud computing resources, Building predictive models, Applying machine learning algorithms, Data from a variety of physical systems, Geospatial analyses and visualizations, Creating machine learning models on geospatial data, Geostatistics, Image processing or computer vision, Distributed computing applications for machine learning and other computations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>KoBold Metals</Employername>
      <Employerlogo>https://logos.yubhub.co/koboldmetals.com.png</Employerlogo>
      <Employerdescription>KoBold Metals is a mineral exploration company using AI to explore for metals needed for a low-carbon economy. It has a global portfolio of over 50 exploration properties.</Employerdescription>
      <Employerwebsite>https://www.koboldmetals.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/koboldmetals/jobs/4677639005</Applyto>
      <Location>Remote</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>8d242bab-985</externalid>
      <Title>Technical Program Manager, Agentic Development Platform (Modeling &amp; Evals)</Title>
      <Description><![CDATA[<p>We are looking for a Technical Program Manager to lead critical initiatives across Modeling, Data, Evaluations, and User Signals for the Antigravity team. You will play a key role in enhancing our models and product by managing the end-to-end lifecycle of data contributions, model development, evaluation processes, and feedback loops.</p>
<p>This role involves close collaboration with research teams, managing custom model pipelines, analyzing user signals from multiple sources, and overseeing vendor-based testing.</p>
<p><strong>Key Responsibilities</strong></p>
<ul>
<li>Drive the roadmap on data, evaluations, and modeling improvements to core models, features, and new use cases in collaboration with the Antigravity research teams.</li>
<li>Manage the evaluation process for new and existing models, and provide feedback to the modeling and research teams.</li>
<li>Partner with modeling teams to ensure seamless handoffs and coordination of data and evaluation analysis.</li>
<li>Manage approval processes working closely with the research and engineering teams as well as cross-functional stakeholders to successfully develop and launch models.</li>
<li>Establish and refine systems for collecting, triaging, and analyzing both internal and external user feedback to ensure resolution of high-priority issues.</li>
<li>Coordinate with vendors for product testing and report on key findings to the engineering and product teams.</li>
<li>Manage compute resources for modeling efforts and support team infrastructure needs.</li>
<li>Act as a point of contact for resolving technical issues for the team.</li>
</ul>
<p><strong>About You</strong></p>
<p>To be successful as a Technical Program Manager at DeepMind, we look for the following skills and experience:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, a related technical field, or equivalent practical experience.</li>
<li>5 years of experience in a technical program management role in a research environment.</li>
<li>Experience working with machine learning models, data pipelines, and evaluation processes.</li>
<li>Strong analytical skills and experience with data analysis.</li>
</ul>
<p>In addition, the following would be an advantage:</p>
<ul>
<li>Master’s degree or PhD in Computer Science or a related technical field.</li>
<li>8+ years of relevant work experience in a technical environment.</li>
<li>Experience working on end-to-end model flywheel processes, including data collection strategies, model evaluation techniques, and metrics.</li>
<li>Experience working with modeling research teams, including managing model training and deployment processes.</li>
<li>Proven ability to lead complex projects with cross-team stakeholders, influencing and leading without managerial authority.</li>
<li>Excellent interpersonal and communication skills, with a demonstrated ability to work effectively in ambiguous, fast-paced R&amp;D environments.</li>
</ul>
<p>The US base salary range for this full-time position is between $156,000 - $229,000 + bonus + equity + benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$156,000 - $229,000 + bonus + equity + benefits</Salaryrange>
      <Skills>Bachelor&apos;s degree in Computer Science, a related technical field, or equivalent practical experience, 5 years of experience in a technical program management role in a research environment, Experience working with machine learning models, data pipelines, and evaluation processes, Strong analytical skills and experience with data analysis, Master’s degree or PhD in Computer Science or a related technical field</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>DeepMind</Employername>
      <Employerlogo>https://logos.yubhub.co/deepmind.com.png</Employerlogo>
      <Employerdescription>DeepMind is a UK-based artificial intelligence research laboratory. It was founded in 2010 and acquired by Alphabet Inc. in 2014.</Employerdescription>
      <Employerwebsite>https://deepmind.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/deepmind/jobs/7477606</Applyto>
      <Location>Mountain View, California, US</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>d4dabbbc-b6f</externalid>
      <Title>Principal Data Scientist</Title>
      <Description><![CDATA[<p>Are you ready to join a world-class team and make a significant impact on the gaming industry? At Aristocrat, we aim to bring happiness to life through the power of play. We are seeking a Principal Data Scientist to help us reach our ambitious goals. You will play a vital role in enhancing gameplay, boosting player engagement, and improving business outcomes with your advanced data expertise. This opportunity allows you to work on innovative projects, collaborate with diverse teams, and guide critical initiatives that will shape the future of our leading games.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Lead high-impact data science initiatives end-to-end, including problem framing, methodology selection, experiment development, implementation partnership, and impact measurement.</li>
<li>Build and deliver machine learning and reinforcement learning solutions to improve player engagement, retention, monetization, and operational outcomes.</li>
<li>Lead the modeling framework for complex systems, ensuring comprehensive evaluation and monitoring across causal inference, uplift modeling, sequential decisioning, bandits/reinforcement learning, and forecasting.</li>
<li>Partner with game teams to define success metrics, guardrails, and decision frameworks, translating analytical results into actionable product and operational actions.</li>
<li>Define and uphold engineering standards and guidelines for model development, including validation, uncertainty, reproducibility, and bias/quality checks.</li>
<li>Drive scalable experimentation with A/B and Multi-armed bandit testing frameworks, power analysis, variance reduction, and online-offline alignment.</li>
<li>Work together with Data Engineering, MLOps, and Game Tech teams to guarantee dependable data foundations, feature accessibility, and model deployment pathways.</li>
<li>Build internal data products to improve the speed and quality of decision-making, such as AB-test calculators, decision tools, and automated insights.</li>
<li>Provide technical leadership through building and code reviews, mentoring, and coaching, improving the standard of data science craft across the organization.</li>
<li>Serve as a reliable collaborator throughout the organization, promoting data-informed decision-making and enabling business units to embrace data products.</li>
<li>Translate complex analytical insights into actionable recommendations, presenting them to senior leadership to inform critical business decisions and encourage collaborators.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>PhD or MSc in Data Science, Computer Science, Statistics, Physics, Mathematics, or a related quantitative field, 5+ years of professional data science experience, Demonstrated proficiency in clustering, predictive modeling, reinforcement learning, and Bayesian statistics, Hands-on experience in software engineering, MLOps, and deploying machine learning models at scale, Proficiency in SQL, Python, and familiarity with big data technologies (e.g., Kafka, Spark) and/or cloud platforms (e.g., GCP, AWS, or Azure), Industry knowledge: Experience in gaming or digital entertainment is a strong plus</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Aristocrat</Employername>
      <Employerlogo>https://logos.yubhub.co/aristocrat.com.png</Employerlogo>
      <Employerdescription>Aristocrat is a global gaming company with a portfolio of regulated land-based gaming, social casino, and regulated online real money gaming products.</Employerdescription>
      <Employerwebsite>https://www.aristocrat.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://aristocrat.wd3.myworkdayjobs.com/en-US/AristocratExternalCareersSite/job/London-United-Kingdom/Principal-Data-Scientist_R0020855</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-10</Postedate>
    </job>
    <job>
      <externalid>87533f45-d48</externalid>
      <Title>Data Scientist</Title>
      <Description><![CDATA[<p>The Bing organization is looking for a Data and Applied Scientist for its online metrics team. The team pushes the boundaries of online A/B experimentation capabilities and methods, informs our understanding of how users engage with search engines and integrated AI experiences, and translates those insights into actionable metrics. These metrics provide direction to the Bing and MAI organisation and help engineers make the right ship decisions for their thousands of online controlled experiments.</p>
<p>Hundreds of millions of users visit Bing.com worldwide every month, and we have a large opportunity to grow further. At Bing, we celebrate our data-driven culture: changes ship only when their impact is understood and positive. This role is a great opportunity to make a solid, even multiplicative, impact on the org.</p>
<p>As a data and applied scientist on this team, you will work on and deliver metrics that drive the direction of major Bing initiatives, and be involved in the analysis and decision-making for thousands of online controlled A/B experiments.</p>
<p>Microsoft’s mission is to empower every person and every organisation on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realise our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Design, analyse, and interpret A/B online experiments to evaluate user engagement and generate actionable, trustworthy insights that inform product decisions.</li>
<li>Perform hands-on analysis of large-scale telemetry data using advanced statistical methods, algorithms, and data tools to uncover meaningful patterns and trends.</li>
<li>Develop and monitor key success metrics to measure and improve Bing customer satisfaction, engagement, and retention.</li>
<li>Formulate data-driven strategies to understand correlations among critical Bing metrics and deliver clear, rigorous ship decisions.</li>
<li>Effectively communicate analytical methodologies, data visualisations, and evidence-based recommendations across cross-functional teams to drive alignment and impact.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field OR Master’s Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Bachelor’s Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 2+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR equivalent experience.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Master’s Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 3+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Bachelor’s Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 5+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR equivalent experience.</li>
</ul>
<p>Experience with SQL-like query languages. Hands-on design and problem-solving experience with one or more programming languages, such as Python, Java, C#, or C++. Ability to work independently, influence others, and communicate and collaborate effectively. Familiarity with search engines and online instrumentation. Experience with large datasets and an interest in consumer online search behaviours. Experience building machine learning models on large-scale data.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$100,600 – $199,000 per year</Salaryrange>
      <Skills>Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, SQL-like query languages, Python, Java, C#, C++, Search engines, Online instrumentation, Large datasets, Machine learning models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. It has a large global presence with hundreds of millions of users worldwide.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/data-scientist/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>d446c7ab-5f0</externalid>
      <Title>Member of Technical Staff, Applied Scientist - Windows Copilot</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft is looking for a talented Member of Technical Staff, Applied Scientist - Windows Copilot at its Redmond office. This role sits at the heart of redefining how AI enhances everyday computing. You&#39;ll work directly with leadership to shape the company&#39;s direction on the Windows Copilot team.</p>
<p><strong>About the Role</strong></p>
<p>The Windows Copilot team is at the forefront of redefining how AI enhances everyday computing. This team owns the full stack—from cutting-edge AI infrastructure and model development, to crafting seamless user experiences directly in Windows. We’re building and shipping consumer-scale services that are transforming how users interact with their PCs. Whether it’s designing intelligent prompts, engineering robust backends to interface with models, or developing native Windows platform and UX features, this team does it all.</p>
<p>As an Applied Scientist, you will lead the end-to-end model-building process, including problem understanding, data curation, model development, deployment, and iteration based on real-world feedback. This role will serve as a critical bridge between Microsoft Research and the engineering team, taking increasing ownership of model training responsibilities.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Develop and refine data pipelines and infrastructure to support AI model development for Copilot.</li>
<li>Collaborate with research teams to integrate cutting-edge AI advancements into production systems.</li>
<li>Design, train, and evaluate machine learning models, ensuring performance optimization and scalability.</li>
<li>Work closely with engineering and product teams to ensure AI-driven experiences meet quality and user experience standards.</li>
<li>Conduct rigorous data analysis and experimentation, leveraging insights to improve Copilot’s intelligence.</li>
<li>Overcome obstacles to deliver iterative improvements in AI performance and responsiveness.</li>
<li>Stay ahead of the latest innovations in deep learning, reinforcement learning, and generative AI.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proven track record of deploying machine learning models in large-scale production environments.</li>
<li>9+ years of experience building data pipelines, training deep learning models, and optimizing AI workflows.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong problem-solving skills and ability to work independently.</li>
<li>Excellent communication and collaboration skills.</li>
<li>Ability to work in a fast-paced environment and adapt to changing priorities.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range: $119,800 - $234,700 per year.</li>
<li>Comprehensive benefits package, including medical, dental, and vision insurance.</li>
<li>401(k) matching program.</li>
<li>Paid time off and holidays.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>machine learning, deep learning, data pipelines, data curation, model development, deployment, iteration, research, statistics, econometrics, computer science, electrical engineering, computer engineering, proven track record of deploying machine learning models in large-scale production environments, 9+ years of experience building data pipelines, training deep learning models, and optimizing AI workflows</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. The company is a leader in the technology industry and is known for its innovative products and services, including the Windows operating system, Office software suite, and Azure cloud computing platform.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-applied-scientist-windows-copilot/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>b6ccc692-081</externalid>
      <Title>AI Security Engineer</Title>
      <Description><![CDATA[<p>Perplexity is seeking a highly skilled AI Security Engineer to join its security team, driving the protection of next-generation AI systems against adversarial threats. In this role, you&#39;ll design and implement robust mechanisms to secure self-hosted models, LLM APIs, agents, MCPs, and the core AI stack.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Define, build, and refine mechanisms to secure AI systems (including self-hosted models, LLM APIs, agents, MCPs, and other core components of the AI stack) against adversarial behavior of all kinds</li>
<li>Understand technically complex AI systems, identify potential weaknesses in their architecture, and implement improvements</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Hands-on coding and prompting experience</li>
<li>Bachelor of Science or Master of Science in Computer Science or a related field, or equivalent experience</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$220K – $405K</Salaryrange>
      <Skills>hands-on coding and prompting experience, Bachelor of Science or Master of Science in Computer Science or a related field, or equivalent experience, good understanding of LLMs, AI architecture patterns, machine learning models, and related technologies such as MCP, experience developing and implementing security procedures and policies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Perplexity</Employername>
      <Employerlogo>https://logos.yubhub.co/perplexity.com.png</Employerlogo>
      <Employerdescription>Perplexity is a leading AI company that provides innovative solutions for various industries. With a strong focus on security, they aim to protect next-generation AI systems against adversarial threats.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/perplexity/cdbf6ccb-2078-4499-b0a6-af8a04754eee</Applyto>
      <Location>San Francisco, London, New York City, Remote (United States), Serbia</Location>
      <Country></Country>
      <Postedate>2026-03-04</Postedate>
    </job>
  </jobs>
</source>