<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>cba88898-896</externalid>
      <Title>Research Engineer, Infrastructure, Kernels</Title>
      <Description><![CDATA[<p>We&#39;re looking for an infrastructure research engineer to design, optimize, and maintain the compute foundations that power large-scale language model training. You will develop high-performance ML kernels (e.g., CUDA, CuTe, Triton), enable efficient low-precision arithmetic, and improve the distributed compute stack that makes training large models possible.</p>
<p>This role is perfect for an engineer who enjoys working close to the metal and across the research boundary. You&#39;ll collaborate with researchers and systems architects to bridge algorithmic design with hardware efficiency. You&#39;ll prototype new kernel implementations, profile performance across hardware generations, and help define the numerical and parallelism strategies that determine how we scale next-generation AI systems.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement custom ML kernels (e.g., CUDA, CuTe, Triton) for core LLM operations such as attention, matrix multiplication, gating, and normalization, optimized for modern GPU and accelerator architectures.</li>
<li>Design compute primitives that reduce memory bandwidth bottlenecks and improve kernel compute efficiency.</li>
<li>Collaborate with research teams to align kernel-level optimizations with model architecture and algorithmic goals.</li>
<li>Develop and maintain a library of reusable kernels and performance benchmarks that serve as the foundation for internal model training.</li>
<li>Contribute to infrastructure stability and scalability, ensuring reproducibility, consistency across precision formats, and high utilization of compute resources.</li>
<li>Document and share insights through internal talks, technical papers, or open-source contributions to strengthen the broader ML systems community.</li>
</ul>
<p><strong>Skills and Qualifications</strong></p>
<p>Minimum qualifications:</p>
<ul>
<li>Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.</li>
<li>Strong engineering skills: the ability to contribute performant, maintainable code and to debug complex codebases.</li>
<li>Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.</li>
<li>Thrive in a highly collaborative environment involving many different cross-functional partners and subject matter experts.</li>
<li>A bias for action: you take the initiative to work across different stacks and teams wherever you spot an opportunity to make sure something ships.</li>
<li>Proficiency in CUDA, CuTe, Triton, or other GPU programming frameworks.</li>
<li>Demonstrated ability to analyze, profile, and optimize compute-intensive workloads.</li>
</ul>
<p>Preferred qualifications:</p>
<ul>
<li>Experience training or supporting large-scale language models with tens of billions of parameters or more.</li>
<li>Track record of improving research productivity through infrastructure design or process improvements.</li>
<li>Experience developing or tuning kernels for deep learning frameworks such as PyTorch, JAX, or custom accelerators.</li>
<li>Familiarity with tensor parallelism, pipeline parallelism, or distributed data processing frameworks.</li>
<li>Experience implementing low-precision formats (FP8, INT8, block floating point) or contributing to related compiler stacks (e.g., XLA, TVM).</li>
<li>Contributions to open-source GPU, ML systems, or compiler optimization projects.</li>
<li>Prior research or engineering experience in numerical optimization, communication-efficient training, or scalable AI infrastructure.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$350,000 - $475,000 USD</Salaryrange>
      <Skills>CUDA, CuTe, Triton, GPU programming, PyTorch, JAX, kernel optimization, performance profiling, large-scale LLM training, tensor parallelism, pipeline parallelism, low-precision formats (FP8, INT8, block floating point), compiler stacks (XLA, TVM), numerical optimization, communication-efficient training, scalable AI infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Thinking Machines Lab</Employername>
      <Employerlogo>https://logos.yubhub.co/thinkingmachines.ai.png</Employerlogo>
      <Employerdescription>Thinking Machines Lab is an AI research and product company whose team previously created widely used AI products, including ChatGPT and Character.ai, and open-source projects like PyTorch.</Employerdescription>
      <Employerwebsite>https://thinkingmachines.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/thinkingmachines/jobs/5013934008</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1aad838f-387</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability.</p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li>Data Governance &amp; Access Control: Design and implement robust access control systems ensuring only authorized users can access sensitive data.</li>
<li>Financial Data Infrastructure: Build and maintain data pipelines and warehouses powering business-critical reporting.</li>
<li>Cloud Storage &amp; Reliability: Architect disaster recovery, backup, and replication systems for petabyte-scale data.</li>
<li>Data Platform &amp; Tooling: Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark.</li>
</ul>
<p>You&#39;ll work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p>To be successful in this role, you&#39;ll need:</p>
<ul>
<li>10+ years of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems.</li>
<li>3+ years of experience leading large scale, complex projects or teams as an engineer or tech lead.</li>
<li>Deep experience with at least one of:
<ul>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar.</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS).</li>
</ul>
</li>
<li>Can navigate complex technical tradeoffs between performance, cost, security, and maintainability.</li>
<li>Have excellent collaboration skills - you work well with both technical and non-technical stakeholders.</li>
</ul>
<p>Strong candidates may also have:</p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure.</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures.</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale.</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks.</li>
<li>Experience working in fintech, financial services, or highly regulated environments.</li>
<li>Security engineering background with focus on data protection and access controls.</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran.</li>
<li>Storage: GCS, S3.</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS.</li>
<li>Languages: Python, Go, SQL.</li>
</ul>
<p>The annual compensation range for this role is $405,000-$485,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000-$485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, data reliability, availability, cost efficiency, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>5008b4f7-b62</externalid>
      <Title>Member of Technical Staff - Data Research Engineer - MAI Superintelligence Team</Title>
      <Description><![CDATA[<p>We are seeking Data Research Engineers to join our Multimodal team, where we are building the next generation of foundation models across vision, language, audio, and beyond. If you are passionate about designing and curating high-quality datasets to power frontier AI models, this role is for you.</p>
<p>In this role, you’ll work at the intersection of data and innovation—collaborating with scientists, engineers, and annotators to curate, analyze, and evaluate diverse multimodal data sources critical to model development. You will lead efforts to:</p>
<ul>
<li>Develop novel data collection strategies</li>
<li>Improve dataset quality and integrity</li>
<li>Understand data-driven model behaviors</li>
<li>Align datasets with ethical and societal values</li>
</ul>
<p>This is a cross-disciplinary, high-impact role ideal for engineers who want to push the boundaries of what AI can learn from data, especially in multimodal contexts.</p>
<p>Microsoft Superintelligence Team</p>
<p>The MAI Superintelligence Team (MAIST) is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence—ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control.</p>
<p>Responsibilities</p>
<ul>
<li>Create high-quality datasets for training and evaluation; run experiments on new datasets (data ablations) to assess their impact and determine the most effective data.</li>
<li>Develop and maintain scalable data pipelines for multimodal ingestion, preprocessing, filtering, and annotation.</li>
<li>Analyze real-world multimodal datasets to assess quality, diversity, relevance, and identify areas for improvement.</li>
<li>Build lightweight tools and workflows for dataset auditing, visualization, and versioning.</li>
<li>Collaborate with Safety, Ethics, and Governance teams to ensure datasets meet standards for quality, privacy, and responsible AI practices.</li>
<li>Embody our culture and values.</li>
</ul>
<p>Qualifications</p>
<ul>
<li>Bachelor’s Degree in AI, Computer Science, Data Science, Statistics, Physics, Engineering, or related technical discipline AND 4+ years technical engineering experience with coding in languages including, but not limited to, Python and common data libraries (Pandas, NumPy, etc.) OR equivalent experience.</li>
<li>2+ years of experience in data analysis or data engineering, including work with large-scale datasets that are unstructured or semi-structured.</li>
<li>Proficiency in statistics and exploratory data analysis methods.</li>
<li>Familiarity with data processing frameworks such as Spark, Ray, or Apache Beam.</li>
<li>Ability to communicate technical findings clearly to research and product teams.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>Python, Pandas, NumPy, Spark, Ray, Apache Beam, Data analysis, Data engineering, Statistics, Exploratory data analysis, Data processing frameworks, Lightweight tools and workflows, Dataset auditing, Visualization, Versioning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-research-engineer-mai-superintelligence-team-6/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>783eb1af-88c</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled and experienced Principal Software Engineer to join our dynamic team. The ideal candidate will have a solid background in data engineering and data analytics, with a proven track record of designing and implementing scalable data solutions.</p>
<p>As a Principal Software Engineer, you will play a key role in driving our data strategy, ensuring the integrity and accessibility of our data and leveraging data insights to support business decisions.</p>
<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions.</li>
<li>Develop and optimize data models to support data analytics.</li>
<li>Utilize advanced analytics techniques to extract insights from large datasets and drive data-driven decision making.</li>
<li>Implement data validation frameworks and monitoring systems to detect and resolve data quality issues.</li>
<li>Troubleshoot and resolve issues in data pipelines to ensure timely and accurate data delivery.</li>
<li>Work with a security-first mindset, focusing on system scalability and maintainability.</li>
<li>Coach and mentor peers and emerging team members while advocating for best practices.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master&#39;s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor&#39;s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>6+ years of experience in software engineering, with a focus on data engineering and data analytics.</li>
<li>Solid experience with data processing frameworks such as Apache Spark, Hadoop.</li>
<li>Expertise in SQL and experience with RDBMS, Key Value stores.</li>
<li>Familiarity with cloud platforms and data services.</li>
<li>Excellent problem-solving skills and the ability to work independently and as part of a team.</li>
<li>Solid communication skills.</li>
<li>Familiarity with Azure.</li>
<li>Experience with machine learning and data science tools and frameworks.</li>
<li>Knowledge of data visualization tools (e.g., Tableau, Power BI).</li>
<li>Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>executive</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Apache Spark, Hadoop, SQL, RDBMS, Key Value stores, cloud platforms, data services, Azure, machine learning, data science tools, data visualization tools, containerization, orchestration, data engineering, data analytics, data processing frameworks, data validation frameworks, data monitoring systems, security-first mindset, system scalability, maintainability, mentorship, best practices</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft Advertising</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft Advertising empowers the world&apos;s largest advertisers to reach their maximum potential through digital advertising solutions on the Microsoft Advertising platform.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-36/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>f0e01847-2e0</externalid>
      <Title>Member of Technical Staff - Data Research Engineer - MAI Superintelligence Team</Title>
      <Description><![CDATA[<p>We are seeking Data Research Engineers to join our Multimodal team, where we are building the next generation of foundation models across vision, language, audio, and beyond. If you are passionate about designing and curating high-quality datasets to power frontier AI models, this role is for you. In this role, you’ll work at the intersection of data and innovation—collaborating with scientists, engineers, and annotators to curate, analyze, and evaluate diverse multimodal data sources critical to model development. You will lead efforts to:</p>
<ul>
<li>Develop novel data collection strategies</li>
<li>Improve dataset quality and integrity</li>
<li>Understand data-driven model behaviors</li>
<li>Align datasets with ethical and societal values</li>
</ul>
<p>This is a cross-disciplinary, high-impact role ideal for engineers who want to push the boundaries of what AI can learn from data, especially in multimodal contexts.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year (U.S.) or USD $158,400 – $258,000 per year (San Francisco Bay area and New York City metropolitan area)</Salaryrange>
      <Skills>Python, Pandas, NumPy, data libraries, data analysis, data engineering, large-scale unstructured or semi-structured datasets, statistics, exploratory data analysis methods, data processing frameworks, Spark, Ray, Apache Beam</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-research-engineer-mai-superintelligence-team-4/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>6dc2220e-188</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Do you enjoy solving complex technical problems on a global scale? Microsoft AI Monetization enables advertisers to measure impact and optimize spend through secure, privacy-preserving data collaboration. The Measurement and Data Collaboration Engineering team is responsible for building the next generation of privacy-safe measurement systems that allow advertisers and partners to work with data in highly secure environments.</p>
<p>Our platform integrates Microsoft’s Azure Confidential Compute Clean Room (ACCR) with third-party clean room partners to deliver a unified, compliant, and scalable measurement ecosystem. We are looking for a Senior Software Engineer who is passionate about distributed systems, privacy-enhancing technologies, secure data processing, and building reliable production services with global impact.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build highly scalable backend services and data pipelines that support privacy-preserving measurement and analytics scenarios using C# or Java.</li>
<li>Design secure data collaboration workflows across multiple parties using modern privacy technologies, governance controls, and minimum-aggregation protections.</li>
<li>Drive integrations with external data and measurement partners, designing stable interfaces, schema governance patterns, and robust validation.</li>
<li>Lead initiatives to make delivery of high-quality software routine and efficient through the entire software development lifecycle, from inception and technical design through testing and excellence in production operations.</li>
<li>Collaborate closely with product, data science, privacy, and security teams to translate measurement needs into scalable platform capabilities.</li>
<li>Contribute to engineering team best practices leveraging AI dev tools across the software development lifecycle (SDLC).</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>5+ years of experience building and operating large-scale distributed systems, backend services, or data platforms.</li>
<li>Experience with large-scale data processing frameworks (e.g. Spark, SQL-based pipelines) and cloud platforms.</li>
<li>Understanding of secure data processing, encryption, identity, and access control.</li>
<li>Experience building and operating services with strict SLAs.</li>
<li>Experience with Azure.</li>
<li>Background in advertising, marketing technology, attribution, or large-scale analytics.</li>
<li>Experience integrating third-party (vendor/partner) platforms, identity systems, or data collaboration technologies.</li>
<li>Solid problem-solving skills with a focus on reliability, observability, and system design.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 – $234,700 per year</Salaryrange>
      <Skills>C#, Java, C, C++, JavaScript, Python, Spark, SQL, Azure, Secure data processing, Encryption, Identity, Access control, Large-scale data processing frameworks, Cloud platforms, Azure, Advertising, Marketing technology, Attribution, Large-scale analytics, Third-party platforms, Identity systems, Data collaboration technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a subsidiary of Microsoft, a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-12/</Applyto>
      <Location>Multiple Locations, United States</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>1ace7478-7a2</externalid>
      <Title>Staff+ Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Data Infrastructure designs, operates, and scales secure, privacy-respecting systems that power data-driven decisions across Anthropic. Our mission is to provide data processing, storage, and access that are trusted, fast, and easy to use.</p>
<p>We&#39;re looking for infrastructure engineers who thrive working at the intersection of data systems, security, and scalability. You&#39;ll tackle diverse challenges ranging from building financial reporting pipelines to architecting access control systems to ensuring cloud storage reliability. This role offers the opportunity to work directly with data scientists, analysts, and business stakeholders while diving deep into cloud infrastructure primitives.</p>
<p><strong>Responsibilities:</strong></p>
<p>Within Data Infra, you may be matched to critical business areas including:</p>
<ul>
<li><strong>Data Governance &amp; Access Control:</strong> Design and implement robust access control systems ensuring only authorized users can access sensitive data. Build infrastructure for permission management, audit logging, and compliance requirements. Work on IAM policies, ACLs, and security controls that scale across thousands of users and systems.</li>
<li><strong>Financial Data Infrastructure:</strong> Build and maintain data pipelines and warehouses powering business-critical reporting. Ensure data integrity, accuracy, and availability for complex financial systems, including third party revenue ingestion pipelines; manage the external relationships as needed to drive upstream dependencies. Own the reliability of systems processing revenue, usage, and business metrics.</li>
<li><strong>Cloud Storage &amp; Reliability:</strong> Architect disaster recovery, backup, and replication systems for petabyte-scale data. Ensure high availability and durability of data stored in cloud object storage (GCS, S3). Build systems that protect against data loss and enable rapid recovery.</li>
<li><strong>Data Platform &amp; Tooling:</strong> Scale data processing infrastructure using technologies like BigQuery, BigTable, Airflow, dbt, and Spark. Optimize query performance, manage costs, and enable self-service analytics across the organization.</li>
</ul>
<p><strong>You might be a good fit if you:</strong></p>
<ul>
<li>Have 10+ years (not including internships or co-ops) of experience in a Software Engineer role, building data infrastructure, storage systems, or related distributed systems</li>
<li>Have 3+ years (not including internships or co-ops) of experience leading large scale, complex projects or teams as an engineer or tech lead</li>
<li>Can set technical direction for a team, not just execute within it</li>
<li>Have deep experience with at least one of:
<ul>
<li>Strong proficiency in programming languages like Python, Go, Java, or similar</li>
<li>Experience with infrastructure-as-code (Terraform, Pulumi) and cloud platforms (GCP, AWS)</li>
</ul>
</li>
</ul>
<p><strong>Strong candidates may also have:</strong></p>
<ul>
<li>Background in data warehousing, ETL/ELT pipelines, or analytics infrastructure</li>
<li>Experience with Kubernetes, containerization, and cloud-native architectures</li>
<li>Track record of improving data reliability, availability, or cost efficiency at scale</li>
<li>Knowledge of column-oriented databases, OLAP systems, or big data processing frameworks</li>
<li>Experience working in fintech, financial services, or highly regulated environments</li>
<li>Security engineering background with focus on data protection and access controls</li>
</ul>
<p><strong>Technologies We Use:</strong></p>
<ul>
<li>Data: BigQuery, BigTable, Airflow, Cloud Composer, dbt, Spark, Segment, Fivetran</li>
<li>Storage: GCS, S3</li>
<li>Infrastructure: Terraform, Kubernetes, GCP, AWS</li>
<li>Languages: Python, Go, SQL</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$405,000 - $485,000 USD</Salaryrange>
      <Skills>Python, Go, Java, Terraform, Pulumi, GCP, AWS, BigQuery, BigTable, Airflow, dbt, Spark, Segment, Fivetran, GCS, S3, Kubernetes, containerization, cloud-native architectures, data warehousing, ETL/ELT pipelines, analytics infrastructure, column-oriented databases, OLAP systems, big data processing frameworks, fintech, financial services, highly regulated environments, security engineering, data protection, access controls, data governance, cloud storage, reliability, data platform, tooling, self-service analytics, data processing infrastructure, query performance, cost management</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic&apos;s mission is to create reliable, interpretable, and steerable AI systems. The company is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5114768008</Applyto>
      <Location>San Francisco, CA | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>453f53c5-e0d</externalid>
      <Title>Research Engineer, AI Observability</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</p>
<p><strong>About the Team</strong></p>
<p>As AI training and deployments scale, the volume of data we need to monitor and understand is exploding. Our team uses Claude itself to make sense of this data. We own an integrated set of tools enabling Anthropic to ask open-ended questions, surface unexpected patterns, and maintain meaningful human oversight over massive datasets.</p>
<p>Our tools are widely adopted internally — powering ongoing enforcement, threat intelligence investigations, model audits, and more — and we’re looking for experienced engineers and researchers to both scale up existing applications and go zero-to-one on new ones.</p>
<p><strong>About the Role</strong></p>
<p>As a Research Engineer on our team, you&#39;ll design and build systems that let AI analyze large, unstructured datasets — think tens or hundreds of thousands of conversations or documents — and produce structured, trustworthy insights. You&#39;ll work across the full stack, from core analysis frameworks through user-facing apps and interfaces.</p>
<p>This is a high-leverage role. The tools you build will be used by dozens of researchers and investigators, and directly shape our ability to measure and mitigate both misuse and misalignment.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and implement AI-based monitoring systems for AI training and deployment</li>
<li>Extend and improve core frameworks for processing large volumes of unstructured text</li>
<li>Partner with researchers and safety teams across Anthropic to understand their analytical needs and build solutions</li>
<li>Develop agentic integrations that allow AI systems to autonomously investigate and act on analytical findings</li>
<li>Contribute to the strategic direction of the team, including decisions about what to build, what to partner on, and where to invest</li>
</ul>
<p><strong>You May Be a Good Fit If You:</strong></p>
<ul>
<li>Have 5+ years of software engineering experience, with meaningful exposure to ML systems</li>
<li>Are excited about the problem of scaling human oversight of AI systems</li>
<li>Are familiar with LLM application development (context engineering, evaluation, orchestration)</li>
<li>Enjoy building tools that other people use — you care about UX, reliability, and documentation</li>
<li>Can context-switch between deep infrastructure work and user-facing product thinking</li>
<li>Thrive in collaborative, cross-functional environments</li>
</ul>
<p><strong>Strong Candidates May Also Have:</strong></p>
<ul>
<li>Research experience in AI safety, alignment, or responsible deployment</li>
<li>Practical experience with both data science and engineering, including developing and using large-scale data processing frameworks</li>
<li>Experience with productionizing internal tools or building developer-facing platforms</li>
<li>Background in building monitoring or observability systems</li>
<li>Comfort with ambiguity — our team is small and growing, and you&#39;ll help define what we become</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000 - $405,000 USD</Salaryrange>
      <Skills>software engineering, ML systems, LLM application development, context engineering, evaluation, orchestration, UX, reliability, documentation, data science, large-scale data processing frameworks, productionizing internal tools, developer-facing platforms, monitoring, observability systems, AI safety, alignment, responsible deployment, comfort with ambiguity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organization with a mission to create reliable, interpretable, and steerable AI systems. Our team is a group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5125083008</Applyto>
      <Location>San Francisco, CA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>55f3e52b-904</externalid>
      <Title>Member of Technical Staff - Data Research Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Data Research Engineer at their Redmond office. This role sits at the intersection of data and innovation—collaborating with scientists, engineers, and annotators to curate, analyze, and evaluate diverse multimodal data sources critical to model development. You will lead efforts to develop novel data collection strategies, improve dataset quality and integrity, understand data-driven model behaviors, and align datasets with ethical and societal values.</p>
<p><strong>About the Role</strong></p>
<p>As a Data Research Engineer, you will be responsible for creating high-quality datasets for training and evaluation, running experiments on new datasets (data ablations) to assess their impact and determine the most effective data. You will also develop and maintain scalable data pipelines for multimodal ingestion, preprocessing, filtering, and annotation. Additionally, you will analyze real-world multimodal datasets to assess quality, diversity, relevance, and identify areas for improvement. You will build lightweight tools and workflows for dataset auditing, visualization, and versioning. You will collaborate with Safety, Ethics, and Governance teams to ensure datasets meet standards for quality, privacy, and responsible AI practices.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Create high-quality datasets for training and evaluation</li>
<li>Run experiments on new datasets (data ablations) to assess their impact and determine the most effective data</li>
<li>Develop and maintain scalable data pipelines for multimodal ingestion, preprocessing, filtering, and annotation</li>
<li>Analyze real-world multimodal datasets to assess quality, diversity, relevance, and identify areas for improvement</li>
<li>Build lightweight tools and workflows for dataset auditing, visualization, and versioning</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years technical engineering experience with coding in languages including, but not limited to, Python and common data libraries (Pandas, NumPy, etc.)</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in statistics and exploratory data analysis methods</li>
<li>Familiarity with data processing frameworks such as Spark, Ray, or Apache Beam</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to communicate technical findings clearly to research and product teams</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>Python, Pandas, NumPy, Spark, Ray, Apache Beam, statistics, exploratory data analysis, data processing frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data science. They are known for their innovative products and services that help organizations make data-driven decisions. Microsoft AI is committed to empowering every person and organization on the planet to achieve more.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-research-engineer-mai-superintelligence-team-5/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>41ac4a39-9a3</externalid>
      <Title>Member of Technical Staff - Pretraining Text Data</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Pretraining Text Data at their Redmond office. This role sits at the intersection of data and innovation, designing and curating the high-quality text datasets that power the next generation of foundation large language models.</p>
<p><strong>About the Role</strong></p>
<p>We are seeking engineers and researchers to join our Pretraining Text Data team, where we are building the next generation of foundation large language models. If you are passionate about designing and curating high-quality datasets to power frontier AI models, this role is for you. In this role, you’ll work at the intersection of data and innovation—collaborating with scientists, engineers, and annotators to curate, analyze, and evaluate diverse text datasets critical to model development. You will lead efforts to:</p>
<ul>
<li>Develop novel data collection strategies</li>
<li>Improve dataset quality and integrity</li>
<li>Understand data-driven model behaviors</li>
<li>Train models to understand the impact of data and data mixes</li>
<li>Align datasets with ethical and societal values</li>
</ul>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Create high-quality datasets for training and evaluation; run experiments on new datasets (data ablations) to assess their impact and determine the most effective data.</li>
<li>Develop and maintain scalable data pipelines for text data ingestion, preprocessing, filtering, and annotation.</li>
<li>Analyze real-world text datasets to assess quality, diversity, relevance, and identify areas for improvement.</li>
<li>Build lightweight tools and workflows for dataset auditing, visualization, and versioning.</li>
<li>Collaborate with Safety, Ethics, and Governance teams to ensure datasets meet standards for quality, privacy, and responsible AI practices.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in AI, Computer Science, Data Science, Statistics, Physics, Engineering, or related technical discipline AND 4+ years technical engineering experience with coding in languages including, but not limited to, Python and common data libraries (Pandas, NumPy, etc.) OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in statistics and exploratory data analysis methods.</li>
<li>Familiarity with data processing frameworks such as Spark, Ray, or Apache Beam.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to communicate technical findings clearly to research and product teams.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and resources</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>Python, Pandas, NumPy, Spark, Ray, Apache Beam, statistics, exploratory data analysis, data processing frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data science. They are known for their innovative products and services that empower individuals and organizations to achieve more. Microsoft AI is committed to pushing the boundaries of what is possible with AI and making it accessible to everyone.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-pretraining-text-data-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>88f19c96-557</externalid>
      <Title>Member of Technical Staff, Data Research Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Data Research Engineer to join their MAI Superintelligence Team in London. This role sits at the intersection of data and innovation, curating, analysing, and evaluating the multimodal datasets critical to model development.</p>
<p><strong>About the Role</strong></p>
<p>As a Data Research Engineer, you will be responsible for creating high-quality datasets for training and evaluation, running experiments on new datasets to assess their impact, and developing and maintaining scalable data pipelines for multimodal ingestion, pre-processing, filtering, and annotation. You will also analyse real-world multimodal datasets to assess quality, diversity, relevance, and identify areas for improvement.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Create high-quality datasets for training and evaluation</li>
<li>Run experiments on new datasets to assess their impact and determine the most effective data</li>
<li>Develop and maintain scalable data pipelines for multimodal ingestion, pre-processing, filtering, and annotation</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor&#39;s Degree in AI, Computer Science, Data Science, Statistics, Physics, Engineering, or a related technical field</li>
<li>Technical engineering experience with coding in languages including, but not limited to, Python and common data libraries (Pandas, NumPy, etc.)</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in statistics and exploratory data analysis methods</li>
<li>Familiarity with data processing frameworks such as Spark, Ray, Apache Beam</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to communicate technical findings effectively to research and product teams</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>Python, Pandas, NumPy, Spark, Ray, Apache Beam, Data processing frameworks, Machine learning algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data science. They are known for their innovative products and services that empower individuals and organizations to achieve more. Microsoft AI is committed to making a positive impact on society through their technology and research.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-research-engineer-mai-superintelligence-team/</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>2902359a-64d</externalid>
      <Title>Member of Technical Staff, Infrastructure Data &amp; Analytics</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff, Infrastructure Data &amp; Analytics to join their MAI SuperIntelligence Team. This role sits at the heart of strategic decision-making, turning raw telemetry into trusted, decision-quality insights on utilization, capacity, readiness, and efficiency. You&#39;ll work directly with leadership to shape the company&#39;s direction in the Superintelligence space.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff, Infrastructure Data &amp; Analytics, you will act as the technical lead and owner for infrastructure analytics across compute, storage, and networking. You will design and build durable, scalable data pipelines that ingest telemetry from clusters, schedulers, health systems, and capacity trackers into the data warehouse. You will define and standardize core metrics and semantics (e.g., utilization, occupancy, MFU, goodput, capacity readiness, delivery-to-production). You will architect and maintain self-service dashboards and APIs for fleet, cluster, and squad-level visibility. You will partner closely with stakeholders across Supercomputing Infra, Researchers, Strategy and Executives to ensure metrics reflect operational and business reality.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Act as the technical lead and owner for infrastructure analytics across compute, storage, and networking.</li>
<li>Design and build durable, scalable data pipelines that ingest telemetry from clusters, schedulers, health systems, and capacity trackers into the data warehouse.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>8+ years technical engineering experience with data engineering, analytics, or data science, with increasing technical ownership in a startup environment.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Distributed data processing frameworks and large-scale data systems.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong communication skills; can explain complex systems clearly to senior leaders.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>Certain roles may be eligible for benefits and other compensation.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>data engineering, analytics, data science, distributed data processing frameworks, large-scale data systems, ETL orchestration frameworks, Airflow, Dagster</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that empowers every person and every organization on the planet to achieve more. With a growth mindset, they innovate to empower others and collaborate to realize their shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-infrastructure-data-analytics-mai-superintelligence-team/</Applyto>
      <Location>Multiple Locations, United States</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>5c253d60-00b</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Senior Software Engineer at their Redmond office. This role sits at the heart of the company&#39;s privacy-preserving measurement and analytics work, building reliable, globally deployed services that let advertisers and partners collaborate on data in highly secure environments.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a Senior Software Engineer who is passionate about distributed systems, privacy-enhancing technologies, secure data processing, and building reliable production services with global impact. The successful candidate will design and build highly scalable backend services and data pipelines that support privacy-preserving measurement and analytics scenarios using C# or Java. They will also drive integrations with external data and measurement partners, designing stable interfaces, schema governance patterns, and robust validation.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and build highly scalable backend services and data pipelines that support privacy-preserving measurement and analytics scenarios using C# or Java.</li>
<li>Drive integrations with external data and measurement partners, designing stable interfaces, schema governance patterns, and robust validation.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large-scale data processing frameworks (e.g. Spark, SQL-based pipelines) and cloud platforms.</li>
<li>Understanding of secure data processing, encryption, identity, and access control.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Solid problem-solving skills with a focus on reliability, observability, and system design.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range of USD $119,800 - $234,700 per year.</li>
<li>Benefits and other compensation, including health and wellbeing benefits, professional development opportunities, and financial benefits.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $119,800 - $234,700 per year</Salaryrange>
      <Skills>C#, Java, Spark, SQL-based pipelines, cloud platforms, secure data processing, encryption, identity, access control, large-scale data processing frameworks, distributed systems, privacy-enhancing technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
<Employerdescription>Microsoft AI Monetization enables advertisers to measure impact and optimize spend through secure, privacy-preserving data collaboration. The team builds the next generation of privacy-safe measurement systems that allow advertisers and partners to work with data in highly secure environments.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-19/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>91ae81f0-b2b</externalid>
      <Title>Data Engineer II</Title>
      <Description><![CDATA[<p>As a Data Engineer, you will be involved in the entire development lifecycle, from brainstorming ideas to implementing scalable solutions that unlock data insights. You will collaborate with stakeholders to gather requirements, design data models, and build pipelines that support reporting, analytics, and exploratory analysis.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Design, build, and sustain efficient, scalable, and performant data engineering pipelines to ingest, sanitize, transform (ETL/ELT), and deliver high-volume, high-velocity data from diverse sources.</li>
<li>Ensure reliable and consistent processing of workloads at varying granularities, including real-time, near-real-time, mini-batch, batch, and on-demand.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Strong proficiency in writing and analyzing complex SQL, Python, or any 4GL.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Data Engineering, Cloud Data Warehouses, Distributed data processing frameworks, Real-time/streaming data technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II/212291</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-02-04</Postedate>
    </job>
    <job>
      <externalid>779ffd11-5cf</externalid>
      <Title>Data Engineer II</Title>
      <Description><![CDATA[<p>As a Data Engineer, you will be involved in the entire development lifecycle, from brainstorming ideas to implementing scalable solutions that unlock data insights. You will collaborate with stakeholders to gather requirements, design data models, and build pipelines that support reporting, analytics, and exploratory analysis.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Design, build, and sustain efficient, scalable, and performant data engineering pipelines to ingest, sanitize, transform (ETL/ELT), and deliver high-volume, high-velocity data from diverse sources.</li>
<li>Ensure reliable and consistent processing of workloads at varying granularities, including real-time, near-real-time, mini-batch, batch, and on-demand.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Strong proficiency in writing and analyzing complex SQL, Python, or any 4GL.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Data Engineering, Cloud Data Warehouses, Distributed data processing frameworks, Real-time/streaming data technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II/212287</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-02-04</Postedate>
    </job>
    <job>
      <externalid>d91f2ddd-f1b</externalid>
      <Title>Data Engineer II</Title>
      <Description><![CDATA[<p>As a Data Engineer, you will be involved in the entire development lifecycle, from brainstorming ideas to implementing scalable solutions that unlock data insights. You will collaborate with stakeholders to gather requirements, design data models, and build pipelines that support reporting, analytics, and exploratory analysis.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Design, build, and sustain efficient, scalable, and performant data engineering pipelines to ingest, sanitize, transform (ETL/ELT), and deliver high-volume, high-velocity data from diverse sources.</li>
<li>Ensure reliable and consistent processing of workloads at varying granularities, including real-time, near-real-time, mini-batch, batch, and on-demand.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Strong proficiency in writing and analyzing complex SQL, Python, or any 4GL.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Python, Data Engineering, Cloud Data Warehouses, Distributed data processing frameworks, Real-time/streaming data technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II/212288</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-02-04</Postedate>
    </job>
  </jobs>
</source>