<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>8610ea3d-93b</externalid>
      <Title>Cloud Platform Engineer</Title>
<Description><![CDATA[<p>The Business Development/Management Technology team at FIC &amp; Risk Technology is building and operating platforms that support recruiting, hiring, and onboarding of investment professionals. We are currently integrating multiple legacy and new systems into a unified, cloud-native platform to standardize processes, workflows, and data models across the organization.</p>
<p>This integration will enable seamless collaboration between teams and provide reliable, scalable data for analytics and reporting. We are looking for a Cloud Platform Engineer to design, build, and operate our AWS-based infrastructure and data platforms, using modern DevOps practices, infrastructure as code, and secure, well-engineered services in Python and C#.</p>
<p>The successful candidate will collaborate with global technology and business teams to design cloud-native solutions that support business development and onboarding workflows. They will partner with global stakeholders to understand requirements and translate them into secure, scalable AWS architectures and platform capabilities.</p>
<p>Key responsibilities include leading the end-to-end delivery of cloud and platform features, including design, implementation (Python/C#), infrastructure as code, testing, and deployment using DevOps practices.</p>
<p>We are looking for a highly skilled engineer with 6+ years of experience in software or platform engineering, with significant time spent building and operating solutions in cloud environments (AWS preferred).</p>
<p>The ideal candidate will have strong hands-on programming experience in Python and C#, with solid understanding of object-oriented design, design patterns, service-oriented / microservices architectures, concurrency, and SOLID principles.</p>
<p>They will also have proven experience designing and operating AWS-based platforms (e.g., EC2, ECS/EKS, Lambda, S3, RDS, IAM) using infrastructure as code (Terraform, CloudFormation, or CDK).</p>
<p>In addition, the successful candidate will have practical experience implementing DevOps practices and CI/CD pipelines (e.g., Jenkins, GitHub Actions, Azure DevOps), including automated testing, security scanning, and deployment.</p>
<p>Experience supporting data science and analytics platforms, including orchestration tools such as Airflow, distributed processing engines such as Spark, and cloud-native data pipelines is also required.</p>
<p>A good understanding of SQL and core database concepts is required; familiarity with AWS analytics services (e.g., Glue, EMR, Redshift, Athena) is a plus.</p>
<p>Awareness of cloud security best practices, including IAM, network security, data encryption, and secure configuration management is also necessary.</p>
<p>Strong problem-solving and analytical skills, with a demonstrated ability to take ownership, deliver in a fast-paced environment, and collaborate effectively with global teams, are essential.</p>
<p>Excellent communication skills, with the ability to work closely with both technical and non-technical stakeholders, are also required.</p>
<p>Experience estimating, monitoring, and optimizing AWS infrastructure costs, including use of tools such as AWS Cost Explorer, AWS Budgets, and cost-allocation tagging strategies is desirable.</p>
<p>Experience designing and operating workloads across multiple cloud environments and on-premises, using centralized policies, governance, and controls to support business-aligned teams is also beneficial.</p>
<p>Working knowledge of networking across on-premises and cloud environments, including VPC design, subnets, routing, VPNs/Direct Connect, load balancing, DNS, and network security controls is necessary.</p>
<p>Nice to have experience with additional big data tools or platforms (e.g., Kafka, Databricks, Snowflake, Flink).</p>
<p>Familiarity with Capital Markets concepts and operating models is also beneficial.</p>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>AWS, Python, C#, DevOps, Infrastructure as Code, Cloud Security, SQL, Database Concepts, Networking, Airflow, Spark, Kafka, Databricks, Snowflake, Flink, Capital Markets</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology division at Millennium that builds and operates platforms supporting investment professionals.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>175000</Compensationmin>
      <Compensationmax>250000</Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955139979</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22bcbb50-ef4</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Data Platform team at xAI builds and operates the infrastructure responsible for all large-scale data transport and processing across the company.</p>
<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>
<li>Scale and optimise multi-tenant Kafka infrastructure supporting real-time workloads.</li>
<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>
<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>
<li>Debug and optimise distributed systems, with a focus on reliability and performance under load.</li>
<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>
<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>
<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>
<li>Strong debugging, profiling, and performance optimisation skills.</li>
<li>Track record of shipping and maintaining critical infrastructure.</li>
<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, distributed systems, stream processing, large-scale data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.x.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>180000</Compensationmin>
      <Compensationmax>440000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803862007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fa9a54d7-549</externalid>
      <Title>Senior Site Reliability Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>
<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>
<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>
<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>
<p>What we&#39;re looking for:</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>
<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>
<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>
<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>
<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>
<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry).</li>
<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>
<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>
<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>
<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>
<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>
<li>Background in building internal developer platforms or self-service infrastructure</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match.</p>
<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love building highly reliable systems that operate at scale</li>
<li>You’re curious about how to continuously improve system resilience, security, and operations</li>
<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can depend on a variety of factors, including qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>
<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>
<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>
<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>
<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>165000</Compensationmin>
      <Compensationmax>242000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671535006</Applyto>
      <Location>New York, NY / Bellevue, WA</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f32fed2e-9ba</externalid>
      <Title>Engineering Manager, Data Transformation</Title>
      <Description><![CDATA[<p>As an Engineering Manager of the Data Transformation team, you will lead a team of engineers, collaborate with infrastructure and product engineering orgs, and advance the Data Transformation roadmap and adoption at Stripe.</p>
<p>You will drive critical workstreams for Stripe&#39;s topmost priorities around delivering high-quality, materialized datasets for Stripe products and AI agents.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Delivering infrastructure and services that scale to our users&#39; needs with an eye on reliability and efficiency</li>
<li>Leading and managing a team of talented engineers, providing mentorship, guidance, and support to ensure their success</li>
<li>Working with high-visibility teams and their stakeholders to support the Infrastructure org&#39;s key engineering initiatives</li>
<li>Understanding user needs and pain points to prioritize engineering work and deliver high quality solutions that meet user needs</li>
<li>Driving the execution of projects, overseeing the entire development lifecycle from planning to delivery, while maintaining high standards of quality and timely completion</li>
</ul>
<p>You will also provide hands-on technical leadership (architecture/design, vision/direction/requirements setting, and incident response processes) for your reports, work with leaders across the company to create and drive toward the longer term vision of Stripe&#39;s Data Transformation roadmap, and foster a collaborative and inclusive work environment, promoting innovation, knowledge sharing, and continuous improvement within the team.</p>
<p>We&#39;re looking for someone who has 1-3 years of experience managing teams that have shipped and operated data pipelines and critical distributed systems infrastructure, who has successfully recruited and built great teams, and who works effectively cross-functionally: someone able to think rigorously, communicate effectively, and make or coordinate hard decisions and trade-offs.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Kafka, Flink, Spark, Airflow, Python, SQL, API design</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, with millions of companies using its services.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7688358</Applyto>
      <Location>N/A</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>087e2e06-4fb</externalid>
      <Title>Staff Machine Learning Engineer, Ads Auction (Ads Marketplace Quality)</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Machine Learning Engineer to join our Ads Marketplace Quality team. As a key member of the team, you will be responsible for developing and executing a vision to improve our Ads Marketplace at Reddit. You will develop a deep understanding of our marketplace dynamics, identify areas of improvement by getting to the bottom of the data, and design, implement, and ship algorithms to production that improve our ads marketplace efficiency.</p>
<p>In this role, you will specialize in improving and optimizing our ads auction and pricing mechanism, which will have a direct impact on the value delivered to both our advertisers and users. You will also have the opportunity to work on other org-wide strategic initiatives such as supply optimization and ad relevance, where you will drive and execute on Reddit&#39;s vision to transform Reddit into an advertising platform that shows the right ads to the right users at the right time in the right context.</p>
<p>As a Staff Machine Learning Engineer on the Ads Marketplace Quality team, you will be an industry technical leader with domain knowledge in ads marketplace dynamics, auction, and pricing. You will research, formulate, and execute on our mission to build end-to-end algorithmic solutions and deliver value to all three sides of our marketplace.</p>
<p>Responsibilities:</p>
<ul>
<li>Lead and oversee the strategy development, quarterly planning and day-to-day execution of initiatives related to ads marketplace, auction and pricing.</li>
<li>Proactively further our understanding of marketplace dynamics and develop algorithms to improve the efficiency and effectiveness of our ads marketplace, auction and pricing.</li>
<li>Oversee end-to-end ML workflows, from data ingestion and feature engineering to model training, evaluation, and deployment, that optimize the efficiency of the ads marketplace.</li>
<li>Be a mentor: lead both junior and senior engineers in implementing technical designs and reviews, fostering a culture of innovation, technical excellence, and knowledge sharing across the organization.</li>
<li>Be an advocate for the team, collaborating with cross-functional partners (e.g., product management, data science, PMM, sales) to innovate and build products.</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>8+ years of experience with industry-level product development, with at least 5 years focused on data-driven marketplace-optimization problems at scale.</li>
<li>Strong knowledge of ads marketplace optimization. Demonstrated experience architecting ads marketplace design, improving and optimizing ads auction and pricing mechanisms.</li>
<li>Solid understanding of large-scale data processing, distributed computing, and data infrastructure (e.g., Spark, Kafka, Beam, Flink).</li>
<li>Proficiency in machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries for feature engineering, model training, and inference.</li>
<li>Proficiency with programming languages (Java, Python, Golang, C++, or similar) and statistical analysis.</li>
<li>Proven technical leadership in cross-functional settings, driving architectural decisions and influencing stakeholders (product, data science, privacy, legal).</li>
<li>Excellent communication, mentoring, and collaboration skills to align teams on a long-term vision for ads marketplace optimization.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive Healthcare Benefits</li>
<li>401k Matching</li>
<li>Workspace benefits for your home office</li>
<li>Personal &amp; Professional development funds</li>
<li>Family Planning Support</li>
<li>Flexible Vacation (please use them!) &amp; Reddit Global Wellness Days</li>
<li>4+ months paid Parental Leave</li>
<li>Paid Volunteer time off</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$230,000-$322,000 USD</Salaryrange>
      <Skills>machine learning, ads marketplace optimization, large-scale data processing, distributed computing, data infrastructure, Spark, Kafka, Beam, Flink, TensorFlow, PyTorch, feature engineering, model training, inference, programming languages, statistical analysis, technical leadership, cross-functional settings, architectural decisions, influencing stakeholders</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a social news and discussion website with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>230000</Compensationmin>
      <Compensationmax>322000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7181821</Applyto>
      <Location>Remote - United States</Location>
      <Country>United States</Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>50f401de-7b1</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p>Who we are</p>
<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>As we continue to revolutionize how the world interacts, we&#39;re acquiring new skills and experiences that make work feel truly rewarding.</p>
<p>Your career at Twilio is in your hands.</p>
<p>We use Artificial Intelligence (AI) to help make our hiring process efficient. That said, every hiring decision is made by real Twilions!</p>
<p>Join the team as Twilio&#39;s next Staff Software Engineer</p>
<p>About the job</p>
<p>This position is needed to harden, optimize, and scale the real-time event-aggregation services that power our Observability Insights/Analytics platform.</p>
<p>We are seeking a Staff Software Engineer with deep Java expertise to own high-throughput stream-processing microservices (Kafka Streams / Flink) deployed on AWS EKS, tune ClickHouse for millisecond-latency writes, and embed observability that keeps incident minutes near zero.</p>
<p>You will design resilient, high-performance systems capable of processing &gt;250K events/sec with p99 latencies under 200ms, while championing DevSecOps practices and mentoring junior engineers.</p>
<p>Responsibilities</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Design, build, and maintain high-performance Java microservices using Spring Boot, capable of ingesting &gt;250K events/sec with p99 latencies under 200ms</li>
</ul>
<p>Qualifications</p>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply.</p>
<p>If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
<p>We are always looking for people who will bring something new to the table!</p>
<p>Required:</p>
<ul>
<li>8+ years of professional Java development experience with mastery of high-performance and low-latency design patterns</li>
<li>Production experience with Kafka Streams, Flink, or comparable stream-processing frameworks for building real-time data pipelines</li>
<li>Hands-on ClickHouse (or columnar database) performance tuning and SQL optimization expertise</li>
<li>Proven success operating AWS-hosted microservices at scale with solid Linux, Docker, and Kubernetes knowledge</li>
<li>Strong observability mindset including metrics, tracing, alerting, and post-incident analysis capabilities</li>
<li>Excellent communication skills and a bias toward collaborative problem-solving in cross-functional team environments</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience migrating single-region services to multi-region active-active topologies for high availability</li>
<li>Familiarity with data-privacy controls including PII tokenization and field-level encryption</li>
<li>Previous work in telecom, real-time analytics, or compliance-sensitive domains</li>
<li>Contributions to open-source Java or streaming projects demonstrating community engagement</li>
</ul>
<p>What We Offer</p>
<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more.</p>
<p>Offerings vary by location.</p>
<p>Twilio thinks big. Do you?</p>
<p>We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things.</p>
<p>That&#39;s why we seek out colleagues who embody our values, something we call Twilio Magic.</p>
<p>Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts.</p>
<p>So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>
<p>If this role isn&#39;t what you&#39;re looking for, please consider other open positions.</p>
<p>Twilio is proud to be an equal opportunity employer.</p>
<p>We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics.</p>
<p>We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.</p>
<p>Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act.</p>
<p>Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Kafka Streams, Flink, ClickHouse, AWS EKS, Spring Boot, Linux, Docker, Kubernetes, DevSecOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7234666</Applyto>
      <Location>Remote - Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>119c9488-4eb</externalid>
      <Title>Software Engineer, Infrastructure (8+ YOE)</Title>
      <Description><![CDATA[<p>We are looking for backend engineers to join our team to help improve critical product infrastructure, with a focus on building systems that have a great developer experience and will scale as we grow.</p>
<p>We currently have openings on:</p>
<ul>
<li>Base Infrastructure: We are looking for strong engineers with leadership experience to join the Serving Infrastructure organisation. You will primarily work on the Base Infrastructure team, whose key projects include building replication to support zero downtime failovers, optimising performance and memory usage, and vertical scaling.</li>
<li>Data Infrastructure: The Data Infrastructure team’s mission is to enable data-driven decision making at Airtable by providing reliable, self-service, high-performance analytics infrastructure. We use technologies like Apache Spark, Kafka, and Apache Flink to process vast quantities of data in our data warehouse.</li>
</ul>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Proactively identify and lead significant improvements to Airtable’s infrastructure, working across teams and product areas to maximise business and engineering impact.</li>
<li>Work on systems-level problems in a complex design space where scalability, efficiency, reliability, and security really matter.</li>
<li>Build clean, reusable, and maintainable abstractions that will be used by Airtable’s engineers for years to come.</li>
<li>Take full ownership of components of Airtable’s infrastructure, including responsibility for reliability, performance, efficiency, and observability of our production environment.</li>
</ul>
<p><strong>Who you are</strong></p>
<p>You have at least 8 years of industry experience, and are excited about learning new technologies and applying them in a fast-changing environment. You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure. You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand. You have a strong background in computer science with a degree in CS or a related field. You are currently based or willing to relocate to the San Francisco Bay Area or New York City for this role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$196,000-$339,900 USD</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, Apache Spark, Kafka, Apache Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. It has over 500,000 organisations, including 80% of the Fortune 100, relying on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400388002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US (Seattle, WA only)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>33521936-dee</externalid>
      <Title>Software Engineer, Infrastructure (2-8 YOE)</Title>
      <Description><![CDATA[<p>We are looking for backend engineers to join our team to help improve critical product infrastructure, with a focus on building systems that have a great developer experience and will scale as we grow.</p>
<p>Airtable&#39;s infrastructure is evolving to meet the needs of our fast-growing engineering org. We currently have openings on:</p>
<ul>
<li>Base Infrastructure: The Base Infrastructure team owns the system that powers the core of Airtable&#39;s product--serving Airtable bases. We are investing in the foundations of our homegrown in-memory database. Key projects include building replication to support zero downtime failovers, optimising performance and memory usage, and vertical scaling.</li>
<li>Compute: The compute pod builds and manages our Kubernetes-based platform that supports every service at Airtable, including all new AI services such as vector databases, AI evals store, and document extraction and understanding services. We have a lot of exciting foundational work on our roadmap, such as overhauling our network stack and service discovery to simplify service setup and strengthen security, region-level disaster recovery, bringing up the compute platform from 0-&gt;1 in a new region, and building custom Kubernetes operators for reliably managing some of our most critical workloads.</li>
<li>Data Infrastructure: The Data Infrastructure team&#39;s mission is to enable data-driven decision making at Airtable by providing reliable, self-service, high-performance analytics infrastructure. We use technologies like Apache Spark, Kafka, and Apache Flink to process vast quantities of data in our data warehouse. This infrastructure is used by Airtable&#39;s data engineers and analysts, as well as product developers building features powered by business data. The team is focused on scaling to petabyte volume, enabling sub-second streaming, tightening data governance, and delivering cost-efficient ML-ready datasets to power Airtable&#39;s native AI products with fresh, high-quality signals.</li>
<li>Developer Platform: The Developer Platform team sits at the intersection of all engineering at Airtable, focusing on building the internal tooling, frameworks, and CI/CD systems that power our product teams. We strive to streamline developer workflows, from build and test cycles to production deployments, and foster a best-in-class developer experience.</li>
<li>Storage: The Storage team&#39;s mission is to accelerate product development at Airtable by providing scalable, reliable, and easy-to-use storage abstractions. We use RDS MySQL, DynamoDB, Redis, and TiDB. We&#39;re looking for folks interested in distributed systems and databases who are excited to work on business-critical, petabyte-scale storage systems.</li>
<li>Traffic: We are looking for founding members of our Traffic Engineering team. We recently formed a Traffic Infrastructure team to ensure that traffic across Airtable&#39;s network and routing infrastructure is managed in a reliable, flexible, and secure manner. This will support improved performance in our secondary regions (EU and Australia) as well as other customer-driven projects.</li>
</ul>
<p>You will own all aspects of building, running, and improving these systems, from the underlying infrastructure all the way to the developer-facing code abstractions.</p>
<p>You will proactively identify and lead significant improvements to Airtable&#39;s infrastructure, working across teams and product areas to maximise business and engineering impact. You will work on systems-level problems in a complex design space where scalability, efficiency, reliability, and security really matter. You will build clean, reusable, and maintainable abstractions that will be used by Airtable&#39;s engineers for years to come. You will take full ownership of components of Airtable&#39;s infrastructure, including responsibility for reliability, performance, efficiency, and observability of our production environment.</p>
<p>You have 2-8 years of industry experience, and are excited about learning new technologies and applying them in a fast-changing environment. You have experience in areas such as databases, distributed systems, service-oriented architectures, and data infrastructure. You derive joy from refactoring and building clean abstractions in order to make complex systems fun to develop on and easy to understand. You have a strong background in computer science with a degree in CS or a related field. You are currently based or willing to relocate to the San Francisco Bay Area.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$187,000-$260,000 USD</Salaryrange>
      <Skills>databases, distributed systems, service-oriented architectures, data infrastructure, Kubernetes, Apache Spark, Kafka, Apache Flink, RDS MySQL, DynamoDB, Redis, TiDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. It has over 500,000 organisations, including 80% of the Fortune 100, relying on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://www.airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8400373002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US (Seattle, WA only)</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9b657c4e-8a1</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute. You will take ownership of infrastructure components that process trillions of events daily, driving the scalability, performance, and reliability of the systems that power product and ML workloads across the company.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>
<li>Scale and optimize multi-tenant Kafka infrastructure supporting real-time workloads.</li>
<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>
<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>
<li>Debug and optimize distributed systems, with a focus on reliability and performance under load.</li>
<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>
<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>
<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>
<li>Strong debugging, profiling, and performance optimization skills.</li>
<li>Track record of shipping and maintaining critical infrastructure.</li>
<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>distributed systems, stream processing, large-scale data platforms, Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, debugging, profiling, performance optimization</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.ai.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.x.ai/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803862007</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a98d4ace-d27</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>We are looking for a Senior Data Engineer to join our high-performing data enablement team. As a Senior Data Engineer, you will play a pivotal role within the Data team that powers Yuno and its payment platform, while helping co-design and implement an architecture that scales with the product and the company.</p>
<p>The stack is modern: StarRocks as our primary analytical layer, Flink for processing, DBT for transformation, Airflow for orchestration and various tooling for surfacing insights.</p>
<p>You&#39;ll be working on things that matter and are technically interesting:</p>
<ul>
<li><p>Design and build data pipelines for large volumes of payment data that are performant, reliable, and correct, not just fast.</p>
</li>
<li><p>Own end-to-end data flows: from ingestion and transformation through to the outputs that Finance, Product, and clients depend on.</p>
</li>
<li><p>Drive data quality across your domain with tooling.</p>
</li>
<li><p>Work cross-functionally with Product and Finance, and enable other Engineering teams via a &#39;consulting&#39;-style model.</p>
</li>
<li><p>Contribute to how the team works: code review culture, CI/CD standards, ADRs, how we handle incidents. We&#39;re building these practices now, and senior engineers shape them.</p>
</li>
<li><p>Help onboard and level up engineers around you; there&#39;s real opportunity to make an impact here.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Proven proactivity, technical acumen and the ability to lead initiatives and deliver projects., Experience in defining and evolving data engineering standards, architectural guidelines and governance, ideally within a regulated environment., Strong Python and SQL skills., Hands-on experience with Spark or Flink in production., DBT for data transformation., Airflow for orchestration., Experience with Apache Hudi., Experience with financial, transactional, or payment data.</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Yuno</Employername>
      <Employerlogo>https://logos.yubhub.co/yuno.com.png</Employerlogo>
      <Employerdescription>Yuno builds the payment infrastructure that allows all companies to participate in the global market, providing access to leading payment capabilities.</Employerdescription>
      <Employerwebsite>https://www.yuno.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/yuno/dc30ae7b-9c0f-426f-ae77-c58d9e4f6d6d</Applyto>
      <Location>Europe</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>9cdc0a4d-95f</externalid>
      <Title>Staff Software Engineer, Stream Compute</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Staff Software Engineer to join our Stream Compute team at Stripe. As a key member of this team, you will help define and deliver the next generation of Stripe&#39;s Flink-first stream compute infrastructure. This is a unique opportunity to work on some of the hardest problems in operating Flink in production, such as state management, exactly-once processing, performance isolation, and automated recovery.</p>
<p>Your primary responsibilities will include designing, building, and operating stream compute infrastructure with Apache Flink at the center, partnering with product and platform teams across Stripe to understand requirements, unblocking Flink adoption, and improving how stream processing infrastructure is used end-to-end. You will also define and implement operational best practices to improve resilience and reliability at scale, drive fleet-level automation and standardization, and lead initiatives that raise the bar on Flink availability and state durability.</p>
<p>To succeed in this role, you should have experience as a technical lead for team(s) working on distributed systems, including scaling them in fast-moving environments. You should also have hands-on experience with big data technologies such as Flink, Spark, Kafka, Pulsar, or Pinot, and experience developing, maintaining, and debugging distributed systems built with open source tools. Additionally, you should have strong software engineering skills and a passion for Big Data Distributed Systems, as well as the ability to write high-quality code in programming languages like Go, Java, Scala, etc.</p>
<p>If you&#39;re interested in joining our team and contributing to the development of our stream compute infrastructure, please don&#39;t hesitate to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Apache Flink, Kafka, Temporal, AWS services, Distributed systems, Big data technologies, Software engineering, Go, Java, Scala, Streaming infrastructure, Real-time processing frameworks, Control planes, Open source contributions</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7767063</Applyto>
      <Location>San Francisco, Seattle, New York, Toronto</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>3367a9d1-967</externalid>
      <Title>Engineering Manager, Data Engineering Solutions</Title>
      <Description><![CDATA[<p>We&#39;re looking for a manager to drive the Data Engineering Solutions Team in solving high-impact, cutting-edge data problems. The ideal candidate will be someone who has built data pipelines for large-scale volumes, is deeply knowledgeable of Data Engineering tools including Airflow/Spark/Kafka/Flink, is empathetic, excels at building strong relationships, and collaborates effectively with other Stripe teams to understand their use cases and unlock new capabilities.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Deliver cutting-edge data pipelines that scale to users&#39; needs, focusing on reliability and efficiency.</li>
<li>Lead and manage a team of ambitious, talented engineers, providing mentorship, guidance, and support to ensure their success.</li>
<li>Drive the execution of key reporting initiatives for Stripe, overseeing the entire development lifecycle from planning to delivery while maintaining high standards of quality and timely completion.</li>
<li>Collaborate with product managers and key leaders across the company to create a shared roadmap and drive adoption of canonical datasets and data warehouses, use golden paths, and ensure Stripes are using trustworthy data.</li>
<li>Understand user needs and pain points to prioritize engineering work and deliver high-quality solutions that meet user needs.</li>
<li>Provide hands-on technical leadership in architecture/design, vision/direction/requirements setting, and incident response processes for your reports.</li>
<li>Foster a collaborative and inclusive work environment, promoting innovation, knowledge sharing, and continuous improvement within the team.</li>
<li>Partner with our recruiting team to attract and hire top talent, and define the overall hiring strategies for your team.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Airflow, Spark, Kafka, Flink, Data Engineering, Team Management, Leadership, Communication, Problem-Solving, Iceberg, Change Data Capture, Hive Metastore, Pinot, Trino, AWS Cloud</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7496118</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-03-31</Postedate>
    </job>
    <job>
      <externalid>373a5272-a4e</externalid>
      <Title>Software Engineer I</Title>
      <Description><![CDATA[<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>
<p>We are EA</p>
<p>Electronic Arts is more than you’ve ever realized. We’re more than a company, or a headline, or even a clever catchphrase – we’re a vibrant community of over 9,800 artists, storytellers, technologists and innovators working toward a shared vision: to inspire and unite through play.</p>
<p>This is an especially great time for the video game industry, as we’re currently going through an exciting digital transformation. The global gaming audience has also never been bigger, with industry revenue projected to reach $295.6 billion by 2026.</p>
<p><strong>The Challenge Ahead:</strong></p>
<p>EA’s Digital Platform (EADP) organization is responsible for driving critical technology decisions and investments for EA on a global basis, across all divisions and studio teams. Technology and engineering leadership at EA is critical to making the industry’s best games and services and the EADP team is leading the way to providing cross-platform infrastructure that will keep our consumers connected with our games anytime, anywhere with anyone.</p>
<p><strong>Software Engineer – I, Player &amp; Developer Experience (PDE) - EA Digital Platform (EADP)</strong></p>
<ul>
<li>Provide technical leadership as part of the technology team that designs and develops the application platforms and tools that provide the best player and developer experience.</li>
<li>Own the core system quality attributes relating to product architecture, such as performance, scalability, security, availability, and reliability.</li>
<li>Collaborate with Product Management and game teams to understand the requirements that will enhance the capabilities of the system.</li>
<li>Drive brainstorming on new products, tools, and services required by EADP internal teams &amp; game teams.</li>
<li>Evaluate emerging technologies and software products to determine the feasibility and desirability of incorporating their capabilities into the company&#39;s products.</li>
<li>Work as an individual contributor (IC) while mentoring junior engineers and helping them become experts in their technical areas.</li>
<li>Be hands-on in coding, testing, and deployment in large-scale environments.</li>
<li>Bring hands-on experience building world-class applications, especially distributed systems.</li>
</ul>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Design and develop application features considering functionality, performance, extensibility, scalability, reliability, consistency, observability, usability, testability, completeness, maintainability, and security</li>
<li>Review and validate feature requirements. Monitor and assess the performance budgets for the features allocated</li>
<li>Write a good suite of unit tests. Focus on preventing the introduction of defects during the software development process rather than finding defects after testing begins</li>
<li>Analyze and troubleshoot issues</li>
<li>Apply best software development practices</li>
<li>Work in a variety of technologies such as Java, Python, Flink, Kafka, Redis, gRPC, Spring, Node.js, Couchbase, MySQL, Postgres, Prometheus, Kubernetes, Istio/Envoy, Docker, AWS, etc.</li>
<li>Collaborate with global teams to track and resolve issues</li>
<li>Provide prompt, high-quality customer support on queries and issues</li>
<li>Communicate updates to partners and stakeholders</li>
<li>Communicate your ideas effectively to others within your team</li>
<li>Write good user documents and design documents</li>
<li>Participate actively in sprint planning and task estimates</li>
<li>Support your team&#39;s growth through active participation in code and design reviews</li>
<li>Learn continuously to solve new challenges efficiently and improve system performance and robustness</li>
<li>Harmonize discordant views, find the best way forward, and convince your team. Demonstrate resilience and navigate difficult situations with composure and tact</li>
<li>Deliver high-quality software and products with a continuous integration, validation, and deployment methodology</li>
<li>Make extensive use of open-source products and tools, and develop systems for easy code maintenance and shorter delivery cycles</li>
</ul>
<p><strong>The next great EA Software Engineer - I also needs:</strong></p>
<ul>
<li>Bachelor’s degree in Computer Science or higher</li>
<li>1-3 years of relevant experience</li>
<li>Experience building applications in a fast-paced agile environment</li>
<li>Knowledge of building high-performance, highly available, reliable, distributed systems software</li>
<li>A strong background in data structures, algorithms, design patterns, analysis of algorithm complexity, and efficient implementation of complex algorithms</li>
<li>Experience with software development tools such as source control systems, automated build systems, software validation systems, test harnesses, and continuous integration &amp; deployment</li>
<li>Development experience with cloud platforms such as Amazon Web Services, Azure, etc. is a definite plus</li>
<li>Experience with big data systems is an advantage</li>
<li>A product development background</li>
<li>Ability to work in an environment with a high degree of ambiguity (previous start-up experience could be helpful)</li>
<li>Excellent communication skills (oral and written), able to communicate effectively with all levels of management as well as a geographically and culturally diverse technical organization</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Python, Flink, Kafka, Redis, Grpc, Spring, Node.js, Couchbase, Mysql, Postgres, Prometheus, Kubernetes, Istio/Envoy, Docker, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of over 300 million registered players. The company was founded in 1982 and has since become a major player in the gaming industry.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-I/213038</Applyto>
      <Location>Hyderabad</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>0841fcf4-9ab</externalid>
      <Title>Data Engineer SE - II</Title>
      <Description><![CDATA[<p>We are on a mission to rid the world of bad customer service by “mobilizing” the way help is delivered. Today’s consumers want an always-available customer service experience that leaves them feeling valued and respected.</p>
<p>Helpshift helps B2B brands deliver this modern customer service experience through a mobile-first approach. We have changed how conversations take place, moving the conversation away from a slow, outdated email and desktop experience to an in-app chat experience that allows users to interact with brands in their own time.</p>
<p>Through our market-leading AI-powered chatbots and automation, we help brands deliver instant and rapid resolutions. Because agents play a key role in delivering help, our platform gives agents superpowers with automation and AI that simply works.</p>
<p><strong>About the Team</strong></p>
<p>Consumers care first and foremost about having their time valued by brands. Brands need insights into their customer service operation to serve their consumers effectively. Such insights and analytics are delivered through various data products like in-app analytics dashboards and data-sharing integrations.</p>
<p>The data platform team is responsible for designing, building, and maintaining the data infrastructure that enables such data and analytics products at scale. We build and manage data pipelines, databases, and other data structures to ensure that the data is reliable, accurate, and easily accessible.</p>
<p>We also enable internal stakeholders with business intelligence and support machine learning teams with data ops. This team manages a platform that handles 2 million events per minute and processes 1+ terabytes of data daily.</p>
<p><strong>About the Role</strong></p>
<ul>
<li>Building maintainable data pipelines, both for data ingestion and for operational analytics, over data collected from 2 billion devices and 900M monthly active users</li>
<li>Building customer-facing analytics products that deliver actionable insights and data and make anomalies easy to detect</li>
<li>Collaborating with data stakeholders to understand their data needs and taking part in the analysis process</li>
<li>Writing design specifications and test, deployment, and scaling plans for the data pipelines</li>
<li>Mentoring people in the team &amp; organization</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience in building and running data pipelines that scale for TBs of data</li>
<li>Proficiency in a high-level object-oriented programming language (Python or Java) is a must</li>
<li>Experience with cloud data platforms such as Snowflake and AWS (EMR/Athena) is a must</li>
<li>Experience building modern data lakehouse architectures using Snowflake and columnar formats like Apache Iceberg/Hudi, Parquet, etc.</li>
<li>Proficiency in data modeling, SQL query profiling, and data warehousing is a must</li>
<li>Experience with distributed data processing engines like Apache Spark, Apache Flink, Dataflow/Apache Beam, etc.</li>
<li>Knowledge of workflow orchestrators like Airflow, Dagster, etc. is a plus</li>
<li>Data visualization skills are a plus (PowerBI, Metabase, Tableau, Hex, Sigma, etc)</li>
<li>Excellent verbal and written communication skills</li>
<li>Bachelor’s Degree in Computer Science (or equivalent)</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Hybrid setup</li>
<li>Worker&#39;s insurance</li>
<li>Paid Time Off</li>
<li>Other employee benefits to be discussed by our Talent Acquisition team in India.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Snowflake, AWS, EMR/Athena, Apache Iceberg/Hudi, Parquet, Apache Spark, Apache Flink, Dataflow/Apache Beam, Airflow, Data modeling, SQL query profiling, data warehousing, PowerBI, Metabase, Tableau, Hex, Sigma</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Helpshift</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Helpshift is a company that provides a mobile-first customer service experience for B2B brands. It has over 900 million active monthly consumers and is used by hundreds of leading brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D451DB2325</Applyto>
      <Location>Pune, Maharashtra, India</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>df9a4b26-709</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>The Ads Data Platform Team, part of Microsoft AI, is hiring a Senior Software Engineer. This role is available in Redmond, WA. Our team powers the backbone of Microsoft’s global ads marketplace—gathering, storing, and enriching over half a trillion ad-serving events every day. We build data platforms that fuel business analytics, machine learning models, and real-time reporting at massive scale.</p>
<p>As part of our team, you’ll:</p>
<ul>
<li>Design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines</li>
<li>Build data applications that directly impact Microsoft Ads’ double-digit annual growth</li>
<li>Work on cutting-edge technologies in distributed systems, machine learning, and big data</li>
</ul>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Work with BingAds stakeholders to determine requirements for new features that grow the Ads business</li>
<li>Create system designs for feature requirements</li>
<li>Ensure the system meets security and compliance requirements and expectations</li>
<li>Create a clear, well-articulated plan for testing and assuring quality solutions</li>
<li>Implement features with high efficiency, extensibility, diagnosability, reliability, and maintainability, with few defects</li>
<li>Review product code to ensure it meets the team’s and Microsoft’s quality standards, is reliable and accurate, and is appropriate for the scale of the product feature</li>
<li>Maintain operations of the live service as issues arise, on a rotational, on-call basis</li>
<li>Identify solutions and mitigations for simple and complex issues, escalating as necessary</li>
<li>Act as a Designated Responsible Individual (DRI), working on call to monitor the system/product feature/service for degradation, downtime, or interruptions, and respond within the Service Level Agreement (SLA) timeframe</li>
<li>Escalate issues to the appropriate owners</li>
<li>Build knowledge, share new ideas, and surface pain points and engineering tool gaps to improve the developer tools used to create, debug, and maintain code for product features</li>
<li>Contribute to the development of automation within production and deployment of product features</li>
<li>Profile and analyze distributed system performance and capacity bottlenecks</li>
<li>Propose and implement solutions that improve system latency and capacity to meet BingAds online serving requirements</li>
</ul>
<p><strong>Qualifications</strong></p>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field</li>
<li>4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python</li>
<li>Ability to meet Microsoft, customer and/or government security screening requirements</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field</li>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python</li>
<li>Experience in Azure</li>
<li>Experience in Machine learning and online system design, implementation and qualification</li>
<li>2+ years’ experience in Distributed Systems and Big Data Technologies such as Spark, Hadoop, HDFS, Kafka, Flink, Scala</li>
</ul>
<p>#MicrosoftAI #BingAds Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 – $258,000 per year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Azure, Machine learning, Distributed Systems, Big Data Technologies, Spark, Hadoop, HDFS, Kafka, Flink, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a part of Microsoft, a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-92/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>672557eb-bee</externalid>
      <Title>Engineering Manager, Data Platform</Title>
      <Description><![CDATA[<p><strong>Engineering Manager, Data Platform</strong></p>
<p>We&#39;re looking for an experienced Engineering Manager to lead our Data Interfaces team, responsible for enabling users and systems to leverage our core data platform. The team owns the collection of operational telemetry data, the UI for interacting with the Data Platform, as well as APIs and plugins for querying data out of the Data Platform for visualization, alerting, and integration into internal services.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Lead, mentor, and grow a team of senior and principal engineers</li>
<li>Foster an inclusive, collaborative, and feedback-driven engineering culture</li>
<li>Drive continuous improvement in the team&#39;s processes, delivery, and impact</li>
<li>Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team&#39;s vision, strategy, and roadmap</li>
<li>Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value</li>
<li>Ensure high standards in system architecture, code quality, and operational excellence</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments</li>
<li>Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity</li>
<li>Deep experience in architecting, building, and operating scalable, distributed data platforms</li>
<li>Strong technical leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems</li>
<li>Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day</li>
<li>Hands-on experience with distributed event streaming systems like Apache Kafka</li>
<li>Familiarity with OLAP databases such as Apache Pinot or ClickHouse</li>
<li>Proficient in modern data lake and warehouse tools such as S3, Databricks, or Snowflake</li>
<li>Strong foundation in the .NET ecosystem, container orchestration with Kubernetes, and cloud platforms, especially AWS</li>
<li>Experience with distributed data processing engines like Apache Flink or Apache Spark is nice to have</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Epic Games offers a comprehensive benefits package, including:</p>
<ul>
<li>100% coverage of medical, dental, and vision premiums for you and your dependents</li>
<li>Long-term disability and life insurance</li>
<li>401k with competitive match</li>
<li>Unlimited PTO and sick time</li>
<li>Paid sabbatical after 7 years of employment</li>
<li>Robust mental well-being program through Modern Health</li>
<li>Company-wide paid breaks and events throughout the year</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>engineering management, data platform, distributed event streaming systems, OLAP databases, modern data lake and warehouse tools, .NET ecosystem, container orchestration, cloud platforms, Apache Kafka, Apache Pinot, ClickHouse, S3, Databricks, Snowflake, Kubernetes, AWS, Apache Flink, Apache Spark</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Epic Games</Employername>
      <Employerlogo>https://logos.yubhub.co/epicgames.com.png</Employerlogo>
      <Employerdescription>Epic Games is a leading game development company that creates award-winning games and engine technology.</Employerdescription>
      <Employerwebsite>https://www.epicgames.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://www.epicgames.com/en-US/careers/jobs/5818031004</Applyto>
      <Location>Cary</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>4b563c21-dd0</externalid>
      <Title>Software Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p><strong>Software Engineer, Data Infrastructure</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$185K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the Team</strong></p>
<p>Data Platform at OpenAI owns the foundational data stack powering critical product, research, and analytics workflows. We operate some of the largest Spark compute fleets in production; design and build data lakes and metadata systems on Iceberg and Delta with a vision toward exabyte-scale architecture; run high-throughput streaming platforms on Kafka and Flink; provide orchestration with Airflow; and support ML feature engineering tooling such as Chronon. Our mission is to deliver reliable, secure, and efficient data access at scale and accelerate intelligent, AI-assisted data workflows.</p>
<p><strong>About the Role</strong></p>
<p>This role focuses on building and operating data infrastructure that supports massive compute fleets and storage systems, designed for high performance and scalability. You’ll help design, build, and operate the next generation of data infrastructure at OpenAI. You will scale and harden big data compute and storage platforms, build and support high-throughput streaming systems, build and operate low latency data ingestions, enable secure and governed data access for ML and analytics, and design for reliability and performance at extreme scale.</p>
<p>You will take full lifecycle ownership: architecture, implementation, production operations, and on-call participation.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure while ensuring scalability, reliability, and security</li>
<li>Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient</li>
<li>Accelerate company productivity by empowering your fellow engineers &amp; teammates with excellent data tooling and systems</li>
<li>Collaborate with product, research, and analytics teams to build the technical foundations and capabilities that unlock new features and experiences</li>
<li>Own the reliability of the systems you build, including participation in an on-call rotation for critical incidents</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>4+ years in data infrastructure engineering, OR</li>
<li>4+ years in infrastructure engineering with a strong interest in data</li>
<li>Take pride in building and operating scalable, reliable, secure systems</li>
<li>Are comfortable with ambiguity and rapid change</li>
<li>Have an intrinsic desire to learn and fill in missing skills, and an equally strong talent for sharing learnings clearly and concisely with others</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of human diversity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$185K – $385K • Offers Equity</Salaryrange>
      <Skills>data infrastructure engineering, infrastructure engineering, Spark, Kafka, Flink, Airflow, Chronon, Iceberg, Delta, Terraform, distributed systems, machine learning, data science, cloud computing, containerization, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/f763c6b3-5167-4a67-b691-4c3fa2c44156</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>c873a489-0dc</externalid>
      <Title>Data Engineer, Analytics</Title>
      <Description><![CDATA[<p><strong>Data Engineer, Analytics</strong></p>
<p><strong>Location</strong></p>
<p>San Francisco</p>
<p><strong>Employment Type</strong></p>
<p>Full time</p>
<p><strong>Department</strong></p>
<p>Applied AI</p>
<p><strong>Compensation</strong></p>
<ul>
<li>$230K – $385K • Offers Equity</li>
</ul>
<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>
<p><strong>Benefits</strong></p>
<ul>
<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>
<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>
<li>401(k) retirement plan with employer match</li>
<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>
<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>
<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>
<li>Mental health and wellness support</li>
<li>Employer-paid basic life and disability coverage</li>
<li>Annual learning and development stipend to fuel your professional growth</li>
<li>Daily meals in our offices, and meal delivery credits as eligible</li>
<li>Relocation support for eligible employees</li>
<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided</li>
</ul>
<p><strong>About the team</strong></p>
<p>The Applied team works across research, engineering, product, and design to bring OpenAI’s technology to consumers and businesses.</p>
<p>We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. Safety is more important to us than unfettered growth.</p>
<p><strong>About the role</strong></p>
<p>We&#39;re seeking a Data Engineer to take the lead in building our data pipelines and core tables for OpenAI. These pipelines are crucial for powering analyses and safety systems, guiding business decisions, driving product growth, and preventing bad actors. If you&#39;re passionate about working with data and are eager to create solutions with significant impact, we&#39;d love to hear from you. This role also provides the opportunity to collaborate closely with the researchers behind ChatGPT and help them train new models to deliver to users. As we continue our rapid growth, we value data-driven insights, and your contributions will play a pivotal role in our trajectory. Join us in shaping the future of OpenAI!</p>
<p><strong>In this role, you will:</strong></p>
<ul>
<li>Design, build, and manage our data pipelines, ensuring all user event data is seamlessly integrated into our data warehouse.</li>
<li>Develop canonical datasets to track key product metrics including user growth, engagement, and revenue.</li>
<li>Work collaboratively with various teams, including Infrastructure, Data Science, Product, Marketing, Finance, and Research, to understand their data needs and provide solutions.</li>
<li>Implement robust and fault-tolerant systems for data ingestion and processing.</li>
<li>Participate in data architecture and engineering decisions, bringing your strong experience and knowledge to bear.</li>
<li>Ensure the security, integrity, and compliance of data according to industry and company standards.</li>
</ul>
<p><strong>You might thrive in this role if you:</strong></p>
<ul>
<li>Have 3+ years of experience as a data engineer and 8+ years of software engineering experience overall (including data engineering).</li>
<li>Are proficient in at least one programming language commonly used within data engineering, such as Python, Scala, or Java.</li>
<li>Have experience with distributed processing technologies and frameworks, such as Hadoop and Flink, and distributed storage systems (e.g., HDFS, S3).</li>
<li>Have expertise with ETL schedulers such as Airflow, Dagster, Prefect, or similar frameworks.</li>
<li>Have a solid understanding of Spark and the ability to write, debug, and optimize Spark code.</li>
</ul>
<p><strong>About OpenAI</strong></p>
<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$230K – $385K • Offers Equity</Salaryrange>
      <Skills>Python, Scala, Java, Hadoop, Flink, HDFS, S3, Airflow, Dagster, Prefect, Spark</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products.</Employerdescription>
      <Employerwebsite>https://jobs.ashbyhq.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/fc5bbc77-a30c-4e7a-9acc-8a2e748545b4</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>26c57034-3a3</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI is looking for a talented Senior Software Engineer at its Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising haptic entertainment technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the cinema and simulation markets.</p>
<p><strong>About the Role</strong></p>
<p>As a Senior Software Engineer, you&#39;ll design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines. You&#39;ll build data applications that directly impact Microsoft Ads&#39; double-digit annual growth. You&#39;ll work on cutting-edge technologies in distributed systems, machine learning, and big data.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Conduct in-depth market research across cinema and simulation sectors, identifying emerging trends, competitive threats, and partnership opportunities that directly inform the company&#39;s quarterly strategic planning sessions</li>
<li>Work with BingAds stakeholders to determine requirements for new features to drive up Ads business</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience in Azure</li>
<li>Experience in Machine learning and online system design, implementation and qualification</li>
<li>2+ years’ experience in Distributed Systems and Big Data Technologies such as Spark, Hadoop, HDFS, Kafka, Flink, Scala</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong problem-solving skills</li>
<li>Excellent communication and collaboration skills</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive benefits package</li>
<li>Opportunities for professional growth and development</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Azure, Machine learning, Distributed Systems, Big Data Technologies, Spark, Hadoop, HDFS, Kafka, Flink, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that powers the backbone of Microsoft&apos;s global ads marketplace, gathering, storing, and enriching over half a trillion ad-serving events every day.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-79/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>f5ac6e0f-4b7</externalid>
      <Title>Principal Software Engineer (Data)</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer (Data) at their Beijing office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising advertising technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the advertising measurement ecosystem.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer (Data), you will provide critical technical leadership across conversion and attribution, driving the continuous expansion of conversion signal coverage, the evolution of measurement logic, and systematic improvements in system reliability. Operating under complex business constraints and within a rapidly evolving industry landscape, the role requires balancing measurement accuracy, platform stability, and long-term extensibility. In close collaboration with product, modeling, and engineering partners, this position delivers stable, scalable conversion and attribution capabilities that create sustained business value.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Provide critical technical leadership across conversion and attribution, driving the continuous expansion of conversion signal coverage, the evolution of measurement logic, and systematic improvements in system reliability.</li>
<li>Balance measurement accuracy, platform stability, and long-term extensibility under complex business constraints and a rapidly evolving industry landscape.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Solid experience shipping high-performance software in C#, Java, or an equivalent language.</li>
<li>Understanding of distributed systems and data-parallel computing is preferred.</li>
<li>Data processing or analytics experience with Spark, Flink, Kafka, or Azure Data Lake is a plus.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Quick learner with solid problem-solving and debugging skills.</li>
<li>Accountable and proactive.</li>
<li>Good communication skills; fluent in English (both oral and written).</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>
<li>This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Spark, Flink, Kafka, Azure Data Lake, Distributed system, Data parallel computing, Data processing, Analytics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft&apos;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineerdata/</Applyto>
      <Location>Beijing</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>68e759f6-179</externalid>
      <Title>Principal Software Engineer - Data, Personalization</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer to lead the design and development of distributed data infrastructure, APIs and personalization pipelines that drive Copilot&#39;s intelligence.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will work across Microsoft AI and Copilot teams to build scalable, low-latency systems for ingesting, processing, and serving personalized signals. You will design data models and APIs that enable Copilot to reason about user context, preferences, and history. You will build real-time and batch personalization engines that adapt Copilot&#39;s behavior. You will collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks, getting your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in backend technologies.</li>
<li>Familiarity with applied AI and its unique challenges.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Benefits and other compensation.</li>
<li>Opportunity to work on cutting-edge AI projects.</li>
<li>Collaborative and inclusive work environment.</li>
<li>Professional development opportunities.</li>
<li>Flexible work arrangements.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, backend technologies, applied AI, Kafka, Spark, Flink, large scale data systems, AI platforms, Machine Learning frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-4/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>97c676ce-653</externalid>
      <Title>Principal Software Engineer - Data, Personalization</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer to lead the design and development of distributed data infrastructure, APIs and personalization pipelines that drive Copilot&#39;s intelligence.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will work across Microsoft AI and Copilot teams to build scalable, low-latency systems for ingesting, processing, and serving personalized signals. You will design data models and APIs that enable Copilot to reason about user context, preferences, and history. You will build real-time and batch personalization engines that adapt Copilot&#39;s behavior. You will collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path to get things done despite roadblocks, getting your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in backend technologies.</li>
<li>Familiarity with applied AI and its unique challenges.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Methodical approach to problem-solving.</li>
<li>Ability to identify, analyze, and resolve complex technical issues.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Benefits and other compensation.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, backend technologies, applied AI, Kafka, Spark, Flink, large scale data systems, AI platforms, Machine Learning frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>d20b543c-c34</externalid>
      <Title>Principal Software Engineer (Data)</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer (Data) at their Suzhou office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising advertising technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the advertising measurement ecosystem.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer (Data), you will provide critical technical leadership across conversion and attribution, driving the continuous expansion of conversion signal coverage, the evolution of measurement logic, and systematic improvements in system reliability. Operating under complex business constraints and within a rapidly evolving industry landscape, the role requires balancing measurement accuracy, platform stability, and long-term extensibility. In close collaboration with product, modeling, and engineering partners, this position delivers stable, scalable conversion and attribution capabilities that create sustained business value.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Provide critical technical leadership across conversion and attribution, driving the continuous expansion of conversion signal coverage, the evolution of measurement logic, and systematic improvements in system reliability.</li>
<li>Balance measurement accuracy, platform stability, and long-term extensibility under complex business constraints and a rapidly evolving industry landscape.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Solid experience shipping high-performance software in C#, Java, or an equivalent language.</li>
<li>Understanding of distributed systems and data-parallel computing is preferred.</li>
<li>Data processing or analytics experience with Spark, Flink, Kafka, or Azure Data Lake is a plus.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Quick learner with solid problem-solving and debugging skills.</li>
<li>Accountable and proactive.</li>
<li>Good communication skills; fluent in English (both oral and written).</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>
<li>This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Spark, Flink, Kafka, Azure Data Lake, Distributed system, Data parallel computing, Data processing, Analytics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft&apos;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineerdata-2/</Applyto>
      <Location>Suzhou</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>3533ba01-2e9</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer at their Mountain View office. This role sits at the forefront of transforming digital advertising through intelligent automation and large-scale optimization. You&#39;ll help shape the next generation of their agentic auto-bidding platform — one that learns, adapts, and optimizes autonomously.</p>
<p><strong>About the Role</strong></p>
<p>Our team is at the forefront of transforming digital advertising through intelligent automation and large-scale optimization. We design and build the auto-bidding platform that powers real-time ad auctions across Microsoft’s marketplaces—leveraging cutting-edge AI, machine learning, and large-scale distributed systems to bid on behalf of millions of advertisers. Our systems process billions of auction events daily, optimizing bids in milliseconds to maximize performance and return on ad spend. Engineers and scientists work hand-in-hand, blending algorithmic innovation, reinforcement learning, and large-scale data pipelines to create the intelligence that drives Microsoft Advertising’s success.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Collaborate with data scientists, ML engineers, and product teams to define requirements for agentic AI-driven bidding capabilities that observe, reason, and adapt autonomously.</li>
<li>Architect and implement the next-generation agentic bidding platform, enabling AI agents to monitor marketplace signals, interpret advertiser and user behavior, and continuously optimize bidding strategies.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience building real world applications using AI techniques.</li>
<li>5+ years of hands-on experience in machine learning operations (MLOps), including pipeline automation, monitoring, and lifecycle management</li>
<li>3+ years of hands-on experience with large-scale streaming platforms such as Apache Spark or Flink</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>
<li>Strong communication and collaboration skills.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range of USD $139,900 – $274,800 per year.</li>
<li>Benefits and other compensation.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, AI, Machine Learning, MLOps, Apache Spark, Flink, Real world applications using AI techniques, Large-scale streaming platforms, Strong communication and collaboration skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that empowers every person and every organization on the planet to achieve more. They come together with a growth mindset, innovate to empower others, and collaborate to realize their shared goals.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-21/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>c5a79e86-69f</externalid>
      <Title>Principal Software Engineer - AI Ads</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer - AI Ads to shape the future of online advertising in Mountain View, CA or Redmond, WA. You&#39;ll lead the design and development of large-scale shopping ads infrastructure that powers billions of products worldwide.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer - AI Ads, you will be responsible for leading the design, development, and optimization of large-scale shopping ads infrastructure and algorithms. You will build and maintain the universal product graph spanning billions of products across multiple languages. You will develop scalable systems for data ingestion, storage, retrieval, and real-time serving at global scale. You will apply machine learning (ML), natural language processing (NLP), and deep learning (DL) models to improve ad relevance, personalization, and selection.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Lead the design, development, and optimization of large-scale shopping ads infrastructure and algorithms.</li>
<li>Build and maintain the universal product graph spanning billions of products across multiple languages.</li>
<li>Develop scalable systems for data ingestion, storage, retrieval, and real-time serving at global scale.</li>
<li>Apply machine learning (ML), natural language processing (NLP), and deep learning (DL) models to improve ad relevance, personalization, and selection.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Deep learning frameworks (e.g., PyTorch, TensorFlow), LLMs/SLMs, and AI Agents.</li>
<li>Cloud services, large-scale big data platforms, and streaming/real-time frameworks (e.g., Kafka, Flink, Spark Streaming), and AI infrastructure development.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to meet Microsoft, customer and/or government security screening requirements.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunity to work on cutting-edge AI innovation at massive scale.</li>
<li>Collaborative and dynamic work environment.</li>
<li>Professional development opportunities.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, PyTorch, TensorFlow, LLMs/SLMs, AI Agents, Kafka, Flink, Spark Streaming, Cloud services, large-scale big data platforms, streaming/real-time frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence, machine learning, and data analytics. They are known for their innovative solutions and commitment to empowering every person and organization on the planet to achieve more.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-ai-ads/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>e3ce7035-a47</externalid>
      <Title>Software Engineer II</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Software Engineer II to join their Ads Data Platform Team. This role is available in Redmond, WA and is a great opportunity for those who are passionate about solving complex problems and driving innovation.</p>
<p><strong>About the Role</strong></p>
<p>As a Software Engineer II, you will design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines. You will build data applications that directly impact Microsoft Ads&#39; double-digit annual growth. You will work on cutting-edge technologies in distributed systems, machine learning, and big data.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines.</li>
<li>Build data applications that directly impact Microsoft Ads&#39; double-digit annual growth.</li>
<li>Work on cutting-edge technologies in distributed systems, machine learning, and big data.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>2+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience in Azure.</li>
<li>Experience in machine learning and online system design, implementation and qualification.</li>
<li>2+ years&#39; experience in Distributed Systems and Big Data Technologies such as Spark, Hadoop, HDFS, Kafka, Flink, Scala.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong problem-solving skills and ability to work in a fast-paced environment.</li>
<li>Excellent communication and collaboration skills.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range of $100,600 - $199,000 per year.</li>
<li>Comprehensive benefits package including health, dental, and vision insurance.</li>
<li>401(k) matching program.</li>
<li>Paid time off and holidays.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$100,600 - $199,000 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Azure, machine learning, Distributed Systems, Big Data Technologies, Spark, Hadoop, HDFS, Kafka, Flink, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that specializes in artificial intelligence and machine learning. They are known for their innovative products and services that power the backbone of Microsoft&apos;s global ads marketplace. With a strong focus on innovation and customer satisfaction, Microsoft AI is a great place to work for those who are passionate about technology and making a difference.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/software-engineer-ii-7/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>6c661ae3-4dc</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft are looking for a talented Principal Software Engineer at their Mountain View office. This role sits at the forefront of transforming digital advertising through intelligent automation and large-scale optimization. You&#39;ll help shape the next generation of our agentic auto-bidding platform — one that learns, adapts, and optimizes autonomously.</p>
<p><strong>About the Role</strong></p>
<p>Our team is at the forefront of transforming digital advertising through intelligent automation and large-scale optimization. We design and build the auto-bidding platform that powers real-time ad auctions across Microsoft’s marketplaces—leveraging cutting-edge AI, machine learning, and large-scale distributed systems to bid on behalf of millions of advertisers. Engineers and scientists work hand-in-hand, blending algorithmic innovation, reinforcement learning, and large-scale data pipelines to create the intelligence that drives Microsoft Advertising’s success.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Collaborate with data scientists, ML engineers, and product teams to define requirements for agentic AI-driven bidding capabilities that observe, reason, and adapt autonomously.</li>
<li>Architect and implement the next-generation agentic bidding platform, enabling AI agents to monitor marketplace signals, interpret advertiser and user behavior, and continuously optimize bidding strategies.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years of hands-on experience with large-scale streaming platforms such as Apache Spark or Flink.</li>
<li>4+ years of experience as a technical lead, including mentoring and guiding engineers.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>Software Engineering IC6 – The typical base pay range for this role across the U.S. is USD $163,000 – $296,400 per year.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Apache Spark, Flink, Machine learning, Reinforcement learning, Data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. They are a leader in the technology industry and have a strong presence in the global market. Microsoft is known for its innovative products and services, such as Windows, Office, and Azure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-27/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>930ee536-a54</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft are looking for a talented Principal Software Engineer at their Redmond office. This role sits at the heart of transforming digital advertising through intelligent automation and large-scale optimization. You&#39;ll work directly with leadership to shape the company&#39;s direction in the digital advertising market.</p>
<p><strong>About the Role</strong></p>
<p>Our team is at the forefront of transforming digital advertising through intelligent automation and large-scale optimization. We design and build the auto-bidding platform that powers real-time ad auctions across Microsoft&#39;s marketplaces—leveraging cutting-edge AI, machine learning, and large-scale distributed systems to bid on behalf of millions of advertisers. Engineers and scientists work hand-in-hand, blending algorithmic innovation, reinforcement learning, and large-scale data pipelines to create the intelligence that drives Microsoft Advertising&#39;s success.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Collaborate with data scientists, ML engineers, and product teams to define requirements for agentic AI-driven bidding capabilities that observe, reason, and adapt autonomously.</li>
<li>Architect and implement the next-generation agentic bidding platform, enabling AI agents to monitor marketplace signals, interpret advertiser and user behavior, and continuously optimize bidding strategies.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>4+ years of hands-on experience with large-scale streaming platforms such as Apache Spark or Flink.</li>
<li>4+ years of experience as a technical lead, including mentoring and guiding engineers.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</li>
<li>Certain roles may be eligible for benefits and other compensation.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Apache Spark, Flink, Machine learning, Reinforcement learning, Data pipelines</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. They are a leader in the technology industry and have a strong presence in the global market. Microsoft is known for its innovative products and services, such as Windows, Office, and Azure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-26/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>8c9ae282-129</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft are looking for a talented Member of Technical Staff - Data Platform at their Mountain View office. This role sits at the heart of designing distributed systems that process petabytes of data for the world&#39;s most advanced AI models. You will own the platform that transforms raw, massive-scale signals into the fuel that powers training, inference, and evaluation for millions of users.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Data Platform, you will be responsible for designing and building the underlying frameworks that allow internal teams to process massive datasets efficiently, abstracting away the complexity of &#39;ETL&#39; into self-service infrastructure. You will modernize our data stack by moving from batch-heavy patterns to event-driven architectures, utilizing modern streaming architecture to reduce latency for AI inference.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and build the underlying frameworks that allow internal teams to process massive datasets efficiently, abstracting away the complexity of &#39;ETL&#39; into self-service infrastructure.</li>
<li>Modernize our data stack by moving from batch-heavy patterns to event-driven architectures, utilizing modern streaming architecture to reduce latency for AI inference.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Master&#39;s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 3+ years experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor&#39;s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in Python, Scala, Java, or Go.</li>
<li>Deep Distributed Systems Knowledge: Demonstrated technical understanding of massive-scale compute engines (e.g., Apache Spark, Flink, Ray, Trino, or Snowflake).</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong background in streaming technologies (Kafka, Azure EventHubs, Pulsar) and stateful stream processing.</li>
<li>Experience with container orchestration (Kubernetes) for deploying data applications.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary range: $119,800 - $234,700 per year.</li>
<li>Comprehensive benefits package, including medical, dental, and vision insurance.</li>
<li>401(k) matching program.</li>
<li>Paid time off and holidays.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>Python, Scala, Java, Go, Apache Spark, Flink, Ray, Trino, Snowflake, Kafka, Azure EventHubs, Pulsar, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. They are known for their operating systems, productivity software, and cloud computing services. Microsoft&apos;s mission is to empower every person and every organization on the planet to achieve more.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-platform/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>4f2a566f-ec6</externalid>
      <Title>Member of Technical Staff - Copilot Data &amp; Insights</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Copilot Data &amp; Insights at their New York office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Copilot Data &amp; Insights, you will be responsible for architecting scalable, low-latency systems and data pipelines for ingesting customer signals, processing them using LLMs, and serving customer insights. You will design, build, and maintain robust pipelines for product usage data, providing insights that improve Copilot features. You will own orchestration, monitoring, and DevOps for critical data workflows. You will design data models and APIs that enable customer-loop insights using LLMs. You will collaborate with privacy, security, and responsible AI teams to ensure customer insight is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path past roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems and data pipelines for ingesting customer signals, processing them using LLMs, and serving customer insights.</li>
<li>Design, build, and maintain robust pipelines for product usage data, providing insights that improve Copilot features.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale data systems, AI platforms, frameworks, APIs, Kafka, Spark, Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is redefining the future of AI. They are seeking passionate engineers to tackle some of the most complex and impactful challenges of our time.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-copilot-data-insights/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>4c6c34df-ee0</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft are looking for a talented Member of Technical Staff - Data Platform at their Redmond office. This role sits at the heart of designing distributed systems that process petabytes of data for the world&#39;s most advanced AI models. You will own the platform that transforms raw, massive-scale signals into the fuel that powers training, inference, and evaluation for millions of users.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Data Platform, you will be responsible for designing and building the underlying frameworks that allow internal teams to process massive datasets efficiently, abstracting away the complexity of &#39;ETL&#39; into self-service infrastructure. You will modernize our data stack by moving from batch-heavy patterns to event-driven architectures, utilizing modern streaming architecture to reduce latency for AI inference.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and build the underlying frameworks that allow internal teams to process massive datasets efficiently</li>
<li>Modernize our data stack by moving from batch-heavy patterns to event-driven architectures</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Master&#39;s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 3+ years experience in business analytics, data science, software development, data modeling, or data engineering</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in Python, Scala, Java, or Go</li>
<li>Deep Distributed Systems Knowledge: Demonstrated technical understanding of massive-scale compute engines (e.g., Apache Spark, Flink, Ray, Trino, or Snowflake)</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong background in streaming technologies (Kafka, Azure EventHubs, Pulsar) and stateful stream processing</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive benefits package</li>
<li>Opportunities for professional growth and development</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $119,800 – $234,700 per year</Salaryrange>
      <Skills>Python, Scala, Java, Go, Apache Spark, Flink, Ray, Trino, Snowflake, Kafka, Azure EventHubs, Pulsar</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. They are known for their operating systems, productivity software, and cloud computing services. Microsoft&apos;s mission is to empower every person and every organization on the planet to achieve more.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-platform-2/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>03dd7222-bf0</externalid>
      <Title>Member of Technical Staff - Copilot Data &amp; Insights</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Copilot Data &amp; Insights at their Redmond office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff - Copilot Data &amp; Insights, you will be responsible for architecting scalable, low-latency systems and data pipelines for ingesting customer signals, processing them using LLMs, and serving customer insights. You will design, build, and maintain robust pipelines for product usage data, providing insights that improve Copilot features. You will own orchestration, monitoring, and DevOps for critical data workflows. You will design data models and APIs that enable customer-loop insights using LLMs. You will collaborate with privacy, security, and responsible AI teams to ensure customer insight is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path past roadblocks to get your work into the hands of users quickly and iteratively. You will enjoy working in a fast-paced, design-driven product development cycle. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems and data pipelines for ingesting customer signals, processing them using LLMs, and serving customer insights.</li>
<li>Design, build, and maintain robust pipelines for product usage data, providing insights that improve Copilot features.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale data systems, AI platforms, frameworks, APIs, Kafka, Spark, Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is redefining the future of AI. They are seeking passionate engineers to tackle some of the most complex and impactful challenges of our time.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-copilot-data-insights-3/</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>1c29c926-f4e</externalid>
      <Title>Member of Technical Staff - Copilot Data &amp; Insights</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff - Copilot Data &amp; Insights at their Mountain View office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>This role focuses on building customer-centric data pipelines and applications that automatically process customer insights using LLMs across Copilot and its features, generating insights on top of Azure environments, along with the dashboards, reporting, and APIs that power adaptive, context-aware experiences across Microsoft AI. We aim to make Copilot feel like your Copilot — responsive to your preferences, workflows, and goals — while preserving privacy, security, performance, and scale.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems and data pipelines for ingesting customer signals, processing them using LLMs, and serving customer insights.</li>
<li>Design, build, and maintain robust pipelines for product usage data, providing insights that improve Copilot features.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Experience with large scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Comprehensive benefits package.</li>
<li>Opportunities for professional growth and development.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>USD $139,900 – $274,800 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, large scale data systems, AI platforms, frameworks, APIs, Kafka, Spark, Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-copilot-data-insights-2/</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>f361254f-dee</externalid>
      <Title>Principal Software Engineer - Data, Personalization</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Principal Software Engineer to lead the design and development of distributed data infrastructure, APIs and personalization pipelines that drive Copilot&#39;s intelligence.</p>
<p><strong>About the Role</strong></p>
<p>As a Principal Software Engineer, you will work across Microsoft AI and Copilot teams to build scalable, low-latency systems for ingesting, processing, and serving personalized signals. You will design data models and APIs that enable Copilot to reason about user context, preferences, and history. You will build real-time and batch personalization engines that adapt Copilot&#39;s behavior. You will collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled. You will optimize for performance, reliability, and cost across diverse workloads and geographies. You will ship high-quality, well-tested, secure, and maintainable code. You will find a path past roadblocks to get your work into the hands of users quickly and iteratively. You will embody our Culture and Values.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Proficiency in backend technologies.</li>
<li>Familiarity with applied AI and its unique challenges.</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems and AI.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary.</li>
<li>Benefits and other compensation.</li>
<li>Opportunities for professional growth and development.</li>
<li>A positive, inclusive work environment.</li>
<li>A culture of innovation and collaboration.</li>
<li>A commitment to diversity, equity, and inclusion.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>USD $163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, backend technologies, applied AI, Kafka, Spark, Flink, large scale data systems, AI platforms, Machine Learning frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft continues to redefine the future of AI, building intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-3/</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-03-05</Postedate>
    </job>
    <job>
      <externalid>2a725219-246</externalid>
      <Title>Principal Software Engineer</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft are looking for a talented Principal Software Engineer at their Bengaluru office. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>We are looking for a Principal Software Engineer who is hands-on with production coding and system design to build the real-time data pipelines and feature/embedding materialization systems that feed online stores/caches and integrate tightly with ML inference serving. This role is ideal for engineers who enjoy building robust streaming + ETL systems (correctness, idempotency, backfills, late data), owning SLOs with strong observability and operational maturity, and optimizing end-to-end performance and cost across compute, storage, and serving integrations.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and implement real-time streaming ETL / feature pipelines (e.g., Flink or Spark Structured Streaming) that meet strict freshness and correctness constraints.</li>
<li>Build and operate reliable messaging and ingestion with Kafka/Pulsar (partitioning strategy, retries, ordering guarantees, DLQs, backpressure handling).</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Electrical/Computer Engineering, or a related field, with 8+ years of related experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Strong programming skills in at least one of C++, C#, or Python.</li>
<li>Hands-on experience in one or more of the following:
<ul>
<li>Building and operating streaming data pipelines in production (Flink or Spark Structured Streaming)</li>
<li>Distributed systems engineering with strong reliability and operational rigor</li>
<li>Messaging systems such as Kafka/Pulsar</li>
</ul>
</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Strong communication and collaboration skills, with experience working across engineering, applied science/ML, and product/business stakeholders.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package.</li>
<li>Opportunities for professional growth and development.</li>
<li>Collaborative and dynamic work environment.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>C++, C#, Python, Flink, Spark Structured Streaming, Kafka, Pulsar, Distributed systems engineering, Messaging systems, Observability and operational maturity</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices. They are a leader in the technology industry and have a strong presence in the global market.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer/</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-03-05</Postedate>
    </job>
    <job>
      <externalid>901a6402-db5</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>What you&#39;ll do</strong></p>
<p>Join Razer to help build and optimize data pipelines and data platforms that support analytics, product improvements, and foundational AI/ML data needs. Collaborate with cross-functional teams to ensure data is reliable, accessible, and governed. Tech stack includes Redshift, Airflow, and DBT.</p>
<p><strong>What you need</strong></p>
<ul>
<li>Strong Python and SQL</li>
<li>Hands-on experience with Redshift, Airflow, DBT</li>
<li>Mandatory hands-on experience with Apache Spark (batch and/or stream processing)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Redshift, Airflow, DBT, Apache Spark, Apache Flink, Apache Kafka, Hadoop ecosystem components, ETL design patterns, performance tuning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Razer</Employername>
      <Employerlogo>https://logos.yubhub.co/razer.com.png</Employerlogo>
      <Employerdescription>Razer is a global leader in the gaming industry, dedicated to creating cutting-edge products and experiences that define the ultimate gameplay. With a mission to revolutionize the way the world games, Razer is a place to do great work, offering opportunities to make an impact globally while working across a global team located across 5 continents.</Employerdescription>
      <Employerwebsite>https://razer.wd3.myworkdayjobs.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://razer.wd3.myworkdayjobs.com/en-US/Careers/job/Chengdu/Data-Engineer_JR2025006594</Applyto>
      <Location>Chengdu</Location>
      <Country></Country>
      <Postedate>2025-12-26</Postedate>
    </job>
  </jobs>
</source>