<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>28447796-a41</externalid>
      <Title>Senior Data Scientist/Senior Consultant Specialist</Title>
<Description><![CDATA[<p>Join HSBC and fulfil your potential. We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist, providing technical leadership for the end-to-end delivery of AI/ML use cases.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Owning end-to-end technical delivery of AI/ML use cases: problem framing, data strategy, modelling, evaluation, deployment, and monitoring.</li>
<li>Providing technical direction to data scientists and engineers, including design reviews, code reviews, and solution architecture decisions.</li>
<li>Establishing and enforcing best practices for reproducible research, ML engineering, and production readiness (testing, CI/CD, observability).</li>
<li>Driving adoption of reusable components (feature stores, model templates, evaluation harnesses, and prompt libraries where relevant).</li>
<li>Developing and optimising models across supervised/unsupervised learning, time series, NLP, and/or GenAI (as applicable to the domain).</li>
<li>Defining robust evaluation approaches (offline metrics, back-testing, A/B testing, calibration, fairness and stability checks) and ensuring models are resilient to data drift and changing business conditions, with clear retraining and monitoring strategies.</li>
<li>Implementing MLOps practices: model registry, automated pipelines, versioning, lineage, and monitoring (performance, drift, latency).</li>
<li>Optimising performance and cost (compute, storage, inference efficiency), balancing speed-to-market with HSBC control standards.</li>
<li>Contributing to cross-team communities of practice, sharing patterns, lessons learned, and accelerators.</li>
</ul>
<p>To be successful in this role, you should meet the following requirements:</p>
<ul>
<li>A degree in Computer Science, Statistics, Mathematics, Engineering, or related field (Master’s/PhD advantageous) with strong hands-on experience delivering production ML systems in a regulated environment (financial services preferred).</li>
<li>Expert-level Python and solid software engineering practices (clean code, testing, packaging, APIs).</li>
<li>Proven experience with ML frameworks (e.g., scikit-learn, XGBoost, PyTorch/TensorFlow) and data tooling (e.g., Spark).</li>
<li>Strong understanding of model lifecycle management (experimentation → deployment → monitoring → retraining), along with cloud-native delivery patterns and CI/CD.</li>
<li>Ability to communicate complex technical concepts to non-technical stakeholders and influence decisions.</li>
<li>Experience with GenAI/LLMs (prompt engineering, RAG, evaluation, guardrails, safety patterns).</li>
<li>Familiarity with feature stores, model registries, and orchestration tools (e.g., MLflow, Airflow).</li>
<li>Knowledge of model risk governance, explainability techniques (e.g., SHAP), and fairness testing.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, scikit-learn, XGBoost, PyTorch, TensorFlow, Spark, GenAI, LLMs, feature stores, model registries, orchestration tools, model risk governance, explainability techniques, fairness testing</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>HSBC</Employername>
      <Employerlogo>https://logos.yubhub.co/portal.careers.hsbc.com.png</Employerlogo>
      <Employerdescription>HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories.</Employerdescription>
      <Employerwebsite>https://portal.careers.hsbc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://portal.careers.hsbc.com/careers/job/563774610827412?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Pune, Maharashtra, India; Hyderabad, Telangana, India</Location>
      <Country></Country>
      <Postedate>2026-04-28</Postedate>
    </job>
    <job>
      <externalid>ed3cf4b4-074</externalid>
      <Title>Recommendations Engineer</Title>
      <Description><![CDATA[<p>As a member of the Recommendations team at Constructor, you will apply cutting-edge Machine Learning techniques to build the best recommendations-as-a-service product on the market. This will transform the ecommerce experience for hundreds of millions of users worldwide.</p>
<p>You will build high-load, real-time recommendation services, design metrics to evaluate recommendation relevance and performance, and lead the full development lifecycle from initial design to production. You will also participate in strategic planning to help drive product evolution and prioritization, and collaborate with stakeholders to align technical roadmaps with business needs.</p>
<p>To succeed in this role, you will need a deep understanding of ML fundamentals and experience building large-scale recommendation, retrieval, or ranking systems. You should be proficient in Python and SQL, with hands-on experience in big data systems such as Spark, Presto/Athena, and Hive. Additionally, you should have production-level ML experience, including deploying models to production and designing A/B tests to validate business impact.</p>
<p>In terms of benefits, Constructor offers unlimited vacation time, fully remote work, a work-from-home stipend, Apple laptops for new employees, and a training and development budget. The company also provides maternity and paternity leave, stock options, and regular team offsites to connect and collaborate.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>Machine Learning, Python, SQL, Spark, Presto/Athena, Hive</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/constructor.com.png</Employerlogo>
      <Employerdescription>Constructor is an AI-first ecommerce search and discovery platform launched in 2019.</Employerdescription>
      <Employerwebsite>https://constructor.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D397376477?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-28</Postedate>
    </job>
    <job>
      <externalid>5ce88202-297</externalid>
      <Title>Compute Optimization Researcher/Engineer</Title>
      <Description><![CDATA[<p>We are seeking a Compute Optimization Researcher/Engineer to build the systems that maximize the value of OpenAI&#39;s global compute capacity.</p>
<p>In this role, you will work on high-impact optimization problems spanning capacity allocation, demand forecasting, cluster planning, workload placement, and infrastructure utilization. You will combine mathematical modeling, software systems, and cross-functional execution to improve how compute is planned and consumed across GPU clusters, networking, storage, and data center environments.</p>
<p>This role is ideal for candidates with backgrounds in operations research, optimization, applied math, infrastructure systems, or large-scale capacity planning.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Building optimization models for compute allocation, workload scheduling, and cluster utilization.</li>
<li>Developing planning systems that balance supply, demand, cost, latency, and reliability constraints.</li>
<li>Creating forecasting frameworks for GPU demand, infrastructure growth, and capacity needs.</li>
<li>Designing decision tools for allocating compute across internal teams, products, and strategic priorities.</li>
<li>Partnering with architecture, infrastructure engineering, finance, and operations teams to translate business needs into mathematical models.</li>
<li>Integrating multiple operational data sources into planning systems and optimization workflows.</li>
<li>Improving utilization of GPUs, networking, power, cooling, and storage infrastructure.</li>
<li>Analyzing tradeoffs across first-party data centers, cloud providers, and hybrid environments.</li>
<li>Building dashboards, metrics, and operational tooling for capacity decision-making.</li>
<li>Leading ambiguous, cross-functional initiatives that improve infrastructure efficiency at scale.</li>
<li>Presenting recommendations clearly to technical leaders and executives.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>A doctorate in Computer Science, Engineering, Mathematics, Operations Research, Economics, or a related field.</li>
<li>5+ years of experience in optimization, planning, infrastructure analytics, or systems engineering.</li>
<li>Strong experience with linear programming, mixed-integer optimization, convex optimization, simulation, or forecasting methods.</li>
<li>Proficiency in Python and data tooling (SQL, Pandas, Spark, etc.).</li>
<li>Experience translating real-world business constraints into scalable optimization systems.</li>
<li>Strong analytical problem-solving skills with comfort operating in ambiguous environments.</li>
<li>Ability to influence cross-functional stakeholders without formal authority.</li>
<li>Excellent communication skills with both technical and non-technical audiences.</li>
</ul>
<p>Preferred qualifications include experience with large-scale infrastructure, cloud capacity planning, or data center operations, familiarity with tools such as Gurobi, CPLEX, CVXPY, Pyomo, or similar solvers, and experience optimizing GPU fleets, networking systems, or distributed compute environments.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$293K–$455K</Salaryrange>
      <Skills>optimization, planning, infrastructure analytics, systems engineering, linear programming, mixed-integer optimization, convex optimization, simulation, forecasting, Python, data tooling, SQL, Pandas, Spark, large-scale infrastructure, cloud capacity planning, data center operations, Gurobi, CPLEX, CVXPY, Pyomo, GPU fleets, networking systems, distributed compute environments</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>OpenAI</Employername>
      <Employerlogo>https://logos.yubhub.co/openai.com.png</Employerlogo>
      <Employerdescription>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity.</Employerdescription>
      <Employerwebsite>https://openai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ashbyhq.com/openai/dfec1204-1a3a-44e3-ade9-ac1b3e64f3f4?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco; Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-28</Postedate>
    </job>
    <job>
      <externalid>6f17e7be-561</externalid>
      <Title>Staff Product Manager</Title>
      <Description><![CDATA[<p>Who we are</p>
<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>About the job</p>
<p>This position is needed to own Data Governance initiatives across Twilio. This position is based in India. You will work with many teams across Twilio to ensure safe customer data handling, supporting data privacy and compliance. This team manages data pipeline security, data reliability, and access controls. We are also the bridge to the reporting systems trusted by customers, executives, and shareholders.</p>
<p>Responsibilities</p>
<ul>
<li>Lead the product requirements required to build and operate a central data catalog as a metadata store.</li>
<li>Drive Data Governance initiative, working across various organizations and stakeholders.</li>
<li>Understand the needs of our customers for operational and analytical purposes, and execute on governance of the data pipeline with required access management to fulfill these requirements.</li>
<li>Craft and deliver a vision for data governance at Twilio, working side by side with other product managers and engineering counterparts across Twilio R&amp;D.</li>
</ul>
<p>Qualifications</p>
<p>Required:</p>
<ul>
<li>10+ years of product management experience in a fast-paced company.</li>
<li>Strong background in Data Governance, having led at least one initiative that included metadata cataloging, data reliability, sensitive data classification, and access management.</li>
<li>Experience with data platforms, customer engagement platforms, or streaming applications.</li>
<li>Proficiency in the big data ecosystem, e.g. Kafka, Spark, Presto/Athena, or similar technologies.</li>
<li>Technically savvy and experienced with the cloud, APIs, communications, enterprise software, data reliability, and ETL techniques.</li>
<li>A customer-oriented approach: an amazing ability to understand the customer’s challenges and articulate a vision that solves them and makes an impact.</li>
<li>The ability to solicit customer requirements from many, often opposing, sources, prioritize them, and work with engineering and design to deliver.</li>
<li>A strategic problem solver who flourishes operating in broad scope, from conception through continuous operation of 24x7 services.</li>
<li>Experience solving sophisticated problems and the aptitude to navigate uncharted waters.</li>
</ul>
<p>Desired:</p>
<ul>
<li>Collaborative approach and ability to work with distributed, cross-functional teams.</li>
<li>Great communication skills: equally at home on Zoom presenting to an audience of developers as you are talking to users and turning what you hear into product requirements. Your best days are the ones where you do both on the same day.</li>
<li>Bachelor’s degree in Computer Science, Engineering, or equivalent experience required.</li>
</ul>
<p>Location</p>
<p>This role will be remote and based in India.</p>
<p>Travel</p>
<p>We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.</p>
<p>What We Offer</p>
<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>
<p>Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values , something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>
<p>If this role isn&#39;t what you&#39;re looking for, please consider other open positions. Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Governance, Metadata Store, Data Pipeline Security, Data Reliability, Access Management, Kafka, Spark, Presto/Athena, Cloud, APIs, Communications, Enterprise Software, ETL Techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7424250?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>e6146351-ea7</externalid>
      <Title>Software Engineer Intern (22nd June - 11th September, remote-US)</Title>
      <Description><![CDATA[<p>Join the team as our next Software Engineer Intern for a duration of 12 weeks, starting on 22nd June. This position is needed to design, develop, deploy and operate software solutions and help Twilio deliver real-time, low latency capabilities for next-generation communications.</p>
<p>As a Software Engineer Intern, you will experience the following:</p>
<ul>
<li>Be a Software Engineer, not just an &quot;intern&quot;.</li>
<li>Ship many different projects during your summer.</li>
<li>Solve problems in distributed computing, real-time DSP (audio processing), virtualization performance, distributed messaging, buses, and more.</li>
<li>Partner with other engineers on core feature development and services that ship to our users.</li>
<li>Embrace challenges, learn fast, and deliver great results.</li>
<li>Demonstrate a willingness to learn and grow; we will reciprocate with ample opportunity to do just that, in a friendly, fun and exciting startup environment!</li>
<li>Develop beautiful and profitable applications.</li>
<li>Demonstrate consistent improvement in your coding skills, issue-tracking and source control systems, and agile development mentality.</li>
<li>Participate in code reviews, bug tracking and project management with the rest of the Twilio Team.</li>
</ul>
<p>This role will be remote, based in the US. This role is not eligible to be hired in the San Francisco Bay Area, California.</p>
<p>There are many benefits to working at Twilio, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>
<p>The estimated pay ranges for this role are as follows:</p>
<ul>
<li>Based in Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont, or Washington D.C.: $47.00/hour</li>
<li>Based in New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area): $50.00/hour</li>
</ul>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Javascript, Golang, C, C++, unit and integration testing methodologies, data processing, analytics, business intelligence, reporting, Hadoop, Spark, AWS, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7850821?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>57b92c64-f58</externalid>
      <Title>Staff Software Engineer (L4)</Title>
      <Description><![CDATA[<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>Join the team as our next Staff Software Engineer in the Enterprise AI Engineering team. Twilio is undergoing a major business transformation powered by Enterprise AI, supported by a dedicated engineering team building the foundations for a unified, secure, and scalable operating system across GTM functions (Sales, Support, Operations, etc.) as well as Internal non-GTM functions (Finance, HR, Legal, etc.).</p>
<p>Our platform is designed to support a multitude of business functions by deploying intelligent agentic solutions that automate complex workflows and deliver unprecedented user experiences. We&#39;re building the future of work at Twilio, and this role offers the opportunity to be at the forefront of enterprise AI innovation.</p>
<p>This role focuses specifically on transforming how Twilio&#39;s Customer Support organization operates through AI-powered tools and agentic products. We are looking for Full-Stack Engineers who view AI as a fundamental shift in the software development lifecycle of engineering products and the delivery of beautiful, engaging user experiences.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Co-lead the design and development of our software infrastructure, driving technical vision and strategy to ensure scalability, reliability, and performance.</li>
<li>Drive the development of sophisticated, stateful web applications. You will oversee the integration of complex React-based front-ends with modular backend services, ensuring a seamless UI experience.</li>
<li>Serve as a technical leader in distributed systems and data technologies, with strong software engineering skills.</li>
<li>Drive technical innovation and research to stay at the forefront of emerging data technologies and best practices.</li>
<li>Mentor and elevate a team of high-performing engineers. You don’t just write great code; you foster a culture of technical excellence, helping senior and mid-level engineers level up through deep-dive code reviews and architectural workshops.</li>
<li>Collaborate closely with cross-functional teams to understand business requirements and translate them into scalable and efficient technical solutions.</li>
<li>Continuously adapt to the evolving JavaScript ecosystem to maximize engineering efficiency.</li>
<li>Ensure data quality, integrity, and security throughout the data lifecycle, adhering to industry best practices and compliance standards.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor&#39;s degree in Computer Science, Engineering, or a related field.</li>
<li>8+ years of experience in data engineering, software development, or a related field, with at least 3 years in a technical leadership role.</li>
<li>Experience with full-stack web development using modern languages and frameworks such as JavaScript, TypeScript, or React.</li>
<li>Proven track record of architecting and delivering complex data projects at scale, with a deep understanding of data infrastructure and distributed systems.</li>
<li>Strong understanding of data modeling, data warehousing, and ETL processes, with experience designing and optimizing data pipelines.</li>
<li>Excellent communication and collaboration skills, with the ability to influence technical decisions and drive alignment across teams.</li>
<li>Strong leadership skills, with a track record of mentoring and developing high-performing engineering teams.</li>
<li>Demonstrated ability to thrive in a fast-paced, dynamic environment and deliver results under tight timelines.</li>
</ul>
<p>Desired:</p>
<ul>
<li>Experience developing production-quality LLM applications and using modern agent frameworks such as LangChain, LangGraph, LlamaIndex, LangSmith, Langfuse, CrewAI, and/or others is a plus.</li>
<li>Expertise in big data technologies such as Hadoop, Spark, Kafka, and cloud-based data services (AWS/GCP/Azure).</li>
</ul>
<p>For this role, you may be required to travel occasionally to participate in project or team in-person meetings.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>JavaScript, Typescript, React, Full-stack development, Data engineering, Software development, Distributed systems, Data technologies, Cloud-based data services, LLM applications, Modern agent frameworks, Hadoop, Spark, Kafka, AWS/GCP/Azure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a communications platform that provides cloud communication APIs and customer service software.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7716279?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - Colombia</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>65a43303-e5b</externalid>
      <Title>Staff Analytics Engineer</Title>
      <Description><![CDATA[<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day.</p>
<p>As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.</p>
<p>We use Artificial Intelligence (AI) to help make our hiring process efficient. That said, every hiring decision is made by real Twilions!</p>
<p>Join the team as Twilio’s next Staff Analytics Engineer, R&amp;D.</p>
<p>This position is needed to advance the consistency &amp; quality of our R&amp;D analytics data layer and accelerate the development velocity of analysts. Our Data Science and Analytics team seeks to empower R&amp;D to make data-backed decisions that accelerate innovation and improve product performance. You will work closely within our team and across Product &amp; Engineering to design and maintain a robust analytics data layer that enables trusted reporting on R&amp;D metrics.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and implement a formal analytics data layer using AWS Glue, Presto, and LookML</li>
<li>Collaborate within the Data Science &amp; Analytics team and across Product &amp; Engineering to define, document, and maintain alignment on metric definitions and data lineage</li>
<li>Develop and maintain automated data reconciliation and quality checks to proactively identify and resolve discrepancies, ensuring accuracy and consistency of critical reports and dashboards</li>
<li>Lead investigations into complex data anomalies, conduct root cause analysis, and communicate findings and solutions effectively to both technical and non-technical audiences</li>
<li>Mentor and guide members of the data science and analytics team, establishing and enforcing best practices around data modeling, testing, documentation, and code review</li>
</ul>
<p>Qualifications:</p>
<p>Required:</p>
<ul>
<li>6+ years of professional experience in analytics engineering, data engineering, business intelligence, or a related discipline, ideally in a B2B SaaS environment.</li>
<li>Advanced expertise in SQL and hands-on experience designing data models and orchestrating data pipelines using AWS Glue or similar technologies.</li>
<li>Demonstrated ability to partner with cross-functional stakeholders to codify, document, and reconcile critical business metrics, ensuring company-wide data alignment.</li>
<li>Proven track record of owning ambiguous projects from beginning to end with minimal guidance.</li>
<li>Strong technical communication and mentorship skills, with the ability to convey complex concepts to a range of audiences.</li>
</ul>
<p>Desired:</p>
<ul>
<li>Intermediate expertise in Python; distributed computing technologies like Hive, Presto, and Spark; and dashboarding tools like Looker or Tableau</li>
<li>Proven track record of implementing robust data quality and testing frameworks, including expertise with dbt tests, CI/CD, and data observability</li>
<li>Experience evangelizing and establishing data culture and best practices within a fast-paced technology organization</li>
</ul>
<p>Location:</p>
<p>This role will be remote, but candidates are not eligible to be hired in CA, CT, NJ, NY, PA, or WA.</p>
<p>Travel:</p>
<p>We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.</p>
<p>What We Offer:</p>
<p>Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>
<p>Compensation:</p>
<p>The estimated pay ranges for this role are as follows:</p>
<ul>
<li>Based in Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont, or Washington D.C.: $155,520 - $194,400.</li>
<li>Based in New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area): $164,640 - $205,800.</li>
<li>Based in the San Francisco Bay area, California: $182,960 - $228,700.</li>
</ul>
<p>This role may be eligible to participate in Twilio’s equity plan and corporate bonus plan. All roles are generally eligible for the following benefits: health care insurance, 401(k) retirement account, paid sick time, paid personal time off, and paid parental leave.</p>
<p>The successful candidate’s starting salary will be determined based on permissible, non-discriminatory factors such as skills, experience, and geographic location.</p>
<p>Application deadline information:</p>
<p>Applications for this role are intended to be accepted until May 29th, 2026, but this date may change based on business needs.</p>
<p>Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That&#39;s why we seek out colleagues who embody our values , something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you&#39;re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$155,520 - $228,700</Salaryrange>
      <Skills>AWS Glue, Presto, LookML, SQL, data engineering, business intelligence, Python, Hive, Spark, Tableau</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7551660?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>3f06d188-50a</externalid>
      <Title>Product Manager 3</Title>
      <Description><![CDATA[<p>Join the team as our next Data Platform Product Manager in the Data Governance and Insights team.</p>
<p>This position is needed to drive Data Insights and Twilio&#39;s Data Governance initiatives across Twilio. This position is based in India. You will work with many teams within Twilio to ensure safe customer data handling, supporting data privacy and compliance. This team manages data pipeline security, data reliability, and access controls. We are also the bridge to the reporting systems trusted by customers, executives, and shareholders.</p>
<p>In this role, you’ll:</p>
<ul>
<li>Champion customer-facing product development that will reduce time to insights.</li>
<li>Own the cradle-to-grave product lifecycle for insights platforms.</li>
<li>Understand the needs of our end customers in the global communications market and build a platform to help internal teams manage and leverage their data to derive meaningful insights.</li>
<li>Support Data Governance initiatives for data pipelines and insights products, working with product managers and engineering counterparts across various organizations and stakeholders.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, customer engagement platforms, streaming applications, Kafka, ElasticSearch, Clickhouse, Spark, Presto/Athena, cloud, APIs, communications, enterprise software, data reliability, ETL techniques, collaborative approach, ability to work with distributed, cross-functional teams, great communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7424471?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>8abaab74-f0c</externalid>
      <Title>Software Engineer</Title>
      <Description><![CDATA[<p>Join the Voice &amp; Video Postflight team as Twilio&#39;s next Senior Software Engineer.</p>
<p>This position is needed to build and evolve next-generation distributed systems that empower our customers through high-performance APIs. You will be tasked with solving the complex challenges inherent in supporting the massive scale of Twilio Voice, ensuring our infrastructure remains robust as we expand our capabilities.</p>
<p>As a Software Engineer, you will focus on the intersection of large-scale API development and advanced data systems. You will work on designing and implementing low-latency, highly scalable architectures that leverage modern database technologies to provide customers with seamless access to large-scale data.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect and implement next-generation distributed systems capable of handling the immense throughput and concurrency requirements of Twilio Voice.</li>
<li>Design low-latency, high-scale APIs that empower customers with real-time access to their data and communications infrastructure.</li>
<li>Optimize and manage distributed database environments, ensuring high availability and performance across high-volume data stores.</li>
<li>Own the full development lifecycle, from initial system design and prototyping to the continuous operation of 24x7 production services.</li>
<li>Collaborate across engineering teams to solve &#39;hard&#39; distributed systems problems, ensuring our API layer is both resilient and developer-friendly.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>A Bachelor&#39;s or Master&#39;s degree and 5+ years of experience in software engineering, with a focus on backend or infrastructure systems.</li>
<li>Expertise in Distributed Systems: A deep understanding of consistency models, partition tolerance, and the challenges of scaling stateful services.</li>
<li>Core Languages: Proficiency in Java, Spring, and Dropwizard, and a strong grasp of building RESTful APIs at scale.</li>
<li>Database Fundamentals: Practical experience working with and tuning PostgreSQL, Aurora, or similar relational databases.</li>
<li>Cloud Infrastructure: Familiarity with deploying and managing large-scale services on AWS or GCP.</li>
<li>Operational Excellence: Comfortable operating in an agile environment with a &#39;you build it, you run it&#39; mentality.</li>
</ul>
<p>Desired:</p>
<ul>
<li>OLAP &amp; Big Data: Experience with ClickHouse or other column-oriented databases for high-performance analytical queries.</li>
<li>Infrastructure as Code: Familiarity with tools such as Terraform or Harness for managing systems.</li>
<li>Data Pipelines: Prior exposure to technologies like Kafka or Spark for moving and processing data between distributed systems.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Distributed Systems, Java, Spring, Dropwizard, PostgreSQL, Aurora, AWS, GCP, Agile, ClickHouse, Terraform, Harness, Kafka, Spark</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7785202?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - Ireland</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>3b19e1e2-e3e</externalid>
      <Title>Software Engineer, New Grad - Infrastructure</Title>
      <Description><![CDATA[<p>Are you passionate about engineering quality, performance, and increasing the impact of engineers around you? As a Software Engineer in Palantir&#39;s Foundations organization, you&#39;ll have the opportunity to grow more quickly than you ever imagined, building the shared infrastructure that underpins the Palantir Foundry, Palantir Gotham, and Palantir Apollo platforms.</p>
<p>You&#39;ll be involved throughout the product lifecycle, from idea generation and design, to execution and rollout, all while being paired with a mentor dedicated to your growth and success. You&#39;ll collaborate closely with technical and non-technical counterparts to understand our developers&#39; and customers&#39; problems and build infrastructure to tackle them.</p>
<p>The role involves building features to improve the developer experience for other Palantir engineers, or improving the scalability and reliability of Palantir&#39;s platforms. You&#39;ll work with a variety of languages, including Java and Go for backend and TypeScript for frontend, alongside open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux.</p>
<p>We&#39;re looking for engineers who can write clean, effective code and learn new languages quickly. Alongside peers that bring diverse experience, you&#39;ll build your skills to apply the best technology to solve a given problem.</p>
<p>In this role, you&#39;ll have the opportunity to:</p>
<ul>
<li>Build features to improve the developer experience for other Palantir engineers</li>
<li>Improve the scalability and reliability of Palantir&#39;s platforms</li>
<li>Collaborate with technical and non-technical counterparts to understand our developers&#39; and customers&#39; problems and build infrastructure to tackle them</li>
<li>Work with a variety of languages, including Java and Go for backend and TypeScript for frontend</li>
<li>Learn and apply new technologies to solve complex problems</li>
</ul>
<p>If you&#39;re passionate about engineering quality, performance, and increasing the impact of engineers around you, this could be the perfect opportunity for you.</p>
<p><strong>Requirements</strong></p>
<ul>
<li>Engineering background in fields such as Computer Science, Mathematics, Software Engineering, or Physics</li>
<li>Familiarity with data structures, storage systems, cloud infrastructure, frontend frameworks, and other technical tools</li>
<li>Experience coding in programming languages, such as Java, C++, Python, TypeScript, JavaScript, or similar languages</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Flexible working hours and remote work options</li>
<li>Access to cutting-edge technologies and tools</li>
<li>Recognition and rewards for outstanding performance</li>
</ul>
<p><strong>How to Apply</strong></p>
<p>If you&#39;re interested in this exciting opportunity, please submit your application, including your resume and a thoughtful cover letter explaining why you&#39;re the perfect fit for this role. We look forward to hearing from you!</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$145,000 - $155,000/year</Salaryrange>
      <Skills>Java, Go, TypeScript, Cassandra, Spark, Elasticsearch, React, Redux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, serving various industries and partners worldwide.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/7d75bed5-45d8-4876-840a-2d92ea79c98d?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>49cb8f87-4ad</externalid>
      <Title>Software Engineer, New Grad - Infrastructure</Title>
      <Description><![CDATA[<p>Are you passionate about engineering quality, performance, and increasing the impact of engineers around you?</p>
<p>As a Software Engineer in Palantir&#39;s Foundations organization, you&#39;ll have the opportunity to grow more quickly than you ever imagined, building the shared infrastructure that underpins the Palantir Foundry, Palantir Gotham, and Palantir Apollo platforms.</p>
<p>You&#39;ll be involved throughout the product lifecycle, from idea generation and design, to execution and rollout, all while being paired with a mentor dedicated to your growth and success.</p>
<p>You&#39;ll collaborate closely with technical and non-technical counterparts to understand our developers&#39; and customers&#39; problems and build infrastructure to tackle them.</p>
<p>We use a variety of languages, including Java and Go for backend and TypeScript for frontend, alongside open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux.</p>
<p>We&#39;re looking for engineers who can write clean, effective code and learn new languages quickly, alongside peers who bring diverse experience.</p>
<p>Our software is constantly evolving, so we need engineers who can do the same.</p>
<p><strong>Core Responsibilities:</strong></p>
<ul>
<li>Build features to improve the developer experience for other Palantir engineers</li>
<li>Improve the scalability and reliability of Palantir&#39;s platforms</li>
</ul>
<p><strong>Technologies We Use:</strong></p>
<ul>
<li>Java and Go for backend</li>
<li>TypeScript for frontend</li>
<li>Cassandra, Spark, Elasticsearch, React, and Redux</li>
</ul>
<p><strong>What We Value:</strong></p>
<ul>
<li>Passion for helping other developers build better applications</li>
<li>Empathy for the impact your changes will have on the workflows and productivity of developers and end users</li>
<li>Ability to communicate and collaborate with a variety of individuals</li>
</ul>
<p><strong>What We Require:</strong></p>
<ul>
<li>Engineering background in fields such as Computer Science, Mathematics, Software Engineering, or Physics</li>
<li>Familiarity with data structures, storage systems, cloud infrastructure, frontend frameworks, and other technical tools</li>
<li>Experience coding in programming languages, such as Java, C++, Python, TypeScript, JavaScript, or similar languages</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$145,000 - $155,000/year</Salaryrange>
      <Skills>Java, Go, TypeScript, Cassandra, Spark, Elasticsearch, React, Redux</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and locate missing children.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/4abf26b4-795c-420a-bf22-1ab98db268b4?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>9adc643a-0b0</externalid>
      <Title>Staff Software Engineer (L4) Data Platform</Title>
      <Description><![CDATA[<p>We are seeking an experienced Staff Engineer to join our Data Substrate team. In this role, you will be responsible for architecting scalable and reliable data solutions, collaborating closely with cross-functional partners driving technical innovation, and mentoring a team of talented engineers.</p>
<p>As a Staff Engineer, you will serve as a subject matter expert in distributed systems and data technologies, bringing strong software engineering skills. You will architect and implement scalable and efficient data systems, storage solutions, and processing frameworks using state-of-the-art technologies.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Serving as a subject matter expert in distributed systems and data technologies, with strong software engineering skills</li>
<li>Architecting and implementing scalable and efficient data systems, storage solutions, and processing frameworks using state-of-the-art technologies</li>
<li>Driving technical innovation and research to stay at the forefront of emerging data technologies and best practices</li>
<li>Mentoring and coaching a team of talented engineers, fostering a culture of technical excellence, collaboration, and continuous learning</li>
<li>Collaborating closely with cross-functional teams to understand business requirements and translate them into scalable and efficient technical solutions</li>
<li>Ensuring data quality, integrity, and security throughout the data lifecycle, adhering to industry best practices and compliance standards</li>
</ul>
<p>Required qualifications include:</p>
<ul>
<li>Bachelor&#39;s or Master&#39;s degree in Computer Science, Engineering, or a related field</li>
<li>8+ years of experience in software development or a related field</li>
<li>Proven track record of architecting and delivering complex data projects at scale, with a deep understanding of data infrastructure and distributed systems</li>
<li>Expertise in big data technologies such as Hadoop, Spark, Kafka, and other distributed computing systems</li>
<li>Experience designing, building, and operating large-scale systems using AWS technologies</li>
<li>Proficiency in programming languages such as Python, Java, or Scala, with strong problem-solving skills and attention to detail</li>
<li>Experience designing or working with Data Lakehouse architectures, including hands-on experience with Hudi, Iceberg, or Delta data formats</li>
<li>Excellent communication and collaboration skills, with the ability to influence technical decisions and drive alignment across teams</li>
<li>Strong leadership skills, with a track record of mentoring and developing junior engineers</li>
<li>Demonstrated ability to thrive in a fast-paced, dynamic environment and deliver results under tight timelines</li>
</ul>
<p>Desired qualifications include:</p>
<ul>
<li>Contributions to OSS projects</li>
<li>Familiarity with data modeling, data warehousing, and ETL processes</li>
</ul>
<p>This role will be remote, and based in the United States. Travel may be required to participate in project or team in-person meetings.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$171,120.00 to $213,900.00</Salaryrange>
      <Skills>Distributed systems, Data technologies, Software engineering, Big data technologies, Hadoop, Spark, Kafka, AWS technologies, Python, Java, Scala, Data Lakehouse architectures, Hudi, Iceberg, Delta data formats</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a cloud communication platform that provides software developers with tools to build, scale and operate real-time communication and collaboration applications.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7782805?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>2ceda0cf-914</externalid>
      <Title>Software Engineer, New Grad</Title>
      <Description><![CDATA[<p>A Software Engineer at Palantir builds software at scale to transform how organisations around the world use data. You&#39;ll contribute high-quality code directly to Palantir Gotham, Palantir Apollo, or Palantir Foundry: products that are deployed at some of the most important institutions across the public and private sectors.</p>
<p>As a Software Engineer, you are involved throughout the product lifecycle - from idea generation, design, and prototyping, to execution and shipping, all while being paired with a mentor dedicated to your growth and success. You&#39;ll collaborate closely with technical and non-technical counterparts to understand our customers&#39; problems and build products that tackle them.</p>
<p>One of the most effective ways to understand what our users need is to meet them. You may receive an opportunity to tour the assembly line at an auto-manufacturer or join a counter-terror analyst at their desk to really understand their mission and difficulties.</p>
<p>We encourage communication and collaboration among teams to share context, skills, and experience, so you&#39;ll also have the opportunity to learn about other business areas.</p>
<p>Our software is constantly evolving, so we need engineers who can do the same. Alongside peers that bring diverse experience, you&#39;ll build your skills to apply the best technology to solve a given problem.</p>
<p>We use a variety of languages, including Java and Go for backend and Typescript for frontend. Open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux are also part of our tech stack. Industry-standard build tooling, including Gradle and GitHub, are used to manage our codebase.</p>
<p>We value the ability to communicate and collaborate with a variety of individuals, including engineers, users, and non-technical team members. We also require an engineering background in fields such as Computer Science, Mathematics, Software Engineering, or Physics.</p>
<p>To apply, please submit an updated resume/CV in PDF format and thoughtful responses to our application questions.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$145,000 - $155,000/year</Salaryrange>
      <Skills>Java, Go, Typescript, Cassandra, Spark, Elasticsearch, React, Redux, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/dea9d3d5-75b2-4588-b7bd-585a47b79c8c?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>e778f2de-b17</externalid>
      <Title>Principal Engineer</Title>
      <Description><![CDATA[<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>As a Principal Engineer, you will own the technical strategy for the Phone Numbers Platform, transitioning legacy systems into a modern, distributed microservices architecture that supports global scale. You will be expected to pioneer AI-driven development workflows using LLMs and AI agents to accelerate code generation, automate architectural reviews, and synthesize complex global telecommunications regulations into executable logic.</p>
<p>In this role, you&#39;ll:</p>
<ul>
<li>Lead the architecture and long-term technical roadmap for the Phone Numbers Platform.</li>
<li>Accelerate with AI: Lead by example in using AI coding assistants to automate boilerplate, generate exhaustive test suites, and perform rapid prototyping.</li>
<li>Be an owner: Drive technical excellence across multiple teams, making critical architectural decisions that balance rapid feature innovation with world-class reliability and security.</li>
<li>Wear the customer&#39;s shoes: Partner with Product Managers to translate complex global telecommunications regulations and customer needs into elegant, developer-first APIs and application workflows.</li>
<li>Empower others: Mentor and uplift senior and staff engineers. Foster a culture of technical curiosity, ownership, and continuous learning.</li>
<li>Ruthlessly prioritize: Identify and mitigate technical risks early, ranging from scalability bottlenecks to security vulnerabilities, and lead the team through complex production incidents with a focus on long-term remediation.</li>
<li>Write it down: Communicate complex technical strategies to both executive leadership and engineering teams through high-quality design documents, RFCs, and presentations.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$188,240.00 - $235,300.00</Salaryrange>
      <Skills>Java, Go, AWS, Kubernetes, Terraform, CI/CD, API design, Data processing, SQL/NoSQL, Redis, Kafka, Spark, Telecom domain expertise, Agentic systems, LLMs, AI coding assistants</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a cloud communication platform that provides APIs and services for building, scaling, and operating real-time communication and collaboration applications.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7811868?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>2ed69621-58e</externalid>
      <Title>Software Engineer, New Grad</Title>
      <Description><![CDATA[<p>A Software Engineer at Palantir builds software at scale to transform how organisations around the world use data. You&#39;ll contribute high-quality code directly to Palantir Gotham, Palantir Apollo, or Palantir Foundry: products that are deployed at some of the most important institutions across the public and private sectors. You&#39;ll create features used by research scientists, aerospace engineers, intelligence analysts, and economic forecasters in countries around the world.</p>
<p>As a Software Engineer, you are involved throughout the product lifecycle - from idea generation, design, and prototyping, to execution and shipping, all while also being paired with a mentor dedicated to your growth and success. You&#39;ll collaborate closely with technical and non-technical counterparts to understand our customers&#39; problems and build products that tackle them.</p>
<p>One of the most effective ways to understand what our users need is to meet them. You may receive an opportunity to tour the assembly line at an auto-manufacturer or join a counter-terror analyst at their desk to really understand their mission and difficulties.</p>
<p>SWE principles include:</p>
<ul>
<li>Ownership: We see projects through from beginning to end in spite of obstacles we may encounter.</li>
<li>Collaboration: We work internally with people from a variety of backgrounds, such as other Software Engineers, Product Managers, Designers, and Product Reliability Engineers. We also partner with our business development teams (Forward Deployed Engineers, Deployment Strategists) in order to understand and solve our customers&#39; problems.</li>
<li>Trust: We trust each other to effectively handle time and priorities, and don&#39;t micromanage. We want people to have the space to think for themselves, while feeling supported by their team.</li>
</ul>
<p>Technologies We Use</p>
<p>It doesn’t matter what languages you know when you join us; what matters is that you can write clean, effective code and learn new languages quickly. Our software is constantly evolving, so we need engineers who can do the same. Alongside peers who bring diverse experience - whether you’re a former university Teaching Assistant, switched to computer science recently, or are a hackathon enthusiast - you&#39;ll build your skills to apply the best technology to solve a given problem.</p>
<p>Right now, we use:</p>
<ul>
<li>A variety of languages, including Java and Go for backend and Typescript for frontend</li>
<li>Open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Go, Typescript, Cassandra, Spark, Elasticsearch, React, Redux, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/d372c805-d0cd-4a10-9522-fbecc78d6f3e?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>ea66c3fe-01d</externalid>
      <Title>Software Engineer, New Grad</Title>
      <Description><![CDATA[<p>A Software Engineer at Palantir builds software at scale to transform how organisations around the world use data. You&#39;ll have an opportunity to grow quickly as you contribute high-quality code directly to Palantir Gotham, Palantir Apollo, or Palantir Foundry. You&#39;ll create features used by research scientists, aerospace engineers, intelligence analysts, and economic forecasters in countries around the world.</p>
<p>As a Software Engineer, you are involved throughout the product lifecycle - from idea generation, design, and prototyping, to execution and shipping. You&#39;ll collaborate closely with technical and non-technical counterparts to understand our customers&#39; problems and build products that tackle them.</p>
<p>We encourage communication and collaboration among teams to share context, skills, and experience. You&#39;ll also have the opportunity to learn about other business areas.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Contributing high-quality code directly to Palantir Gotham, Palantir Apollo, or Palantir Foundry</li>
<li>Collaborating with technical and non-technical counterparts to understand customer problems and build products that tackle them</li>
<li>Working closely with cross-functional teams to share context, skills, and experience</li>
</ul>
<p>We&#39;re looking for individuals who are passionate about building software and have a strong foundation in computer science. You should be able to write clean, effective code and learn new languages quickly.</p>
<p>In addition to a competitive salary, we offer a comprehensive benefits package, including medical, dental, and vision insurance, as well as a 401(k) plan. We also provide opportunities for professional growth and development, including training and mentorship programs.</p>
<p>If you&#39;re excited about the opportunity to work with a talented team of engineers and contribute to the development of innovative software solutions, we encourage you to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$145,000 - $155,000/year</Salaryrange>
      <Skills>Java, Go, Typescript, Cassandra, Spark, Elasticsearch, React, Redux, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/94984771-0704-446c-88c6-91ce748f6d92?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>401a7fa7-d15</externalid>
      <Title>Software Engineer, Internship</Title>
      <Description><![CDATA[<p>A Software Engineer at Palantir builds software at scale to transform how organisations around the world use data. As a Software Engineer Intern, you&#39;ll contribute high-quality code directly to Palantir Gotham, Palantir Foundry, or Palantir Apollo: products that are deployed at some of the most important institutions across the public and private sectors.</p>
<p>You&#39;ll create features used by research scientists, aerospace engineers, intelligence analysts, and economic forecasters, in countries around the world. Palantir&#39;s Product Development organisation is made up of small teams of Software Engineers, each focusing on a specific aspect of a product.</p>
<p>Core Responsibilities:</p>
<ul>
<li>Involved throughout the product lifecycle - from idea generation, design, and prototyping to execution and shipping</li>
<li>Collaborate closely with technical and non-technical counterparts to understand our customers&#39; problems and build products that tackle them</li>
<li>Meet users to understand their needs and difficulties</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Varying languages, including Java and Go for backend and Typescript for frontend</li>
<li>Open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p>What We Value:</p>
<ul>
<li>Ability to communicate and collaborate with a variety of individuals, including engineers, users and non-technical team members</li>
<li>Willingness to learn and make decisions independently, and the ability to ask questions when needed</li>
<li>Active US Security clearance, or eligibility and willingness to obtain a US Security clearance</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Engineering background in fields such as Computer Science, Mathematics, Software Engineering, and Physics</li>
<li>Familiarity with data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools</li>
<li>Experience coding in programming languages, such as Java, C++, Python, JavaScript, or similar languages</li>
<li>Must be planning on graduating in 2027</li>
</ul>
<p>To apply, please submit the following:</p>
<ul>
<li>An updated resume / CV - please submit it in PDF format</li>
<li>Thoughtful responses to our application questions</li>
</ul>
<p>Salary:</p>
<p>The estimated salary for this position is $10,500/month.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$10,500/month</Salaryrange>
      <Skills>Java, Go, Typescript, Cassandra, Spark, Elasticsearch, React, Redux, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more. The company has a global presence.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/7d69cf8a-06fd-4f05-bd84-27149db29c4d?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>a08ffbea-000</externalid>
      <Title>Software Engineer, Internship</Title>
      <Description><![CDATA[<p>A Software Engineer at Palantir builds software at scale to transform how organisations around the world use data. As a Software Engineer Intern, you&#39;ll contribute high-quality code directly to Palantir Gotham, Palantir Foundry, or Palantir Apollo. You&#39;ll create features used by research scientists, aerospace engineers, intelligence analysts, and economic forecasters, in countries around the world.</p>
<p>As a Software Engineer, you are involved throughout the product lifecycle - from idea generation, design, and prototyping to execution and shipping. You&#39;ll collaborate closely with technical and non-technical counterparts to understand our customers&#39; problems and build products that tackle them.</p>
<p>We encourage communication and collaboration among teams to share context, skills, and experience, so you&#39;ll learn about many different aspects of each product.</p>
<p>Our software is constantly evolving, so we need engineers who can do the same. Alongside peers that bring diverse experience, you&#39;ll build your skills to apply the best technology to solve a given problem.</p>
<p>We use varying languages, including Java and Go for backend and Typescript for frontend. We also use open-source technologies like Cassandra, Spark, Elasticsearch, React, and Redux, alongside industry-standard build tooling, including Gradle, Webpack, and GitHub.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Go, Typescript, Cassandra, Spark, Elasticsearch, React, Redux, Gradle, Webpack, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/76a60923-bb49-40f5-b061-7c7eb1299602?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>c9e5f5a8-0e3</externalid>
      <Title>Senior Backend Software Engineer - Infrastructure</Title>
      <Description><![CDATA[<p>A Senior Backend Software Engineer - Infrastructure at Palantir will contribute high-quality code to underpin Palantir Foundry and Gotham with performant, secure, and scalable building blocks, enabling products deployed to the most important institutions in the public and private sector.</p>
<p>You will build the foundational capabilities that power our products used by research scientists, aerospace engineers, intelligence analysts, and economic forecasters, in countries around the world.</p>
<p>We&#39;re hiring engineers who are passionate about solving real-world problems and empowering both developers and end-users to work optimally.</p>
<p>If you’re motivated to develop reliable, performant, and scalable systems, and to design robust APIs and primitives, this role offers the opportunity to make a significant impact on our products and the people who use them.</p>
<p>As a Senior Backend Software Engineer - Infrastructure, you will:</p>
<ul>
<li>Build a performant search and indexing ecosystem for complex granularly permissioned data</li>
<li>Contribute to open-source data processing libraries, integrating the latest innovations to achieve performance gains</li>
<li>Build the distributed systems that power large scale compute workloads, orchestrating and efficiently scheduling hundreds of thousands of containers every hour</li>
<li>Design architecture and opinionated APIs to keep application developers on the happy path</li>
<li>Build tracing and performance observability into high-scale distributed microservice architectures</li>
<li>Build reliable, performant, and scalable systems for storage, auth, or asset serving to enable other product teams to build robust applications without deep domain expertise in the underlying systems</li>
<li>Automate the deployment, management, and operations of complex distributed systems like Cassandra, Elasticsearch, Kafka, and more across different environments</li>
</ul>
<p>We use different backend languages, including Java, Rust, and Go, and open-source technologies like Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink, and industry-standard build tooling, including Gradle and GitHub.</p>
<p>To succeed in this role, you will need to demonstrate:</p>
<ul>
<li>Ability to collaborate and empathize with a variety of individuals</li>
<li>Ability to learn new technology and concepts, even without in-depth experience</li>
<li>A bias towards quality and attention to edge cases (“anything that can go wrong will go wrong”), writing code that is defensive against all possibilities</li>
<li>Leading solutions and APIs with users in mind while maintaining a high engineering bar</li>
</ul>
<p>We require:</p>
<ul>
<li>6+ years of experience designing, building, and operating scalable and reliable infrastructure systems in a production environment</li>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics or similar field</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages</li>
<li>Familiarity with storage and data processing systems, cloud infrastructure, and other technical tools</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality</li>
</ul>
<p>The estimated salary range for this position is $135,000 - $200,000/year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Java, Rust, Go, Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/b5ad6660-8145-4be5-97e2-3799f2912f5b?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>3084b139-c47</externalid>
      <Title>Senior Software Engineer, Data Engineering</Title>
      <Description><![CDATA[<p>Join us in building the future of finance.</p>
<p>Our mission is to democratize finance for all.</p>
<p>An estimated $124 trillion of assets will be inherited by younger generations in the next two decades. The largest transfer of wealth in human history.</p>
<p>If you’re ready to be at the epicenter of this historic cultural and financial shift, keep reading.</p>
<p><strong>About the team + role</strong></p>
<p>We are building an elite team, applying frontier technologies to the world’s biggest financial problems. We’re looking for bold thinkers. Sharp problem-solvers. Builders who are wired to make an impact.</p>
<p>Robinhood isn’t a place for complacency, it’s where ambitious people do the best work of their careers.</p>
<p>We’re a high-performing, fast-moving team with ethics at the center of everything we do. Expectations are high, and so are the rewards.</p>
<p>The Data Engineering team builds and maintains the foundational datasets that power decision-making across Robinhood.</p>
<p>We design reliable, scalable data systems that support product analytics, growth strategy, financial reporting, experimentation, and machine learning.</p>
<p>The team partners closely with Product, Engineering, Data Science, and Finance to ensure accurate, well-modeled data is available to teams across the company.</p>
<p>Our work directly influences how Robinhood measures performance, improves customer experience, and scales its products.</p>
<p>As a Senior Data Engineer, you will design, build, and evolve core datasets that track product performance and company-wide metrics.</p>
<p>You will develop scalable data pipelines that ingest application events and database snapshots into our data lake, ensuring high data quality and reliability.</p>
<p>You’ll collaborate with application engineers to improve data generation patterns and with analytics teams to design intuitive, well-documented data models.</p>
<p>This is an opportunity to shape the technical foundation that supports data-informed decisions across the organization!</p>
<p>This role is based in our Menlo Park, CA office, with in-person attendance expected at least 3 days per week.</p>
<p>At Robinhood, we believe in the power of in-person work to accelerate progress, spark innovation, and strengthen community.</p>
<p>Our office experience is intentional, energizing, and designed to fully support high-performing teams.</p>
<p><strong>What you’ll do</strong></p>
<ul>
<li>Help define and build key datasets across all Robinhood product areas.</li>
<li>Lead the evolution of these datasets as use cases grow.</li>
<li>Build scalable data pipelines using Python, Spark, and Airflow to move data from different applications into our data lake.</li>
<li>Partner with upstream engineering teams to enhance data generation patterns.</li>
<li>Partner with data consumers across Robinhood to understand consumption patterns and design intuitive data models.</li>
<li>Ideate and contribute to shared data engineering tooling and standards.</li>
<li>Define and promote data engineering best practices across the company.</li>
</ul>
<p><strong>What you bring</strong></p>
<ul>
<li>5+ years of professional experience building end-to-end data pipelines.</li>
<li>Hands-on software engineering experience, with the ability to write production-level code in Python for user-facing applications, services, or systems (not just data scripting or automation).</li>
<li>Expert at building and maintaining large-scale data pipelines using open source frameworks (Spark, Flink, etc).</li>
<li>Strong SQL (Presto, Spark SQL, etc) skills.</li>
<li>Experience solving problems across the data stack (Data Infrastructure, Analytics and Visualization platforms).</li>
<li>Expert collaborator with the ability to democratize data through actionable insights and solutions.</li>
</ul>
<p><strong>What we offer</strong></p>
<ul>
<li>Challenging, high-impact work to grow your career.</li>
<li>Performance-driven compensation with multipliers for outsized impact, bonus programs, equity ownership, and 401(k) matching.</li>
<li>Best-in-class benefits to fuel your work, including 100% paid health insurance for employees with 90% coverage for dependents.</li>
<li>Lifestyle wallet - a highly flexible benefits spending account for wellness, learning, and more.</li>
<li>Employer-paid life &amp; disability insurance, fertility benefits, and mental health benefits.</li>
<li>Time off to recharge including company holidays, paid time off, sick time, parental leave, and more!</li>
<li>Exceptional office experience with catered meals, events, and comfortable workspaces.</li>
</ul>
<p>In addition to the base pay range listed below, this role is also eligible for bonus opportunities + equity + benefits.</p>
<p>Base pay for the successful applicant will depend on a variety of job-related factors, which may include education, training, experience, location, business needs, or market demands.</p>
<p>The expected base pay range for this role is based on the location where the work will be performed and is aligned to one of 3 compensation zones.</p>
<p>For other locations not listed, compensation can be discussed with your recruiter during the interview process.</p>
<p>Base Pay Range:</p>
<p>Zone 1 (Menlo Park, CA; New York, NY; Bellevue, WA; Washington, DC): $196,000-$230,000 USD</p>
<p>Zone 2 (Denver, CO; Westlake, TX; Chicago, IL): $172,000-$202,000 USD</p>
<p>Zone 3 (Lake Mary, FL; Clearwater, FL; Gainesville, FL): $153,000-$179,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Zone 1 (Menlo Park, CA; New York, NY; Bellevue, WA; Washington, DC): $196,000-$230,000 USD; Zone 2 (Denver, CO; Westlake, TX; Chicago, IL): $172,000-$202,000 USD; Zone 3 (Lake Mary, FL; Clearwater, FL; Gainesville, FL): $153,000-$179,000 USD</Salaryrange>
      <Skills>Python, Spark, Airflow, SQL, Data Engineering, Data Pipelines, Data Modeling, Data Visualization</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Robinhood</Employername>
      <Employerlogo>https://logos.yubhub.co/robinhood.com.png</Employerlogo>
      <Employerdescription>Robinhood is a financial services company that provides commission-free trading and investing services.</Employerdescription>
      <Employerwebsite>https://www.robinhood.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/robinhood/jobs/4738660?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Menlo Park, CA</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>c1d78c04-4d5</externalid>
      <Title>Senior Backend Software Engineer - Infrastructure</Title>
      <Description><![CDATA[<p>A Senior Backend Software Engineer - Infrastructure at Palantir will contribute high-quality code to underpin Palantir Foundry and Gotham with performant, secure, and scalable building blocks, enabling products deployed to the most important institutions in the public and private sector.</p>
<p>The role involves building a performant search and indexing ecosystem for complex granularly permissioned data, contributing to open-source data processing libraries, and designing architecture and opinionated APIs to keep application developers on the happy path.</p>
<p>As a member of the infrastructure team, you will work on building the foundational capabilities that power our products used by research scientists, aerospace engineers, intelligence analysts, and economic forecasters, in countries around the world.</p>
<p>We&#39;re hiring engineers who are passionate about solving real-world problems and empowering both developers and end-users to work optimally. If you’re motivated to develop reliable, performant, and scalable systems, and to design robust APIs and primitives, this role offers the opportunity to make a significant impact on our products and the people who use them.</p>
<p>Foundry Software Engineers may be offered the opportunity to join Frontline, an exclusive program unlike any other. This unique, short-term assignment involves being embedded with customers, allowing you to work directly with users and gain firsthand insight into how our products are used and the challenges our customers face.</p>
<p>Some of our most successful products were built on the factory floor, addressing real-world problems for the world&#39;s most important institutions. These products were developed by some of our most successful product engineers, who began their careers in roles aligned with Frontline responsibilities, gaining a deep understanding of both our technology and our customers.</p>
<p>Core Responsibilities:</p>
<ul>
<li>Building a performant search and indexing ecosystem for complex granularly permissioned data</li>
<li>Contributing to open-source data processing libraries, integrating the latest innovations to achieve performance gains</li>
<li>Building the distributed systems that power large scale compute workloads, orchestrating and efficiently scheduling hundreds of thousands of containers every hour</li>
<li>Designing architecture and opinionated APIs to keep application developers on the happy path</li>
<li>Tracing and performance observability in high scale distributed microservice architectures</li>
<li>Building reliable, performant, and scalable systems for storage, auth, or asset serving to enable other product teams to build robust applications without deep domain expertise in the underlying systems</li>
<li>Automating the deployment, management, and operations of complex distributed systems like Cassandra, Elasticsearch, Kafka, and more across different environments</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Different backend languages, including Java, Rust, and Go</li>
<li>Open-source technologies like Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p>What We Value:</p>
<ul>
<li>Demonstrated ability to collaborate and empathize with a variety of individuals. Able to iterate with users and non-technical stakeholders and understand how technical decisions impact them.</li>
<li>Ability to learn new technology and concepts, even without in-depth experience. Experience developing and managing highly-available distributed systems is beneficial, but not required.</li>
<li>A bias towards quality and thoughtfulness about edge cases (“anything that can go wrong will go wrong”); you write code that is defensive against all possibilities.</li>
<li>Leading solutions and APIs with users in mind while maintaining a high engineering bar. Seeks to centralize and abstract complexity away from our users in order to expose simple, powerful APIs for consumers.</li>
<li>Active UK Security clearance, or eligibility and willingness to obtain a UK Security clearance is beneficial, but not necessary.</li>
</ul>
<p>What We Require:</p>
<ul>
<li>6+ years of experience designing, building, and operating scalable and reliable infrastructure systems in a production environment</li>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics or similar field.</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages.</li>
<li>Familiarity with storage and data processing systems, cloud infrastructure, and other technical tools.</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality.</li>
</ul>
<p>Additional Information:</p>
<p>Life at Palantir</p>
<p>We want every Palantirian to achieve their best outcomes; that’s why we celebrate individuals’ strengths, skills, and interests, from your first interview to your long-term growth, rather than rely on traditional career ladders. Paying attention to the needs of our community enables us to optimize our opportunities to grow and helps ensure many pathways to success at Palantir.</p>
<p>Promoting health and well-being across all areas of Palantirians’ lives is just one of the ways we’re investing in our community. Learn more at Life at Palantir and note that our offerings may vary by region.</p>
<p>In keeping with Palantir’s values and culture, we believe employees are “better together,” and in-person work affords the opportunity for more creative outcomes. We therefore encourage employees to work from our offices to foster connectivity and innovation. Many teams do offer hybrid options (WFH a day or two a week), allowing our employees to strike a balance.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Rust, Go, Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink, Gradle, GitHub, Computer Science, Mathematics, Software Engineering, Physics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/2cd25c0b-088d-4a5c-9b96-1165a33fe652?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>b660fd80-094</externalid>
      <Title>Senior Backend Software Engineer - Application Development</Title>
      <Description><![CDATA[<p>A Senior Backend Software Engineer at Palantir will lead the architecture, development, and maintenance of high-performance, scalable backend services that underpin our operational data and AI systems.</p>
<p>The role involves maintaining high coding standards, building robust APIs, designing efficient data structures and algorithms, and optimizing applications for speed and scalability.</p>
<p>As a Senior Backend Software Engineer, you will work collaboratively in teams of technical and non-technical individuals, understand how technical decisions impact the people who will use what you&#39;re building, and actively improve user workflows by collaborating with cross-functional teams.</p>
<p>Palantir uses various backend languages, including Java, Rust, Python, and Go, as well as distributed systems technologies such as Kafka, Cassandra, Elasticsearch, and Spark.</p>
<p>We value a deep understanding of server-side logic, efficient data handling, and distributed systems, as well as strong focus on creating user-oriented workflows and solutions.</p>
<p>To be successful in this role, you will require 6+ years of experience in designing, developing, and leading features and improvements, as well as supporting and maintaining live backend systems.</p>
<p>You will also need an in-depth understanding of data structures, system architecture, API development for microservices frameworks, distributed systems, and other backend-related concepts and best practices.</p>
<p>Additionally, you should have strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages, and strong written and verbal communication skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Rust, Python, Go, Kafka, Cassandra, Elasticsearch, Spark, API development, microservices frameworks, distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/a92b55d0-1d36-4884-8e65-f456450b3a74?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>58f76f9c-b33</externalid>
      <Title>Senior Backend Software Engineer - Application Development</Title>
      <Description><![CDATA[<p>A Senior Backend Software Engineer at Palantir builds software at scale to transform how organisations use data. You will collaborate closely with technical and non-technical teammates to understand our customers&#39; problems and build products that solve them. We encourage movement across teams to share context, skills, and experience, so you&#39;ll learn about many different technologies and aspects of each product. Engineers work autonomously and make decisions independently, within a community that will support and challenge you as you grow and develop, becoming a strong technical contributor and engineering leader.</p>
<p>Your day-to-day workflow will vary, adapting to the requirements of our users and the technical challenges that arise. One day, you may find yourself collaborating with other engineers to architect a new system that enables a novel workflow, the next you could be fine-tuning performance to enable low-latency operational outcomes.</p>
<p>We’re hiring engineers who are passionate about solving real-world problems and empowering both developers and end-users to work optimally. If you’re motivated to develop reliable, performant, and scalable systems, and to design robust APIs and primitives, this role offers the opportunity to make a significant impact on our products and the people who use them.</p>
<p>As a Senior Backend Software Engineer, you will lead the architecture, development, and maintenance of high-performance, scalable backend services that underpin our operational data and AI systems. You will maintain high coding standards through the development of guidelines, active participation in code reviews, and fostering a culture of continuous improvement and knowledge sharing among your team.</p>
<p>You will build robust APIs for use by front-end developers and external systems, and collaborate with front-end developers to integrate user-facing elements with server-side logic. You will design efficient data structures and algorithms to manage large-scale, high-throughput data, and optimize applications for speed and scalability through performance analysis.</p>
<p>Active US Security clearance, or eligibility and willingness to obtain a US Security clearance is beneficial but not necessary.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Java, Rust, Python, Go, Distributed systems technologies, Kafka, Cassandra, Elasticsearch, Spark, Docker, Kubernetes, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and locate missing children.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/7177acab-5c64-4005-9b28-93f33b3e172a?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>d4064caa-a92</externalid>
      <Title>Forward Deployed Software Engineer</Title>
      <Description><![CDATA[<p>A Forward Deployed Software Engineer (FDSE) at Palantir is responsible for designing and implementing end-to-end solutions to address customer pain points. They work closely with customers and colleagues to gather feedback and improve products through rapid iteration cycles. FDSEs deploy groundbreaking technical solutions to solve complex problems, leveraging Palantir products, open-source technologies, and custom-built tools.</p>
<p>As an FDSE, you will work with customers worldwide, gaining insights into various industries and institutions. You will help detect insider trading, improve disaster relief, fight healthcare fraud, and more. Each mission presents unique challenges, and you will work to accommodate all aspects of an environment to drive real technical outcomes for our customers.</p>
<p>Whether you aspire to be an entrepreneur or an engineering leader, Palantir believes it is the best place to learn how. You will learn how to unpack a problem, understand the costs and consequences of its solution, and develop new technologies and languages. You will work autonomously and make decisions independently, within a community that will support and challenge you as you grow and develop.</p>
<p><strong>Technologies We Use</strong></p>
<ul>
<li>Core Palantir products provide the foundations for our deployments.</li>
<li>Custom applications built on top of core Palantir platforms.</li>
<li>Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing.</li>
<li>Java and Groovy for our back-end applications and data integration tools.</li>
<li>Typescript, React, Leaflet, and d3 for our web technologies.</li>
<li>Python for data processing and analysis.</li>
<li>Palantir cloud infrastructure based on AWS EC2 and S3.</li>
</ul>
<p><strong>Our Principles</strong></p>
<ul>
<li>Impact: We take on meaningful and challenging projects that change the world for the better.</li>
<li>Ownership: We see projects through from beginning to end, working through any obstacles we may encounter.</li>
<li>Collaboration: We work internally with people from a variety of backgrounds and externally with our customers to understand and solve their problems.</li>
<li>Trust: We trust each other to effectively manage time and priorities and give people the space to think for themselves.</li>
<li>Growth: We encourage ourselves and our peers to seek new challenges and opportunities for growth.</li>
<li>Learning: We believe experiential learning is one of the best teachers.</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Active US Security Clearance at or above the Top Secret level, or willingness to obtain or upgrade to that level.</li>
<li>Strong engineering background, preferably in fields such as Computer Science, Mathematics, Software Engineering, or Physics.</li>
<li>Experience with logistics, materiel, sustainment, aviation, or readiness analysis is a plus.</li>
<li>Familiarity with data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools.</li>
<li>Understanding of how technical decisions impact the user of what you’re building.</li>
<li>Strong coder with demonstrated proficiency in programming languages such as Python, Java, C++, TypeScript/JavaScript, or similar.</li>
<li>Demonstrated ability to collaborate effectively in teams of technical and non-technical individuals, and comfortable working in a rapidly changing environment with dynamic objectives and iteration with users.</li>
<li>Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision.</li>
<li>Willingness and interest to travel as needed.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>The estimated salary range for this position is $135,000-$200,000/year. Total compensation for this position may also include Restricted Stock Units, a sign-on bonus, and other potential future incentives.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135,000-$200,000/year</Salaryrange>
      <Skills>Java, Groovy, Postgres, Cassandra, Hadoop, Spark, Typescript, React, Leaflet, d3, Python, AWS EC2, S3</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, serving various industries and institutions.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/e82b696e-a085-4bbf-8bcb-6d2c4f8cf2f7?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>426c576a-144</externalid>
      <Title>Forward Deployed Software Engineer - US Government</Title>
      <Description><![CDATA[<p>A Forward Deployed Software Engineer (FDSE) at Palantir works closely with customers to understand their pain points and design end-to-end solutions. They deploy groundbreaking technical solutions to solve customers&#39; hardest problems, leveraging Palantir products, open-source technologies, and custom-built tools.</p>
<p>The role involves working with customers globally, gaining insight into the world&#39;s most important industries and institutions. You will work to accommodate all aspects of an environment to drive real technical outcomes for our customers.</p>
<p>As an FDSE, you will:</p>
<ul>
<li>Work with customers to understand their pain points and design end-to-end solutions</li>
<li>Deploy groundbreaking technical solutions to solve customers&#39; hardest problems</li>
<li>Leverage Palantir products, open-source technologies, and custom-built tools</li>
<li>Collaborate with internal teams, including product teams and deployment strategists</li>
<li>Work with customers on-site to understand and solve their problems</li>
</ul>
<p>We are looking for individuals with a strong engineering background, preferably in fields such as Computer Science, Mathematics, Software Engineering, or Physics. Experience with logistics, materiel, sustainment, aviation, or readiness analysis is a plus. Familiarity with data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools is also desired.</p>
<p>We require active US Security clearance or eligibility and willingness to obtain a US Security clearance. You must be located in North Carolina and willing to travel to the Liberty ecosystem and Research Triangle, due to the nature and business needs of this role.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Core Palantir products, Custom applications, Postgres, Cassandra, Hadoop, Spark, Java, Groovy, Typescript, React, Leaflet, d3, Python, Palantir cloud infrastructure, Logistics, Materiel, Sustainment, Aviation, Readiness analysis, Data structures, Storage systems, Cloud infrastructure, Front-end frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, serving clients across various industries.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/d83fac1c-353e-4b77-a586-3276b1090b6e?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Fayetteville</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>9dd64963-5c2</externalid>
      <Title>Forward Deployed Software Engineer</Title>
      <Description><![CDATA[<p>A Forward Deployed Software Engineer (FDSE) at Palantir understands our customers&#39; greatest pain points and designs end-to-end solutions to address them. They solicit constant feedback on their work from both customers and colleagues, improving our products over time with rapid iteration cycles. FDSEs deploy groundbreaking technical solutions to solve our customers&#39; hardest problems.</p>
<p>Projects often start with a nebulous question like &#39;Why are we losing customers?&#39; or &#39;How can we more effectively identify instances of money laundering?&#39; FDSEs lead the way in developing a solution, from high-level system design and prototyping to application development and data integration. As an FDSE, you leverage everything around you: Palantir products, open source technologies, and anything you and your team can build to drive real impact.</p>
<p>You work with customers around the globe, where you gain rare insight into the world&#39;s most important industries and institutions. We help our customers detect insider trading, improve disaster relief, fight healthcare fraud, and more. Each mission presents different challenges, from the regulatory environment to the nature of the data to the user population. You will work to accommodate all aspects of an environment to drive real technical outcomes for our customers.</p>
<p>Whether you aspire to be an entrepreneur or an engineering leader, we believe Palantir is the best place, with the best colleagues, to learn how. You&#39;ll learn how to unpack a problem and understand the costs and consequences of its solution. You&#39;ll learn new technologies and languages, and even develop them yourself. You&#39;ll work autonomously and make decisions independently, within a community that will support and challenge you as you grow and develop.</p>
<p><strong>Technologies We Use</strong></p>
<ul>
<li>Core Palantir products provide the foundations for our deployments.</li>
<li>Custom applications built on top of core Palantir platforms.</li>
<li>Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing.</li>
<li>Java and Groovy for our back-end applications and data integration tools.</li>
<li>Typescript, React, Leaflet, and d3 for our web technologies.</li>
<li>Python for data processing and analysis.</li>
<li>Palantir cloud infrastructure based on AWS EC2 and S3.</li>
</ul>
<p><strong>Our Principles</strong></p>
<ul>
<li>Impact: We take on meaningful and challenging projects that change the world for the better.</li>
<li>Dedication: We see projects through from beginning to end in spite of obstacles we may encounter.</li>
<li>Collaboration: We work internally with people from a variety of backgrounds, such as other FDSEs, product teams, and Deployment Strategists. We also work externally with our customers, often on site, to understand and solve their problems.</li>
<li>Trust: We trust each other to effectively manage time and priorities; we don&#39;t micromanage. We want to give people the space to think for themselves.</li>
<li>Growth: We push ourselves and our peers to improve themselves and the world around them.</li>
<li>Learning: We often face entirely novel problems, where we need to pick up a lot of new information and learn how to use it to make progress.</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Strong engineering background, preferably in fields such as Computer Science, Mathematics, Software Engineering, or Physics.</li>
<li>Experience with logistics, materiel, sustainment, aviation, or readiness analysis is a plus.</li>
<li>Familiarity with data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools.</li>
<li>Understanding of how technical decisions impact the user of what you&#39;re building.</li>
<li>Strong coder with demonstrated proficiency in programming languages such as Python, Java, C++, TypeScript/JavaScript, or similar.</li>
<li>Demonstrated ability to collaborate effectively in teams of technical and non-technical individuals, and comfortable working in a rapidly changing environment with dynamic objectives and iteration with users.</li>
<li>Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision.</li>
<li>Willingness and interest to travel as needed.</li>
</ul>
<p><strong>What We Require</strong></p>
<ul>
<li>Active US Security Clearance at or above the Top Secret level</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$125,000-$200,000/year</Salaryrange>
      <Skills>Core Palantir products, Custom applications, Postgres, Cassandra, Hadoop, Spark, Java, Groovy, Typescript, React, Leaflet, d3, Python, Palantir cloud infrastructure, AWS EC2, S3, Computer Science, Mathematics, Software Engineering, Physics, Logistics, Materiel, Sustainment, Aviation, Readiness analysis, Data structures, Storage systems, Cloud infrastructure, Front-end frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/ce8ca664-60dc-4f9a-8986-3c96673bcfdf?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Honolulu</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>0e24d94f-292</externalid>
      <Title>Forward Deployed Software Engineer - US Government</Title>
      <Description><![CDATA[<p>A Forward Deployed Software Engineer (FDSE) at Palantir will work with customers around the globe to develop end-to-end solutions to address their greatest pain points. As an FDSE, you will design and implement technical solutions to solve complex problems, leveraging Palantir products, open source technologies, and other tools. You will work closely with customers and colleagues to gather feedback and improve products over time with rapid iteration cycles.</p>
<p>Projects may start with a nebulous question like &#39;Why are we losing customers?&#39; or &#39;How can we more effectively identify instances of money laundering?&#39; You will lead the way in developing a solution, from high-level system design and prototyping to application development and data integration.</p>
<p>As an FDSE, you will have the opportunity to work on a wide range of projects, from detecting insider trading to improving disaster relief, and from fighting healthcare fraud to more effectively identifying instances of money laundering. Each mission presents different challenges, from the regulatory environment to the nature of the data to the user population.</p>
<p>You will work to accommodate all aspects of an environment to drive real technical outcomes for our customers. Whether you aspire to be an entrepreneur or an engineering leader, we believe Palantir is the best place, with the best colleagues, to learn how.</p>
<p>You&#39;ll learn how to unpack a problem and understand the costs and consequences of its solution. You&#39;ll learn new technologies and languages, and even develop them yourself. You&#39;ll work autonomously and make decisions independently, within a community that will support and challenge you as you grow and develop.</p>
<p><strong>Technologies We Use</strong></p>
<ul>
<li>Core Palantir products provide the foundations for our deployments.</li>
<li>Custom applications built on top of core Palantir platforms.</li>
<li>Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing.</li>
<li>Java and Groovy for our back-end applications and data integration tools.</li>
<li>Typescript, React, Leaflet, and d3 for our web technologies.</li>
<li>Python for data processing and analysis.</li>
<li>Palantir cloud infrastructure based on AWS EC2 and S3.</li>
</ul>
<p><strong>Our Principles</strong></p>
<ul>
<li>Impact: We take on meaningful and challenging projects that change the world for the better.</li>
<li>Ownership: We see projects through from beginning to end, working through any obstacles we may encounter.</li>
<li>Collaboration: We work internally with people from a variety of backgrounds, such as other FDSEs, product teams, and Deployment Strategists. We also work externally with our customers, often on site, to understand and solve their problems.</li>
<li>Trust: We trust each other to effectively manage time and priorities and give people the space to think for themselves.</li>
<li>Growth: We encourage ourselves and our peers to seek new challenges and opportunities for growth, as well as find new ways to innovate and share knowledge.</li>
<li>Learning: We often face entirely novel problems where we need to pick up a lot of new knowledge and learn how to use it to make progress. We believe experiential learning is one of the best teachers.</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Active US Security clearance, or eligibility and willingness to obtain a US Security clearance.</li>
<li>Strong engineering background, preferably in fields such as Computer Science, Mathematics, Software Engineering, or Physics.</li>
<li>Experience with logistics, materiel, sustainment, aviation, or readiness analysis is a plus.</li>
<li>Familiarity with data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools.</li>
<li>Understanding of how technical decisions impact the user of what you’re building.</li>
<li>Strong coder with demonstrated proficiency in programming languages such as Python, Java, C++, TypeScript/JavaScript, or similar.</li>
<li>Demonstrated ability to collaborate effectively in teams of technical and non-technical individuals, and comfortable working in a rapidly changing environment with dynamic objectives and iteration with users.</li>
<li>Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision.</li>
<li>Willingness and interest to travel as needed.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>The estimated salary range for this position is $135,000 - $200,000/year. Total compensation for this position may also include Restricted Stock Units, a sign-on bonus, and other potential future incentives.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Python, Java, C++, TypeScript/JavaScript, Palantir products, Postgres, Cassandra, Hadoop, Spark, React, Leaflet, d3, AWS EC2, S3</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds the world&apos;s leading software for data-driven decisions and operations.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/84131e3f-455e-47fc-9c11-898d95f09048?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Diego</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>b18fe215-a47</externalid>
      <Title>Forward Deployed Software Engineer - Korea Forward Deployed</Title>
      <Description><![CDATA[<p>As a Forward Deployed Software Engineer, you will work on-site in Seoul, South Korea, supporting US Government work.</p>
<p>You will design end-to-end solutions to address customers&#39; greatest pain points, soliciting constant feedback from customers and colleagues to improve products over time with rapid iteration cycles.</p>
<p>You will deploy groundbreaking technical solutions to solve customers&#39; hardest problems, from high-level system design and prototyping to application development and data integration.</p>
<p>You will leverage Palantir products, open source technologies, and anything you and your team can build to drive real impact.</p>
<p>You will work with customers around the globe, gaining rare insight into the world&#39;s most important industries and institutions.</p>
<p>You will help customers detect insider trading, improve disaster relief, fight healthcare fraud, and more.</p>
<p>Each mission presents different challenges, from the regulatory environment to the nature of the data to the user population.</p>
<p>You will work to accommodate all aspects of an environment to drive real technical outcomes for our customers.</p>
<p>Whether you aspire to be an entrepreneur or an engineering leader, we believe Palantir is the best place, with the best colleagues, to learn how.</p>
<p>You&#39;ll learn how to unpack a problem and understand the costs and consequences of its solution.</p>
<p>You&#39;ll learn new technologies and languages, and even develop them yourself.</p>
<p>You&#39;ll work autonomously and make decisions independently, within a community that will support and challenge you as you grow and develop.</p>
<p><strong>Technologies We Use</strong></p>
<ul>
<li>Core Palantir products provide the foundations for our deployments.</li>
<li>Custom applications built on top of core Palantir platforms.</li>
<li>Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing.</li>
<li>Java and Groovy for our back-end applications and data integration tools.</li>
<li>Typescript, React, Leaflet, and d3 for our web technologies.</li>
<li>Python for data processing and analysis.</li>
<li>Palantir cloud infrastructure based on AWS EC2 and S3.</li>
</ul>
<p><strong>Our Principles</strong></p>
<ul>
<li>Impact: We take on meaningful and challenging projects that change the world for the better.</li>
<li>Dedication: We see projects through from beginning to end in spite of obstacles we may encounter.</li>
<li>Collaboration: We work internally with people from a variety of backgrounds, such as other FDSEs, product teams, and Deployment Strategists.</li>
<li>Trust: We trust each other to effectively manage time and priorities; we don&#39;t micromanage.</li>
<li>Growth: We push ourselves and our peers to improve themselves and the world around them.</li>
<li>Learning: We often face entirely novel problems, where we need to pick up a lot of new information and learn how to use it to make progress.</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Strong engineering background, preferably in fields such as Computer Science, Mathematics, Software Engineering, or Physics.</li>
<li>Experience with logistics, materiel, sustainment, aviation, or readiness analysis is a plus.</li>
<li>Familiarity with data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools.</li>
<li>Understanding of how technical decisions impact the user of what you&#39;re building.</li>
<li>Strong coder with demonstrated proficiency in programming languages such as Python, Java, C++, TypeScript/JavaScript, or similar.</li>
<li>Demonstrated ability to collaborate effectively in teams of technical and non-technical individuals.</li>
<li>Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.</li>
<li>Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision.</li>
<li>Willingness and interest to travel within country as needed.</li>
</ul>
<p><strong>What We Require</strong></p>
<ul>
<li>Active Top Secret Clearance with eligibility and willingness to obtain SCI.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Core Palantir products, Custom applications, Postgres, Cassandra, Hadoop, Spark, Java, Groovy, TypeScript, React, Leaflet, d3, Python, Palantir cloud infrastructure, AWS EC2, S3, Data structures, Storage systems, Cloud infrastructure, Front-end frameworks, Technical tools</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, providing platforms that empower partners to develop lifesaving drugs, forecast supply chain disruptions, locate missing children, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/a39bf84c-6648-4871-bd07-9b882d401c4c?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Seoul</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>bec56bd8-a83</externalid>
      <Title>Forward Deployed Enablement Engineer - Customer Success</Title>
      <Description><![CDATA[<p>A Forward Deployed Enablement Engineer is embedded with centralized customer success teams to maximize the outcomes of Palantir&#39;s deployed products and workflows across all commercial customers. This role balances high-level support responsibilities with the development of innovative tooling and infrastructure to scale customer enablement effectively.</p>
<p>As a resourceful, gritty, and adaptable problem solver, you will work collaboratively and independently to resolve difficult and nebulous technical issues, as well as work productively with external customers to debug and resolve their problems.</p>
<p>In this role, you&#39;ll leverage your problem-solving abilities, creativity, and technical skills to support and guide customer development teams, ensuring they can effectively build and optimize their workflows.</p>
<p>Responsibilities:</p>
<ul>
<li>Respond to and triage incoming support requests from both internal and external teams.</li>
<li>Provide guidance to customer engineers and unblock their workflows on Palantir&#39;s platform.</li>
<li>Build with Foundry and AIP to enhance enablement operations and use your creativity to solve novel problems that deliver more value to internal and external stakeholders.</li>
<li>Collaborate with our Product Development teams to tackle workflow problems, resolve product issues, and inform product improvements.</li>
<li>Partner with Business teams to understand and meet customer needs.</li>
<li>Participate in a 24/7 on-call rotation responsible for coordinating responses to critical customer-facing incidents.</li>
<li>Take ownership as the first responder when things go wrong.</li>
</ul>
<p>What We Value:</p>
<ul>
<li>Ability to continuously learn and work independently, making decisions with minimal supervision.</li>
<li>Excellent written and verbal communication skills, capable of interacting effectively with both technical and non-technical stakeholders.</li>
<li>Strong creative problem-solving skills.</li>
<li>Comfortable working in a rapidly changing environment with dynamic objectives and iteration with users.</li>
<li>Proficiency with one or more programming languages or data engineering frameworks, such as Python, SQL, Spark, Java, C++, TypeScript/JavaScript, or similar.</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Background in Data Science, Computer Science, Engineering, or STEM, or equivalent relevant practical experience in a technical or engineering role.</li>
</ul>
<p>Salary: The estimated salary range for this position is $110,000 - $147,000/year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$110,000 - $147,000/year</Salaryrange>
      <Skills>Python, SQL, Spark, Java, C++, TypeScript/JavaScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, serving commercial customers worldwide.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/a90eb029-19cc-413c-bd51-b8411053d7d4?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>72076235-cee</externalid>
      <Title>Forward Deployed Enablement Engineer - Customer Success</Title>
      <Description><![CDATA[<p>A Forward Deployed Enablement Engineer is embedded with centralized customer success teams to maximize the outcomes of Palantir&#39;s deployed products and workflows across all commercial customers. This role balances high-level support responsibilities with the development of innovative tooling and infrastructure to scale customer enablement effectively.</p>
<p>You are a resourceful, gritty, and adaptable problem solver who is able to work both collaboratively and independently to resolve difficult and nebulous technical issues, as well as work productively with external customers to debug and resolve their problems.</p>
<p>In this role, you&#39;ll leverage your problem-solving abilities, creativity, and technical skills to support and guide customer development teams, ensuring they can effectively build and optimize their workflows.</p>
<p>You&#39;ll have the opportunity to gain rare insight into and contribute to some of the world&#39;s most important industries and institutions.</p>
<p>Every day at Palantir is different: we&#39;re constantly evolving to better respond to customer needs, and you will have the opportunity to contribute your creativity and problem-solving to internal processes and tools that define how we deliver business value to the customer with increasing efficacy and efficiency.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Respond to and triage incoming support requests from both internal and external teams.</li>
<li>Provide guidance to customer engineers and unblock their workflows on Palantir&#39;s platform.</li>
<li>Build with Foundry and AIP to enhance enablement operations and use your creativity to solve novel problems that deliver more value to internal and external stakeholders.</li>
<li>Collaborate with our Product Development teams to tackle workflow problems, resolve product issues, and inform product improvements.</li>
<li>Partner with Business teams to understand and meet customer needs.</li>
<li>Participate in a 24/7 on-call rotation responsible for coordinating responses to critical customer-facing incidents.</li>
<li>Take ownership as the first responder when things go wrong.</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Ability to continuously learn and work independently, making decisions with minimal supervision.</li>
<li>Excellent written and verbal communication skills, capable of interacting effectively with both technical and non-technical stakeholders.</li>
<li>Strong creative problem-solving skills.</li>
<li>Comfortable working in a rapidly changing environment with dynamic objectives and iteration with users.</li>
<li>Proficiency with one or more programming languages or data engineering frameworks, such as Python, SQL, Spark, Java, C++, TypeScript/JavaScript, or similar.</li>
</ul>
<p><strong>What We Require</strong></p>
<ul>
<li>Background in Data Science, Computer Science, Engineering, or STEM, or equivalent relevant practical experience in a technical or engineering role.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>The estimated salary range for this position is $110,000 - $147,000/year. Total compensation for this position may also include restricted stock units, a sign-on bonus, and other potential future incentives.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$110,000 - $147,000/year</Salaryrange>
      <Skills>Python, SQL, Spark, Java, C++, TypeScript/JavaScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/4cba9c95-d16f-440d-83e7-2352480f689f?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>030dfc36-14f</externalid>
      <Title>Forward Deployed Enablement Engineer - Customer Success</Title>
      <Description><![CDATA[<p>A Forward Deployed Enablement Engineer is embedded with centralised customer success teams to maximise the outcomes of Palantir&#39;s deployed products and workflows across all commercial customers. This role balances high-level support responsibilities with the development of innovative tooling and infrastructure to scale customer enablement effectively.</p>
<p>Responsibilities:</p>
<ul>
<li>Respond to and triage incoming support requests from both internal and external teams.</li>
<li>Provide guidance to customer engineers and unblock their workflows on Palantir&#39;s platform.</li>
<li>Build with Foundry and AIP to enhance enablement operations and use creativity to solve novel problems that deliver more value to internal and external stakeholders.</li>
<li>Collaborate with Product Development teams to tackle workflow problems, resolve product issues, and inform product improvements.</li>
<li>Partner with Business teams to understand and meet customer needs.</li>
<li>Participate in a 24/7 on-call rotation responsible for coordinating responses to critical customer-facing incidents.</li>
</ul>
<p>What We Value:</p>
<ul>
<li>Ability to continuously learn and work independently, making decisions with minimal supervision.</li>
<li>Excellent written and verbal communication skills, capable of interacting effectively with both technical and non-technical stakeholders.</li>
<li>Strong creative problem-solving skills.</li>
<li>Comfortable working in a rapidly changing environment with dynamic objectives and iteration with users.</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Background in Data Science, Computer Science, Engineering or STEM, or equivalent relevant practical experience in a technical or engineering role.</li>
</ul>
<p>Additional Information:</p>
<ul>
<li>Life at Palantir: We want every Palantirian to achieve their best outcomes; that&#39;s why we celebrate individuals&#39; strengths, skills, and interests, from your first interview to your long-term growth, rather than rely on traditional career ladders.</li>
<li>Paying attention to the needs of our community enables us to optimize our opportunities to grow and helps ensure many pathways to success at Palantir.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, SQL, Spark, Java, C++, TypeScript/JavaScript</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/00c2c97b-8514-4617-9883-e53e486b6dcd?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>8a1c0b7d-ba9</externalid>
      <Title>Senior Staff Software Engineer, Data Platform</Title>
      <Description><![CDATA[<p>Join us in building the future of finance.</p>
<p>Our mission is to democratize finance for all.</p>
<p>An estimated $124 trillion of assets will be inherited by younger generations over the next two decades, the largest transfer of wealth in human history.</p>
<p>If you’re ready to be at the epicenter of this historic cultural and financial shift, keep reading.</p>
<p><strong>About the team + role</strong></p>
<p>We are building an elite team, applying frontier technologies to the world’s biggest financial problems. We’re looking for bold thinkers. Sharp problem-solvers. Builders who are wired to make an impact.</p>
<p>Robinhood isn’t a place for complacency; it’s where ambitious people do the best work of their careers.</p>
<p>We’re a high-performing, fast-moving team with ethics at the center of everything we do. Expectations are high, and so are the rewards.</p>
<p>The Data Platform organization builds and operates the systems that power how data is stored, moved, and consumed across Robinhood.</p>
<p>This organization spans three core pillars: Storage (Postgres, DynamoDB, and caching systems), Streaming (real-time event infrastructure), and Data Lake (ingestion and compute systems built on Delta Lake).</p>
<p>Together, these platforms support transactional workloads, real-time data processing, and large-scale analytics that are critical to Robinhood’s products and operations.</p>
<p>The team owns the full lifecycle of data, from low-latency order path systems to near real-time and batch analytics, serving millions of users and internal teams across the company.</p>
<p>As a Senior Staff Software Engineer, you will serve as the technical lead across the Data Platform organization, shaping architecture and guiding execution across multiple teams.</p>
<p>You’ll work on complex distributed systems challenges such as database sharding and proxy architectures, real-time streaming and CDC systems, and large-scale data ingestion and compute platforms.</p>
<p>You’ll define and drive key technical bets, partner with engineering leaders to align platform capabilities with business needs, and lead 0→1 initiatives that introduce new capabilities across storage, streaming, and data systems.</p>
<p>This is a rare opportunity to influence multiple critical systems at once while raising the technical bar across an entire organization!</p>
<p><strong>What you’ll do</strong></p>
<ul>
<li>Lead architectural direction across storage, streaming, and data lake platforms, connecting systems that handle transactional, real-time, and analytical workloads.</li>
<li>Design and guide implementation of distributed systems, including database sharding, proxy-based query routing, and real-time event processing pipelines.</li>
<li>Improve data freshness and latency by evolving streaming and ingestion systems toward near real-time processing goals.</li>
<li>Partner with engineering leaders and teams across Robinhood to define platform strategy, align roadmaps, and ensure systems meet reliability, scalability, and performance requirements.</li>
<li>Drive 0→1 initiatives that introduce new platform capabilities, including next-generation streaming, CDC, and data processing systems.</li>
</ul>
<p><strong>What you bring</strong></p>
<ul>
<li>Extensive experience building and scaling distributed systems, with deep expertise in at least two of the following areas: storage systems, streaming platforms, or data lake / large-scale data processing.</li>
<li>Strong understanding of database systems such as PostgreSQL and/or DynamoDB, including replication, sharding, and performance optimization.</li>
<li>Experience with streaming and event-driven architectures using technologies such as Kafka, Flink, or similar systems.</li>
<li>Familiarity with modern data platforms and compute engines such as Spark, Delta Lake, or equivalent large-scale data processing systems.</li>
<li>Proven ability to lead complex technical initiatives, define long-term architecture, and collaborate across multiple teams.</li>
</ul>
<p><strong>What we offer</strong></p>
<ul>
<li>Challenging, high-impact work to grow your career.</li>
<li>Performance-driven compensation with multipliers for outsized impact, bonus programs, equity ownership, and 401(k) matching.</li>
<li>Best-in-class benefits to fuel your work, including 100% paid health insurance for employees with 90% coverage for dependents.</li>
<li>Lifestyle wallet: a highly flexible benefits spending account for wellness, learning, and more.</li>
<li>Employer-paid life &amp; disability insurance, fertility benefits, and mental health benefits.</li>
<li>Time off to recharge, including company holidays, paid time off, sick time, parental leave, and more!</li>
<li>Exceptional office experience with catered meals, events, and comfortable workspaces.</li>
</ul>
<p><strong>In addition to the base pay range listed below, this role is also eligible for bonus opportunities + equity + benefits.</strong></p>
<p>Base pay for the successful applicant will depend on a variety of job-related factors, which may include education, training, experience, location, business needs, or market demands.</p>
<p>The expected base pay range for this role is based on the location where the work will be performed and is aligned to one of 3 compensation zones.</p>
<p>For other locations not listed, compensation can be discussed with your recruiter during the interview process.</p>
<p>Base Pay Range:</p>
<p>Zone 1 (Menlo Park, CA; New York, NY; Bellevue, WA; Washington, DC): $264,000-$310,000 USD</p>
<p>Zone 2 (Denver, CO; Westlake, TX; Chicago, IL): $264,000-$310,000 USD</p>
<p>Zone 3 (Lake Mary, FL; Clearwater, FL; Gainesville, FL): $264,000-$310,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$264,000-$310,000 USD</Salaryrange>
      <Skills>distributed systems, database systems, streaming platforms, data lake / large-scale data processing, PostgreSQL, DynamoDB, Kafka, Flink, Spark, Delta Lake</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Robinhood</Employername>
      <Employerlogo>https://logos.yubhub.co/robinhood.com.png</Employerlogo>
      <Employerdescription>Robinhood is a financial services company that provides a mobile app for buying and selling stocks, options, ETFs, and cryptocurrencies.</Employerdescription>
      <Employerwebsite>https://www.robinhood.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/robinhood/jobs/7729014?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>81d19b3d-95d</externalid>
      <Title>Backend Software Engineer - Infrastructure, Foundations</Title>
      <Description><![CDATA[<p>A Backend Software Engineer at Palantir builds software at scale to transform how organisations use data. Collaborate closely with technical and non-technical teammates to understand customer problems and build products that solve them. Contribute high-quality code to underpin Palantir Foundry and Gotham with performant, secure, and scalable building blocks.</p>
<p><strong>Core Responsibilities</strong></p>
<ul>
<li>Building a performant search and indexing ecosystem for complex granularly permissioned data</li>
<li>Contributing to open-source data processing libraries, integrating the latest innovations to achieve performance gains</li>
<li>Building the distributed systems that power large scale compute workloads, orchestrating and efficiently scheduling hundreds of thousands of containers every hour</li>
<li>Designing architecture and opinionated APIs to keep application developers on the happy path</li>
<li>Tracing and performance observability in high scale distributed microservice architectures</li>
<li>Building reliable, performant, and scalable systems for storage, auth, or asset serving to enable other product teams to build robust applications without deep domain expertise in the underlying systems</li>
<li>Automating the deployment, management, and operations of complex distributed systems like Cassandra, Elasticsearch, Kafka, and more across different environments</li>
</ul>
<p><strong>Technologies We Use</strong></p>
<ul>
<li>Different backend languages, including Java, Rust, and Go</li>
<li>Open-source technologies like Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Demonstrated ability to collaborate and empathize with a variety of individuals</li>
<li>Ability to learn new technology and concepts, even without in-depth experience</li>
<li>Bias towards quality and thoughtful about edge cases</li>
<li>Builds solutions and APIs with users in mind while maintaining a high engineering bar</li>
</ul>
<p><strong>What We Require</strong></p>
<ul>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics or similar field</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages</li>
<li>Familiarity with storage and data processing systems, cloud infrastructure, and other technical tools</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>The estimated salary range for this position is $135,000 - $200,000/year. Total compensation for this position may also include restricted stock units, a sign-on bonus, and other potential future incentives.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Java, Rust, Go, Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/fb2d3222-dbd8-4e03-8d39-47b820e9509c?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>f913e2d7-8d5</externalid>
      <Title>Backend Software Engineer - Infrastructure</Title>
      <Description><![CDATA[<p>A Backend Software Engineer at Palantir builds software at scale to transform how organisations use data. You will collaborate closely with technical and non-technical teammates to understand our customers&#39; problems and build products that solve them.</p>
<p>Our Software Engineers are involved throughout the product lifecycle, from idea generation, design, prototyping, and production delivery. We encourage movement across teams to share context, skills, and experience, so you&#39;ll learn about many different technologies and aspects of each product.</p>
<p>As a Software Engineer on infrastructure, you&#39;ll contribute high-quality code to underpin Palantir Foundry and Gotham with performant, secure, and scalable building blocks, enabling products deployed to the most important institutions in the public and private sector.</p>
<p>We&#39;re hiring engineers who are passionate about solving real-world problems and empowering both developers and end-users to work optimally. If you&#39;re motivated to develop reliable, performant, and scalable systems, and to design robust APIs and primitives, this role offers the opportunity to make a significant impact on our products and the people who use them.</p>
<p><strong>Core Responsibilities</strong></p>
<ul>
<li>Building a performant search and indexing ecosystem for complex granularly permissioned data</li>
<li>Contributing to open-source data processing libraries, integrating the latest innovations to achieve performance gains</li>
<li>Building the distributed systems that power large scale compute workloads, orchestrating and efficiently scheduling hundreds of thousands of containers every hour</li>
<li>Designing architecture and opinionated APIs to keep application developers on the happy path</li>
<li>Tracing and performance observability in high scale distributed microservice architectures</li>
<li>Building reliable, performant, and scalable systems for storage, auth, or asset serving to enable other product teams to build robust applications without deep domain expertise in the underlying systems</li>
<li>Automating the deployment, management, and operations of complex distributed systems like Cassandra, Elasticsearch, Kafka, and more across different environments</li>
</ul>
<p><strong>Technologies We Use</strong></p>
<ul>
<li>Different backend languages, including Java, Rust, and Go</li>
<li>Open-source technologies like Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p><strong>What We Value</strong></p>
<ul>
<li>Demonstrated ability to collaborate and empathise with a variety of individuals. Able to iterate with users and non-technical stakeholders and understand how technical decisions impact them.</li>
<li>Ability to learn new technology and concepts, even without in-depth experience. Experience developing and managing highly-available distributed systems is beneficial, but not required.</li>
<li>Bias towards quality and thoughtful about edge cases (“anything that can go wrong will go wrong”); writes code that is defensive against all possibilities.</li>
<li>Builds solutions and APIs with users in mind while maintaining a high engineering bar. Seeks to centralise and abstract complexity away from our users in order to expose simple, powerful APIs for consumers.</li>
<li>Active UK Security clearance, or eligibility and willingness to obtain a UK Security clearance is beneficial, but not necessary.</li>
</ul>
<p><strong>What We Require</strong></p>
<ul>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics or similar field.</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages.</li>
<li>Familiarity with storage and data processing systems, cloud infrastructure, and other technical tools.</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality.</li>
</ul>
<p><strong>Additional Information</strong></p>
<p>Life at Palantir</p>
<p>We want every Palantirian to achieve their best outcomes; that’s why we celebrate individuals’ strengths, skills, and interests, from your first interview to your long-term growth, rather than relying on traditional career ladders. Paying attention to the needs of our community enables us to optimize our opportunities to grow and helps ensure many pathways to success at Palantir.</p>
<p>Promoting health and well-being across all areas of Palantirians’ lives is just one of the ways we’re investing in our community. Learn more at Life at Palantir and note that our offerings may vary by region.</p>
<p>In keeping with Palantir’s values and culture, we believe employees are “better together”, and in-person work affords the opportunity for more creative outcomes. Therefore, we encourage employees to work from our offices to foster connectivity and innovation. Many teams do offer hybrid options (WFH a day or two a week), allowing our employees to strike the right trade-off for their personal productivity. Based on business need, there are a few roles that allow for “Remote” work on an exceptional basis.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Rust, Go, Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations. It empowers partners to develop lifesaving drugs, forecast supply chain disruptions, and locate missing children.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/f70cdff7-c62f-4b73-a136-909e5e3d1891?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>ce058b80-935</externalid>
      <Title>Backend Software Engineer - Infrastructure</Title>
<Description><![CDATA[<p>A Backend Software Engineer - Infrastructure at Palantir will contribute high-quality code to underpin Palantir Foundry and Gotham with performant, secure, and scalable building blocks. You&#39;ll build the foundational capabilities that power our products used by research scientists, aerospace engineers, intelligence analysts, and economic forecasters in countries around the world.</p>
<p>As a Software Engineer on infrastructure working on our Foundry platform, you&#39;ll collaborate closely with technical and non-technical teammates to understand our customers&#39; problems and build products that solve them. You&#39;ll work autonomously and make decisions independently, within a community that will support and challenge you as you grow and develop, becoming a strong technical contributor and engineering leader.</p>
<p>Some of the key responsibilities of this role include:</p>
<ul>
<li>Building a performant search and indexing ecosystem for complex granularly permissioned data</li>
<li>Contributing to open-source data processing libraries, integrating the latest innovations to achieve performance gains</li>
<li>Building the distributed systems that power large scale compute workloads, orchestrating and efficiently scheduling hundreds of thousands of containers every hour</li>
<li>Designing architecture and opinionated APIs to keep application developers on the happy path</li>
<li>Tracing and performance observability in high scale distributed microservice architectures</li>
<li>Building reliable, performant, and scalable systems for storage, auth, or asset serving to enable other product teams to build robust applications without deep domain expertise in the underlying systems</li>
<li>Automating the deployment, management, and operations of complex distributed systems like Cassandra, Elasticsearch, Kafka, and more across different environments</li>
</ul>
<p>We&#39;re looking for engineers who are passionate about solving real-world problems and empowering both developers and end-users to work optimally. If you&#39;re motivated to develop reliable, performant, and scalable systems, and to design robust APIs and primitives, this role offers the opportunity to make a significant impact on our products and the people who use them.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Java, Rust, Go, Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/6fe5515f-f677-4d98-8ac2-1775a425f5e7?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>da1286db-282</externalid>
      <Title>Backend Software Engineer - Defense</Title>
      <Description><![CDATA[<p>A Backend Software Engineer at Palantir will work on building software at scale to transform how organisations use data. The role involves collaborating closely with technical and non-technical teammates to understand customer problems and build products that solve them.</p>
<p>Responsibilities:</p>
<ul>
<li>Architecting, developing, and maintaining high-performance, scalable backend services that underpin our operational data and AI systems</li>
<li>Maintaining high coding standards through the development of guidelines, active participation in code reviews, and fostering a culture of continuous improvement and knowledge sharing among your team</li>
<li>Building robust APIs for use by front-end developers and interfacing external systems, and collaborating with front-end developers to integrate user-facing elements with server-side logic</li>
<li>Designing efficient data structures and algorithms to manage large-scale and high throughput data</li>
<li>Optimizing applications for speed and scalability through performance analysis</li>
<li>Actively improving user workflows by collaborating with cross-functional teams, ensuring seamless experiences across product boundaries and a cohesive user experience</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Different backend languages, including Java, Rust, Python, and Go</li>
<li>Distributed systems technologies such as Kafka, Cassandra, Elasticsearch, and Spark</li>
<li>Docker and Kubernetes for containerization and orchestration</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p>What We Value:</p>
<ul>
<li>A deep understanding of server-side logic, efficient data handling, and distributed systems</li>
<li>Strong focus on creating user-oriented workflows and solutions, crossing product boundaries to deliver cohesive and solid user workflows that ensure a seamless and intuitive user experience</li>
<li>Experience building high-quality software in a fast-paced CI/CD development environment</li>
<li>Ability to work collaboratively in teams of technical and non-technical individuals and understand how technical decisions impact the people who will use what you&#39;re building</li>
<li>Skill and comfort working in a constantly evolving environment with dynamic objectives and iteration with users</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Experience in designing and developing features and improvements, as well as supporting and maintaining live backend systems</li>
<li>In-depth understanding of data structures, system architecture, API development for microservices frameworks, distributed systems, and other backend-related concepts and best practices</li>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics, or similar field</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality</li>
<li>Eligibility and willingness to obtain a US Security clearance</li>
</ul>
<p>Additional Information: The estimated salary range for this position is $135,000 - $200,000/year. Total compensation for this position may also include Restricted Stock Units, a sign-on bonus, and other potential future incentives.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Java, Rust, Python, Go, Kafka, Cassandra, Elasticsearch, Spark, Docker, Kubernetes, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, serving customers across various industries.</Employerdescription>
      <Employerwebsite>https://www.palantir.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/d33e0c31-ac7e-4f57-ba74-36f2df6ae2f5?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>6bc82374-dad</externalid>
      <Title>Backend Software Engineer - Defense</Title>
      <Description><![CDATA[<p>A Backend Software Engineer at Palantir builds software at scale to transform how organisations use data. The role involves collaborating closely with technical and non-technical teammates to understand customer problems and build products that solve them. Engineers work autonomously and make decisions independently, within a community that supports and challenges them as they grow and develop.</p>
<p>Some examples of product work you could work on are:</p>
<ul>
<li>Build for high-scale, collaborative, geospatial workflows (Gaia)</li>
<li>Design sophisticated frameworks to enable complex workflows across applications in a single workspace</li>
<li>Develop the next generation of real-time collaborative tooling and data-analysis solutions (Secure Collaboration)</li>
</ul>
<p>Core Responsibilities:</p>
<ul>
<li>Architecting, developing, and maintaining high-performance, scalable backend services that underpin our operational data and AI systems</li>
<li>Maintaining high coding standards through the development of guidelines, active participation in code reviews, and fostering a culture of continuous improvement and knowledge sharing among your team</li>
<li>Building robust APIs for use by front-end developers and interfacing external systems, and collaborating with front-end developers to integrate user-facing elements with server-side logic</li>
<li>Designing efficient data structures and algorithms to manage large-scale and high throughput data</li>
<li>Optimizing applications for speed and scalability through performance analysis</li>
<li>Actively improving user workflows by collaborating with cross-functional teams, ensuring seamless experiences across product boundaries and a cohesive user experience</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Different backend languages, including Java, Rust, Python, and Go</li>
<li>Distributed systems technologies such as Kafka, Cassandra, Elasticsearch, and Spark</li>
<li>Docker and Kubernetes for containerization and orchestration</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p>What We Value:</p>
<ul>
<li>A deep understanding of server-side logic, efficient data handling, and distributed systems</li>
<li>Strong focus on creating user-oriented workflows and solutions, crossing product boundaries to deliver cohesive and solid user workflows that ensure a seamless and intuitive user experience</li>
<li>Experience building high-quality software in a fast-paced CI/CD development environment</li>
<li>Ability to work collaboratively in teams of technical and non-technical individuals and understand how technical decisions impact the people who will use what you&#39;re building</li>
<li>Skill and comfort working in a constantly evolving environment with dynamic objectives and iteration with users</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Experience in designing and developing features and improvements, as well as supporting and maintaining live backend systems</li>
<li>In-depth understanding of data structures, system architecture, API development for microservices frameworks, distributed systems, and other backend-related concepts and best practices</li>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics, or similar field</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality</li>
<li>Eligibility and willingness to obtain a US Security clearance</li>
</ul>
<p>Additional Information: The estimated salary range for this position is $135,000 - $200,000/year. Total compensation for this position may also include Restricted Stock Units, a sign-on bonus, and other potential future incentives.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Java, Rust, Python, Go, Kafka, Cassandra, Elasticsearch, Spark, Docker, Kubernetes, Gradle, GitHub, Data structures, System architecture, API development, Microservices frameworks, Distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, serving customers in various industries.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/a8174f9c-6f46-46b4-8e15-d1ff9e37c9eb?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Palo Alto</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>510d5c19-b05</externalid>
      <Title>Backend Software Engineer - Defense</Title>
      <Description><![CDATA[<p>A Backend Software Engineer at Palantir builds software at scale to transform how organisations use data. Collaborate closely with technical and non-technical teammates to understand customer problems and build products that solve them. Work autonomously and make decisions independently, within a community that supports and challenges you as you grow and develop.</p>
<p>Some examples of product work you could work on are:</p>
<ul>
<li>Build for high-scale, collaborative, geospatial workflows (Gaia)</li>
<li>Design sophisticated frameworks to enable complex workflows across applications in a single workspace</li>
<li>Develop the next generation of real-time collaborative tooling and data-analysis solutions (Secure Collaboration)</li>
</ul>
<p>Core Responsibilities:</p>
<ul>
<li>Architecting, developing, and maintaining high-performance, scalable backend services that underpin our operational data and AI systems</li>
<li>Maintaining high coding standards through the development of guidelines, active participation in code reviews, and fostering a culture of continuous improvement and knowledge sharing among your team</li>
<li>Building robust APIs for use by front-end developers and interfacing external systems, and collaborating with front-end developers to integrate user-facing elements with server-side logic</li>
<li>Designing efficient data structures and algorithms to manage large-scale and high throughput data</li>
<li>Optimizing applications for speed and scalability through performance analysis</li>
<li>Actively improving user workflows by collaborating with cross-functional teams, ensuring seamless experiences across product boundaries and a cohesive user experience</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Different backend languages, including Java, Rust, Python and Go</li>
<li>Distributed systems technologies such as Kafka, Cassandra, Elasticsearch and Spark</li>
<li>Docker and Kubernetes for containerization and orchestration</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p>What We Value:</p>
<ul>
<li>A deep understanding of server-side logic, efficient data handling, and distributed systems</li>
<li>Strong focus on creating user-oriented workflows and solutions, crossing product boundaries to deliver cohesive and solid user workflows that ensure a seamless and intuitive user experience</li>
<li>Experience building high-quality software in a fast-paced CI/CD development environment</li>
<li>Ability to work collaboratively in teams of technical and non-technical individuals and understand how technical decisions impact the people who will use what you&#39;re building</li>
<li>Skill and comfort working in a constantly evolving environment with dynamic objectives and iteration with users</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Experience in designing and developing features and improvements, as well as supporting and maintaining live backend systems</li>
<li>In-depth understanding of data structures, system architecture, API development for microservices frameworks, distributed systems and other backend-related concepts and best practices</li>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics or similar field</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality</li>
<li>Eligibility and willingness to obtain a US Security clearance</li>
</ul>
<p>Additional Information: The estimated salary range for this position is $135,000 - $200,000/year. Total compensation for this position may also include Restricted Stock Units, a sign-on bonus, and other potential future incentives.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Java, Rust, Python, Go, Kafka, Cassandra, Elasticsearch, Spark, Docker, Kubernetes, Gradle, GitHub, Data structures, System architecture, API development, Microservices frameworks, Distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, providing a complete ecosystem for customers to securely integrate and visualize their data.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/1345438c-ebfc-4fa5-b545-30c1414f317c?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Washington, D.C.</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>786b69a8-34d</externalid>
      <Title>Backend Software Engineer - Application Development</Title>
      <Description><![CDATA[<p>A Backend Software Engineer at Palantir builds software at scale to transform how organisations use data. Collaborate closely with technical and non-technical teammates to understand customers&#39; problems and build products that solve them. Work autonomously and make decisions independently, within a community that supports and challenges you as you grow and develop.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Architecting, developing, and maintaining high-performance, scalable backend services that underpin operational data and AI systems</li>
<li>Maintaining high coding standards through the development of guidelines, active participation in code reviews, and fostering a culture of continuous improvement and knowledge sharing among your team</li>
<li>Building robust APIs for use by front-end developers and interfacing external systems, and collaborating with front-end developers to integrate user-facing elements with server-side logic</li>
<li>Designing efficient data structures and algorithms to manage large-scale and high-throughput data</li>
<li>Optimizing applications for speed and scalability through performance analysis</li>
<li>Actively improving user workflows by collaborating with cross-functional teams, ensuring seamless experiences across product boundaries and a cohesive user experience</li>
</ul>
<p>Technologies Used:</p>
<ul>
<li>Different backend languages, including Java, Rust, Python, and Go</li>
<li>Distributed systems technologies such as Kafka, Cassandra, Elasticsearch, and Spark</li>
<li>Docker and Kubernetes for containerization and orchestration</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p>What We Value:</p>
<ul>
<li>A deep understanding of server-side logic, efficient data handling, and distributed systems</li>
<li>Strong focus on creating user-oriented workflows and solutions, crossing product boundaries to deliver cohesive and solid user workflows that ensure a seamless and intuitive user experience</li>
<li>Experience building high-quality software in a fast-paced CI/CD development environment</li>
<li>Ability to work collaboratively in teams of technical and non-technical individuals and understand how technical decisions impact the people who will use what you&#39;re building</li>
<li>Skill and comfort working in a constantly evolving environment with dynamic objectives and iteration with users</li>
<li>Active US Security clearance, or eligibility and willingness to obtain a US Security clearance, is beneficial but not necessary</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Experience in designing and developing features and improvements, as well as supporting and maintaining live backend systems</li>
<li>In-depth understanding of data structures, system architecture, API development for microservices frameworks, distributed systems, and other backend-related concepts and best practices</li>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics, or similar field</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality</li>
</ul>
<p>Additional Information: The estimated salary range for this position is $135,000 - $200,000/year. Total compensation for this position may also include Restricted Stock Units, a sign-on bonus, and other potential future incentives.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$135,000 - $200,000/year</Salaryrange>
      <Skills>Java, Rust, Python, Go, Kafka, Cassandra, Elasticsearch, Spark, Docker, Kubernetes, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and locate missing children, among other uses.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/ab7e3425-81d5-4705-a7b5-cd60c8a45cdb?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>12cf5f64-0d6</externalid>
      <Title>Backend Software Engineer - Application Development</Title>
      <Description><![CDATA[<p>A Backend Software Engineer at Palantir builds software at scale to transform how organisations use data. You will collaborate closely with technical and non-technical teammates to understand our customers&#39; problems and build products that solve them. We encourage movement across teams to share context, skills, and experience, so you&#39;ll learn about many different technologies and aspects of each product.</p>
<p>Your day-to-day workflow will vary, adapting to the requirements of our users and the technical challenges that arise. One day, you may find yourself collaborating with other engineers to architect a new system that enables a novel workflow, the next you could be fine-tuning performance to enable low-latency operational outcomes.</p>
<p>We&#39;re hiring engineers who are passionate about solving real-world problems and empowering both developers and end-users to work optimally. If you’re motivated to develop reliable, performant, and scalable systems, and to design robust APIs and primitives, this role offers the opportunity to make a significant impact on our products and the people who use them.</p>
<p>Core Responsibilities:</p>
<ul>
<li>Architecting, developing, and maintaining high-performance, scalable backend services that underpin our operational data and AI systems</li>
<li>Maintaining high coding standards through the development of guidelines, active participation in code reviews, and fostering a culture of continuous improvement and knowledge sharing among your team</li>
<li>Building robust APIs for use by front-end developers and interfacing external systems, and collaborating with front-end developers to integrate user-facing elements with server-side logic</li>
<li>Designing efficient data structures and algorithms to manage large-scale and high throughput data</li>
<li>Optimizing applications for speed and scalability through performance analysis</li>
<li>Actively improving user workflows by collaborating with cross-functional teams, ensuring seamless experiences across product boundaries and a cohesive user experience</li>
</ul>
<p>Technologies We Use:</p>
<ul>
<li>Different backend languages, including Java, Rust, Python, and Go</li>
<li>Distributed systems technologies such as Kafka, Cassandra, Elasticsearch, and Spark</li>
<li>Docker and Kubernetes for containerisation and orchestration</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p>What We Value:</p>
<ul>
<li>A deep understanding of server-side logic, efficient data handling, and distributed systems</li>
<li>Strong focus on creating user-oriented workflows and solutions, crossing product boundaries to deliver cohesive and solid user workflows that ensure a seamless and intuitive user experience</li>
<li>Experience building high-quality software in a fast-paced CI/CD development environment</li>
<li>Ability to work collaboratively in teams of technical and non-technical individuals and understand how technical decisions impact the people who will use what you&#39;re building</li>
<li>Skill and comfort working in a constantly evolving environment with dynamic objectives and iteration with users</li>
<li>An active UK Security clearance, or eligibility and willingness to obtain one, is beneficial but not necessary</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Experience in designing and developing features and improvements, as well as supporting and maintaining live backend systems</li>
<li>In-depth understanding of data structures, system architecture, API development for microservices frameworks, distributed systems, and other backend-related concepts and best practices</li>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics, or similar field</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality</li>
</ul>
<p>Additional Information: Life at Palantir: We want every Palantirian to achieve their best outcomes; that’s why we celebrate individuals’ strengths, skills, and interests, from your first interview to your long-term growth, rather than relying on traditional career ladders. Paying attention to the needs of our community enables us to optimize our opportunities to grow and helps ensure many pathways to success at Palantir.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Rust, Python, Go, Kafka, Cassandra, Elasticsearch, Spark, Docker, Kubernetes, Gradle, GitHub, Data structures, System architecture, API development, Microservices frameworks, Distributed systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/10dfc8bc-99ad-4ca2-ab76-853cb90a92c2?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>ba643316-1c5</externalid>
      <Title>Backend Software Engineer</Title>
      <Description><![CDATA[<p>A Backend Software Engineer at Palantir builds software at scale to transform how organisations use data. Collaborate closely with technical and non-technical teammates to understand customers&#39; problems and build products that solve them. Contribute high-quality code to underpin Palantir Foundry and Gotham with performant, secure, and scalable building blocks.</p>
<p>Core Responsibilities:</p>
<ul>
<li>Building a performant search and indexing ecosystem for complex, granularly permissioned data</li>
<li>Contributing to open-source data processing libraries, integrating the latest innovations to achieve performance gains</li>
<li>Building the distributed systems that power large scale compute workloads, orchestrating and efficiently scheduling hundreds of thousands of containers every hour</li>
<li>Designing architecture and opinionated APIs to keep application developers on the happy path</li>
<li>Implementing tracing and performance observability in high-scale distributed microservice architectures</li>
<li>Building reliable, performant, and scalable systems for storage, auth, or asset serving to enable other product teams to build robust applications without deep domain expertise in the underlying systems</li>
<li>Automating the deployment, management, and operations of complex distributed systems like Cassandra, Elasticsearch, Kafka, and more across different environments</li>
</ul>
<p>Technologies we use:</p>
<ul>
<li>Different backend languages, including Java, Rust, and Go</li>
<li>Open-source technologies like Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink</li>
<li>Industry-standard build tooling, including Gradle and GitHub</li>
</ul>
<p>What We Value:</p>
<ul>
<li>Demonstrated ability to collaborate and empathize with a variety of individuals</li>
<li>Ability to learn new technology and concepts, even without in-depth experience</li>
<li>Bias towards quality and thoughtful about edge cases</li>
<li>Builds solutions and APIs with users in mind while maintaining a high engineering bar</li>
</ul>
<p>What We Require:</p>
<ul>
<li>Engineering background in Computer Science, Mathematics, Software Engineering, Physics or similar field</li>
<li>Strong coding skills with demonstrated proficiency in programming languages, such as Java, C++, Python, Rust, or similar languages</li>
<li>Familiarity with storage and data processing systems, cloud infrastructure, and other technical tools</li>
<li>Strong written and verbal communication skills and ability to iterate quickly with teammates, incorporating feedback and holding a high bar for quality</li>
</ul>
<p>Additional Information: Palantir is particularly interested in applicants who have a current right to work in Singapore, and we encourage Singapore citizens to apply.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Rust, Go, Cassandra, ElasticSearch, Spark, Kafka, Kubernetes, Flink, Gradle, GitHub</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Palantir</Employername>
      <Employerlogo>https://logos.yubhub.co/palantir.com.png</Employerlogo>
      <Employerdescription>Palantir builds software for data-driven decisions and operations, empowering partners to develop lifesaving drugs, forecast supply chain disruptions, and more.</Employerdescription>
      <Employerwebsite>https://www.palantir.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/palantir/0b2dbe51-0d9f-47ee-9f24-82bff4654048?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>5d757f12-460</externalid>
      <Title>Engineering Manager</Title>
      <Description><![CDATA[<p>We are looking for an Engineering Manager to drive Advertising engineering leadership and practices. In this role, you will be instrumental in growing the team and guiding development to successfully scale. You will help us to create a user-first ad experience that&#39;s personalized and relevant so we can grow to billions of fans, increasing engagement with our listeners and providing better value to our advertisers. Above all, your work will impact the way the world experiences music.</p>
<p>This squad is responsible for the foundational data, services, and tooling that power the ad-serving ecosystem, with a focus on three core areas: Data Architecture and Integrity, Targeting Data Propagation, and Observability and Debugging. They design and own the data architecture to ensure the accuracy and timeliness of core ad-serving datasets for pacing, billing, and business analysis. They also manage the services and pipelines that reliably deliver critical user-targeting information, such as privacy preferences, ad history, and audience segment data, to the ad-serving path. Furthermore, the team develops tools and mechanisms to enhance telemetry within the ad-serving stack, enabling more effective debugging and easier evaluation of system changes, while also providing necessary support for new feature development by handling their specific data and event functionality requirements.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and lead a robust team of data and backend engineers by attracting top talent, mentoring individuals and managing conflict.</li>
<li>Work with product managers and lead the team to design and implement product features, while improving the quality of the current big-data intensive tools that exist for audiences and targeting.</li>
<li>Lead the team to utilize, homogenize, and make available to peer teams diverse large-volume datasets built around user preferences, behavior, identity, and location, gathered from users&#39; mobile devices as well as other connected platforms.</li>
<li>Maintain a strong understanding of how data-intensive systems are expressed in the UX shown to customers.</li>
<li>Grow the technical expertise of the team around system design, quality and testing, scalability, performance and fault tolerance.</li>
<li>Manage OKRs, roadmaps, career conversations, performance and accountability, and thereby carefully plan, track, and report on work of the team and identify problems early.</li>
<li>Work closely with many peer teams to ensure that our systems are designed in a scalable and maintainable manner.</li>
<li>Nurture a culture of technical quality from design, through code review, to production.</li>
<li>Drive optimization, testing and tooling to improve data quality.</li>
</ul>
<p>Who You Are:</p>
<ul>
<li>You have experience in leading, managing, coaching and mentoring software developers</li>
<li>You have 6+ years of experience in object-oriented programming, including Java and Python.</li>
<li>You have experience working with high-volume heterogeneous data, using data tools such as Hadoop, BigTable, BigQuery, and Hive.</li>
<li>You have experience in data modeling, data access and data storage techniques.</li>
<li>You have designed and built distributed production services / pipelines with data processing frameworks like Scio, Storm, Spark and the Google Cloud Platform.</li>
<li>You have led agile ceremonies including sprint planning, daily standups and retrospectives</li>
<li>You have experience with yearly and quarterly project roadmap planning including sizing, scoping, prioritizing, sequencing and defining external dependencies</li>
<li>You have mentored and coached software engineers</li>
</ul>
<p>Where You&#39;ll Be:</p>
<ul>
<li>We offer you the flexibility to work where you work best! For this role, you can work anywhere within the North America region in which we have a work location</li>
<li>This team operates within the Eastern Standard Time zone for collaboration</li>
</ul>
<p>Additional Information:</p>
<ul>
<li>The United States base range for this position is $164,448.00 - $234,926.00, plus equity.</li>
<li>The benefits available for this position include health insurance, six month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, paid sick leave.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,448.00 - $234,926.00</Salaryrange>
      <Skills>Java, Python, Hadoop, BigTable, BigQuery, Hive, Scio, Storm, Spark, Google Cloud Platform, Agile, Data modeling, Data access, Data storage, Distributed production services, Pipelines, Data processing frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
<Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>A technology company that powers the ad-serving ecosystem with a focus on data architecture and integrity, targeting data propagation, and observability and debugging.</Employerdescription>
      <Employerwebsite>https://spotify.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/b47245e8-8727-4cf4-b010-2bb9afcdc5a4?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>North America</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>3d42fa98-242</externalid>
      <Title>Software Engineer Intern (22nd June - 11th September, remote-US)</Title>
      <Description><![CDATA[<p>At Twilio, we&#39;re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences.</p>
<p>Our dedication to remote-first work, and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.</p>
<p>Join the team as our next Software Engineer Intern for a duration of 12 weeks, starting on 22nd June. This position is needed to design, develop, deploy and operate software solutions and help Twilio deliver real-time, low latency capabilities for next-generation communications.</p>
<p>As a Software Engineer Intern, you will experience the following:</p>
<ul>
<li>Be a Software Engineer, not just an &quot;intern&quot;.</li>
<li>Ship many different projects during your summer.</li>
<li>Solve problems in distributed computing, real-time DSP (audio processing), virtualization performance, distributed messaging, buses, and more.</li>
<li>Partner with other engineers on core feature development and services that ship to our users.</li>
<li>Embrace challenges, learn fast and deliver great results.</li>
<li>Demonstrate a willingness to learn and grow, and we will reciprocate with ample opportunity to do just that, in a friendly, fun and exciting startup environment!</li>
<li>Develop beautiful and profitable applications.</li>
<li>Demonstrate consistent improvement in your coding skills, your use of issue-tracking and source control systems, and your agile development mentality.</li>
<li>Participate in code reviews, bug tracking and project management with the rest of the Twilio Team.</li>
</ul>
<p>To be considered for this role, you should have a Bachelor&#39;s, Master&#39;s, or PhD degree in computer science, computer engineering, or a related field. You should also have a hungry, entrepreneurial, &quot;can do&quot; spirit, as evidenced by a demonstrated interest in learning new technologies. Knowledge of unit and integration testing methodologies, and the ability to write, debug, and deploy testing frameworks, is a plus.</p>
<p>Twilio offers a competitive salary, generous time-off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.</p>
<p>The estimated pay ranges for this role are as follows:</p>
<ul>
<li>Based in Colorado, Hawaii, Illinois, Maryland, Massachusetts, Minnesota, Vermont, or Washington D.C.: $47.00/hourly</li>
<li>Based in New York, New Jersey, Washington State, or California (outside of the San Francisco Bay area): $50.00/hourly</li>
</ul>
<p>Applications for the role will be accepted on an ongoing basis until May 10th.</p>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$47.00/hourly - $50.00/hourly</Salaryrange>
      <Skills>Python, Java, Javascript, Golang, C, C++, unit and integration testing methodologies, testing frameworks, data processing, analytics, business intelligence, reporting, Hadoop, Spark, AWS, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio is a communications platform that delivers innovative solutions to hundreds of thousands of businesses worldwide.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7850821?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>b680fcb5-a33</externalid>
      <Title>Sr. Solutions Architect - Strategic AI Native</Title>
      <Description><![CDATA[<p>As a Solutions Architect on the Digital Natives team, you will shape the future of the big data landscape by working with the most sophisticated data engineering and data science teams in the world.</p>
<p>Reporting to the Field Engineering Manager, you will collaborate with customer stakeholders, product teams, and the broader customer-facing team to develop architectures and solutions using our platform and APIs.</p>
<p>You will guide one of our largest AI native customers through the competitive landscape, best practices, and implementation; and develop technical champions along the way.</p>
<p>The impact you will have:</p>
<ul>
<li>Partner with the sales team and provide technical leadership to help customers understand how Databricks can help solve their business problems.</li>
<li>Consult on big data architectures and implement proofs of concept for strategic projects spanning data engineering, data science, machine learning, and SQL analysis workflows, as well as validating integrations with cloud services, homegrown tools, and other third-party applications.</li>
<li>Collaborate with your fellow Solutions Architects, using your skills to support each other and our users.</li>
<li>Become an expert in, promote, and recruit contributors for Databricks-inspired open-source projects (Spark, Delta Lake, and MLflow) across the developer community.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>7+ years in a data engineering, data science, technical architecture, or similar pre-sales/consulting role.</li>
<li>Experience building distributed data systems.</li>
<li>Comfortable programming in, and debugging, Python and SQL.</li>
<li>Have built solutions with public cloud providers such as AWS, Azure, or GCP.</li>
<li>Expertise in one of the following:
<ul>
<li>Data Engineering technologies (e.g. Spark, Hadoop, Kafka)</li>
<li>Data Science and Machine Learning technologies (e.g. pandas, scikit-learn, PyTorch, TensorFlow)</li>
</ul>
</li>
</ul>
<p>Available to travel to customers in your region.</p>
<p>[Desired] Degree in a quantitative discipline (Computer Science, Applied Mathematics, Operations Research).</p>
<p>Nice to have: Databricks Certification.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$219,100-$301,300 USD</Salaryrange>
      <Skills>data engineering, data science, technical architecture, pre-sales/consulting, Python, SQL, AWS, Azure, GCP, Spark, Hadoop, Kafka, pandas, scikit-learn, pytorch, Tensorflow</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8458028002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - Arizona; Remote - California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-25</Postedate>
    </job>
    <job>
      <externalid>3ddd69d6-0c9</externalid>
      <Title>Member of Technical Staff - Voice Model</Title>
      <Description><![CDATA[<p>Join the Grok Voice Model team to help build the world&#39;s best voice AI. You will design and execute large-scale speech data curation and processing pipelines, work on pre-training and post-training of speech-language models, and build a comprehensive evaluation framework.</p>
<p>As a member of the team, you will work closely with product teams to integrate voice models into applications and real-time environments. You will define spoken interaction specifications and handle the full lifecycle from prototype to global-scale deployment for stable, low-latency, delightful voice experiences.</p>
<p>We&#39;re seeking exceptionally smart, execution-oriented engineers to help us get there. You will have the opportunity to work on cutting-edge technology and make a significant impact on the development of voice AI.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and execute large-scale speech data curation and processing pipelines</li>
<li>Work on pre-training and post-training of speech-language models</li>
<li>Build a comprehensive evaluation framework</li>
<li>Work closely with product teams to integrate voice models into applications and real-time environments</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Python expert with deep proficiency in writing clean, efficient code for AI/ML systems</li>
<li>Hands-on experience processing large-scale datasets using tools like Spark and Ray for cleaning, augmentation, and feature extraction</li>
<li>Proficiency in pre-training and post-training speech-language models using JAX/PyTorch, including supervised fine-tuning, reinforcement learning, and optimizations for accuracy, factuality, natural spoken style, detail, and multilingual fluency</li>
<li>Ability to set up and run rigorous evaluation pipelines: objective metrics, human preference studies, content factuality checks, and iterative A/B testing to drive model improvements</li>
<li>Experience building or working with large-scale distributed training and inference systems on Kubernetes</li>
<li>Proactive, self-driven attitude, ready to grind in a fast-paced, high-caliber team to deliver outstanding voice AI experiences</li>
</ul>
<p>Compensation and Benefits:</p>
<p>$150,000 - $450,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,000 - $450,000 USD</Salaryrange>
      <Skills>Python, Spark, Ray, JAX/PyTorch, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5051966007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>531dc584-ba0</externalid>
      <Title>Senior DevOps Engineer</Title>
      <Description><![CDATA[<p>Do you ever have the urge to do things better than the last time? We do. And it&#39;s this urge that drives us every day. Our environment of discovery and innovation means we&#39;re able to create deep and valuable relationships with our clients to create real change for them and their industries. It&#39;s what got us here – and it&#39;s what will make our future. At Quantexa, you&#39;ll experience autonomy and support in equal measures allowing you to form a career that matches your ambitions.</p>
<p>You&#39;ll be joining one of our DevOps teams in our R&amp;D department working on the Quantexa Cloud Platform and accompanying solutions, including platforms supporting data-intensive and AI-driven workloads. The platform is comprised of a landscape of low-maintenance, on-demand, and highly secure environments that host our software for customers and partners. These environments also support a wide range of internal use cases, underpinning the work of our R&amp;D teams.</p>
<p>As a Senior DevOps Engineer, you will:</p>
<ul>
<li>Contribute to the evolution and improvement of our cloud-based platform, with a strong focus on availability, resilience, performance, and security.</li>
<li>Take ownership of significant technical problems and initiatives, driving them through to delivery with a high degree of autonomy.</li>
<li>Enhance our automation practices, helping reduce operational toil and improve the consistency and reliability of our platform, including the use of modern tooling and AI-assisted approaches where appropriate.</li>
<li>Collaborate closely with software engineering teams to strengthen our CI/CD pipelines and optimise build, test, and deployment workflows, with an eye on improving overall developer productivity.</li>
<li>Support the development of cloud-based product capabilities that customers can integrate into their own DevOps processes.</li>
<li>Contribute to technical discussions, provide guidance on best practices, and help shape engineering standards within the team.</li>
<li>Offer informal mentoring and knowledge-sharing to engineers, supporting the growth of the wider DevOps community.</li>
</ul>
<p>This role focuses on deep hands-on technical expertise and the ability to lead complex workstreams, while stopping short of the architectural ownership and broader technical leadership responsibilities of a Lead Engineer.</p>
<p>Our Stack Includes:</p>
<ul>
<li>Kubernetes, Docker, Istio</li>
<li>GitOps / DevOps tooling: ArgoCD, Jenkins, GitHub Actions</li>
<li>Scripting &amp; Automation: Bash, Python, Groovy, Golang</li>
<li>IaC &amp; Infrastructure Management: Terraform, Ansible, Packer, CasC</li>
<li>Provisioning Frameworks: Elasticsearch, Spark, Hadoop, Airflow, PostgreSQL, etc.</li>
<li>Observability: Fluentd, Prometheus, Grafana, Alertmanager</li>
<li>Public Cloud: Primarily GCP and Azure, with some AWS</li>
</ul>
<p>We are looking for candidates who:</p>
<ul>
<li>Take pride in designing, building, and delivering high-quality, well-engineered solutions to complex problems.</li>
<li>Think holistically, ensuring solutions integrate effectively into large-scale distributed systems.</li>
<li>Bring strong hands-on experience across several aspects of our cloud and DevOps stack.</li>
<li>Have solid experience with programming/scripting/automation.</li>
<li>Demonstrate a strong understanding of information security principles.</li>
<li>Have experience operating and supporting cloud-native platforms in production environments.</li>
<li>Are comfortable working autonomously, leading technical workstreams, and driving improvements.</li>
<li>Enjoy sharing knowledge and supporting the development of other engineers.</li>
</ul>
<p>Experience in the following would be beneficial:</p>
<ul>
<li>Infrastructure management and general Linux administration.</li>
<li>Operating microservice-based architectures (scaling, upgrading, traffic management, deployment strategies).</li>
<li>Software build, release engineering, and CI/CD pipeline enhancement.</li>
<li>Exposure to a broad selection of the technologies listed in our tech stack.</li>
<li>Exposure to platforms or tooling that support AI/ML workflows, data-intensive pipelines, or intelligent automation.</li>
</ul>
<p>Why join Quantexa? Our perks and quirks. What makes you Q will help you to realize your full potential, flourish, and enjoy what you do, while being recognized and rewarded with our broad range of benefits.</p>
<p>We offer:</p>
<ul>
<li>Competitive salary and Company Bonus</li>
<li>Flexible working hours in a hybrid workplace &amp; free access to global WeWork locations &amp; events</li>
<li>Pension Scheme with a company contribution of 6% (if you contribute 3%)</li>
<li>25 days annual leave (with the option to buy up to 5 days) + birthday off!</li>
<li>Work from Anywhere Scheme: Spend up to 2 months working outside of your country of employment over a rolling 12-month period</li>
<li>Family: Enhanced Maternity, Paternity, Adoption, or Shared Parental Leave</li>
<li>Private Healthcare with AXA, EAP, Well-being Days, Gym Discounts</li>
<li>Free Calm App Subscription</li>
<li>Workplace Nursery Scheme</li>
<li>Team&#39;s Social Budget &amp; Company-wide Summer &amp; Winter Parties</li>
<li>Tech &amp; Cycle-to-Work Schemes</li>
<li>Volunteer Day off</li>
<li>Dog-friendly Offices</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
<Skills>Kubernetes, Docker, Istio, GitOps, ArgoCD, Jenkins, GitHub Actions, Bash, Python, Groovy, Golang, Terraform, Ansible, Packer, CasC, Elasticsearch, Spark, Hadoop, Airflow, PostgreSQL, Fluentd, Prometheus, Grafana, Alertmanager, GCP, Azure, AWS</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Quantexa</Employername>
      <Employerlogo>https://logos.yubhub.co/quantexa.com.png</Employerlogo>
      <Employerdescription>Quantexa is a company that creates deep and valuable relationships with clients to create real change for them and their industries.</Employerdescription>
      <Employerwebsite>https://www.quantexa.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/5FmBVWpa875z7Aah52FzVu/hybrid-senior-devops-engineer-in-london-at-quantexa?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>b243c12b-190</externalid>
      <Title>Data Engineer- Associate</Title>
      <Description><![CDATA[<p>About this role</p>
<p>We are looking for a talented Data Engineer to join the Chief Data Office. This is a unique opportunity to be the first data engineer on the team, responsible for establishing the infrastructure and best practices for onboarding and maintaining data, as well as delivering insights to our stakeholders.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, develop, and maintain scalable data pipelines and systems to support data integration and analytics.</li>
<li>Establish best practices for data management, including data quality, data governance, and data security.</li>
<li>Collaborate with stakeholders to understand data requirements and deliver actionable insights.</li>
<li>Partner with adjacent data engineering teams to leverage and enhance existing data infrastructure.</li>
<li>Implement and optimize data storage solutions to ensure efficient data retrieval and processing.</li>
<li>Develop and maintain documentation for data engineering processes and systems.</li>
<li>Lead and mentor junior data engineers and analysts, fostering a culture of continuous learning and improvement.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Proven experience as a Data Engineer or in a similar role.</li>
<li>Strong knowledge of data engineering concepts, including ETL processes, data warehousing, and data modelling.</li>
<li>Proficiency in programming languages such as Python, SQL, and Java, and databases such as PostgreSQL.</li>
<li>Experience with big data technologies such as Hadoop, Spark, and Snowflake.</li>
<li>Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.</li>
<li>Excellent problem-solving skills and attention to detail.</li>
<li>Strong communication and collaboration skills.</li>
<li>Experience with batch processing and API integration.</li>
<li>Experience or eagerness to learn how to maintain and serve data for generative AI applications.</li>
<li>Familiarity with RAG (Retrieval-Augmented Generation) and vector databases.</li>
</ul>
<p>Our benefits</p>
<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>
<p>Our hybrid work model</p>
<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Competitive salary and bonus</Salaryrange>
      <Skills>data engineering, ETL processes, data warehousing, data modelling, Python, SQL, PostgreSQL, Java, Hadoop, Spark, Snowflake, AWS, Azure, Google Cloud, batch processing, API integration, generative AI, RAG, vector databases</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>BlackRock</Employername>
      <Employerlogo>https://logos.yubhub.co/blackrock.com.png</Employerlogo>
      <Employerdescription>BlackRock is a multinational investment management corporation that provides a range of investment, risk management, and technology services to institutional and retail clients worldwide.</Employerdescription>
      <Employerwebsite>https://www.blackrock.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/nRA8d7RLJFd4VVphMbi2n2/data-engineer--associate-in-budapest-at-blackrock?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Budapest</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>fdff2394-c46</externalid>
      <Title>Senior Staff Geospatial Software Engineer</Title>
      <Description><![CDATA[<p>At Bayer, we&#39;re seeking a Senior Staff Geospatial Software Engineer to play a key role in building distributed analytics capabilities and enabling enterprise-wide access to scientific and operational datasets. As a member of our geospatial data engineering team, you will apply strong software craftsmanship with your knowledge of algorithms, data structures, and geospatial data models.</p>
<p>Our mission is to develop agriculture solutions for a sustainable future that help meet the challenges of feeding a global population projected to grow to over 9.6 billion by 2050. We capture petabytes of data in our operational systems, including genome sequencing data and manufacturing, supply chain, and finance systems. Extracting meaningful information from that data requires complex analysis, and we create software that helps make decisions at scales that have never been possible before.</p>
<p>Key responsibilities:</p>
<ul>
<li>Play a key senior role on a geospatial data engineering team, building distributed analytics capabilities and enabling enterprise-wide access to scientific and operational datasets;</li>
<li>Apply strong software craftsmanship with your knowledge of algorithms, data structures, and geospatial data models;</li>
<li>Partner with other top-level talent in data engineering, software development, and data science to tackle complex, novel problems and deliver solutions with real-world impact on global food systems;</li>
<li>Mentor and guide other data engineers in your areas of expertise with a focus on geographic information science and systems;</li>
<li>Evaluate, implement, and advocate for FOSS4G technologies, finding the best fit for each use case and integrating them into production-ready solutions;</li>
<li>Lead technical initiatives end-to-end, communicate your technical vision and strategy to the larger organization;</li>
<li>Drive impact across enterprise projects spanning multiple areas of the business. Strength of ideas outweighs position in the organization;</li>
<li>Share our work with the broader geospatial and software engineering community at relevant technical conferences.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>A minimum of a Bachelor&#39;s Degree in a relevant discipline (an additional 2 years of experience will be considered in lieu of a Bachelor&#39;s Degree);</li>
<li>Shipped multiple generations of a software product, demonstrating long-term technical ownership and evolution;</li>
<li>A track record of shipping and maintaining multiple major product releases written in Go or Python;</li>
<li>A track record of designing, building, and maintaining multiple product releases of data-intensive geospatial-centric APIs using a RESTful approach;</li>
<li>Extensive experience with OGC Standards services;</li>
<li>Deep knowledge of geographic science and related technologies including coordinate systems and projections with realizations, global positioning systems, spatial indexing, and spatial topologies;</li>
<li>Experience in design and implementation of FOSS4G solutions, particularly leveraging GeoServer, PostGIS, and QGIS with an emphasis on vector data models;</li>
<li>Extensive experience in system design and architecture for large-scale, distributed applications;</li>
<li>Experience with creating and maintaining containerized application deployments;</li>
<li>Familiarity with developing in, deploying to, and working with Kubernetes cluster infrastructure;</li>
<li>Experience with data modeling for large scale databases;</li>
<li>Proficiency in verbal and written English language, capable of connecting with diverse individuals, actively listening to their needs, and supporting meaningful analysis for better decision-making.</li>
</ul>
<p>Bonus points for:</p>
<ul>
<li>Experience with geoarrow, geoparquet, and geopackage data formats;</li>
<li>Experience with emerging geospatial database management systems such as DuckDB Spatial and Sedona DB;</li>
<li>Experience working with distributed geospatial data warehousing (e.g. BigQuery, Snowflake) and compute (e.g. Spark, Sedona);</li>
<li>Experience implementing H3 geospatial indexing;</li>
<li>Contributions to or implementation of these OSGeo projects: GDAL/OGR, GeoServer, GeoTools, PostGIS, PROJ, QGIS, OpenLayers, Leaflet.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$123,760.00 - $185,640.00</Salaryrange>
      <Skills>Go, Python, OGC Standards services, GeoServer, PostGIS, QGIS, Kubernetes, containerized application deployments, data modeling for large scale databases, verbal and written English language, geoarrow, geoparquet, geopackage data formats, DuckDB Spatial, Sedona DB, BigQuery, Snowflake, Spark, Sedona, h3 geospatial indexing, GDAL/OGR, GeoTools, OpenLayers, Leaflet</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Bayer</Employername>
      <Employerlogo>https://logos.yubhub.co/talent.bayer.com.png</Employerlogo>
      <Employerdescription>Bayer is a multinational pharmaceutical and life sciences company that develops agriculture solutions for a sustainable future.</Employerdescription>
      <Employerwebsite>https://talent.bayer.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://talent.bayer.com/careers/job/562949976931774?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>90f8f34d-528</externalid>
      <Title>Senior Machine Learning Engineer: Ranking</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a Senior Machine Learning Engineer on the Ranking team, you will be responsible for enhancing the quality of our ranking systems, ensuring that search, browse, and autocomplete experiences are highly relevant, personalized, and diverse.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and develop ML-based ranking solutions to drive improvements in key business metrics such as conversion, engagement, and user satisfaction.</li>
<li>Analyze ranking performance and identify gaps in search, browse, and autocomplete experiences, focusing on relevance, personalization, attractiveness, diversification, and other quality signals.</li>
<li>Innovate and optimize ranking algorithms to advance the ranking pipeline, improve ranking quality, and meet evolving business needs.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>At least 4 years of experience with Python for machine learning and backend development.</li>
<li>At least 4 years of experience developing, deploying, and maintaining machine learning models with a strong focus on ranking systems for search, recommendations, or similar applications.</li>
<li>Experience in large-scale ML model training, evaluation, and optimization, with a focus on real-time inference and serving.</li>
<li>Experience with big data frameworks such as Spark for processing large datasets and integrating them into ML pipelines.</li>
<li>Proficiency in using tools like SQL, PySpark, Pandas, and other frameworks to extract, manipulate, and analyze data.</li>
<li>Experience with data pipeline orchestration tools like Airflow or Luigi to manage and automate workflows for ML training and signal delivery.</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Unlimited vacation time.</li>
<li>Fully remote team.</li>
<li>Work from home stipend.</li>
<li>Apple laptops provided for new employees.</li>
<li>Training and development budget for every employee.</li>
<li>Maternity &amp; Paternity leave for qualified employees.</li>
<li>Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results.</li>
<li>Stock options.</li>
<li>Regular team offsites to connect and collaborate.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$80k–$120k USD</Salaryrange>
      <Skills>Python, Machine learning, Backend development, Ranking systems, Search, Recommendations, Big data frameworks, Spark, SQL, PySpark, Pandas</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Constructor</Employername>
      <Employerlogo>https://logos.yubhub.co/constructor.io.png</Employerlogo>
      <Employerdescription>Constructor is a US-based company founded in 2019 that provides a search and discovery platform for e-commerce. Its search engine is built in-house using transformers and generative LLMs.</Employerdescription>
      <Employerwebsite>https://constructor.io</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/9C5C1388F9?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Portugal</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>5a140428-b77</externalid>
      <Title>Member of Technical Staff - Voice Model</Title>
      <Description><![CDATA[<p>Join the Grok Voice Model team to help build the world&#39;s best voice AI. You will design and execute large-scale speech data curation and processing pipelines, work on pre-training and post-training of speech-language models, and build a comprehensive evaluation framework. As a member of this team, you will work closely with product teams to integrate voice models into applications and real-time environments.</p>
<p>We&#39;re seeking exceptionally smart, execution-oriented engineers to help us get there. You will have the opportunity to work on challenging projects, collaborate with a highly motivated team, and contribute directly to the company&#39;s mission.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and execute large-scale speech data curation and processing pipelines, including collection of diverse real-world audio, synthetic data generation, and automated annotation workflows to enable high-quality model training and evaluation.</li>
<li>Work on pre-training and post-training of speech-language models, with targeted enhancements through supervised fine-tuning, reinforcement learning, and other techniques to ensure Grok Voice responses are accurate, factually grounded, natural and idiomatic in spoken style, conversational in tone, and fluent across multiple languages.</li>
<li>Build and iterate a comprehensive evaluation framework covering objective metrics (accuracy, quality, latency, expressiveness), human preference studies, content factuality assessments, real-time interaction quality, and experimentation infrastructure to measure and improve performance.</li>
<li>Work closely with product teams to integrate voice models into applications and real-time environments, define spoken interaction specifications, and handle the full lifecycle from prototype to global-scale deployment for stable, low-latency, delightful voice experiences.</li>
</ul>
<p>Basic Qualifications:</p>
<ul>
<li>Python expert with deep proficiency in writing clean, efficient code for AI/ML systems.</li>
<li>Hands-on experience processing large-scale datasets using tools like Spark and Ray for cleaning, augmentation, and feature extraction.</li>
<li>Proficiency in pre-training and post-training speech-language models using JAX/PyTorch, including supervised fine-tuning, reinforcement learning, and optimizations for accuracy, factuality, natural spoken style, detail, and multilingual fluency.</li>
<li>Ability to set up and run rigorous evaluation pipelines: objective metrics, human preference studies, content factuality checks, and iterative A/B testing to drive model improvements.</li>
<li>Experience building or working with large-scale distributed training and inference systems on Kubernetes.</li>
<li>Proactive, self-driven attitude, ready to grind in a fast-paced, high-caliber team to deliver outstanding voice AI experiences.</li>
</ul>
<p>Compensation and Benefits:</p>
<p>$150,000 - $450,000 USD</p>
<p>Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$150,000 - $450,000 USD</Salaryrange>
      <Skills>Python, Spark, Ray, JAX, PyTorch, Kubernetes</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge. The team is small and focused on engineering excellence.</Employerdescription>
      <Employerwebsite>https://www.xai.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5051966007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7e078ceb-e9a</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have an exciting opportunity for you to join our expanding area of Prognostics.</p>
<p>Are you enthusiastic about mining raw data and realizing its hidden value by building amazing, connected data solutions that benefit our customers? Would you love to accelerate our efforts to implement advanced physics and ML models in production?</p>
<p>The Data Engineer role resides within Ford’s Electric Vehicle organization. In this role, you will build scalable and robust data pipelines to process large volumes of connected vehicle data in support of Ford vehicle prognostic initiatives.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles.</li>
<li>Build data pipelines to monitor data quality and the performance of analytical models.</li>
<li>Maintain the infrastructure of the data platform using Terraform, and continuously develop, evaluate, and deliver code using CI/CD.</li>
<li>Collaborate with data analytics stakeholders to streamline the data acquisition, processing, and presentation process.</li>
<li>Implement an enterprise data governance model and actively promote data protection, sharing, reuse, quality, and standards.</li>
<li>Enhance and maintain the DevOps capabilities of the data platform.</li>
<li>Continuously optimize and enhance existing data solutions (pipelines, products, infrastructure) for best performance, high security, low vulnerability, low costs, and high reliability.</li>
<li>Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration and continuous deployment (CI/CD).</li>
<li>Promptly address code quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle.</li>
<li>Perform any necessary data mapping, data lineage activities and document information flows.</li>
<li>Monitor the production pipelines and provide production support by addressing production issues as per SLAs.</li>
<li>Provide analysis of connected vehicle data to support new product developments and production vehicle improvements.</li>
<li>Provide visibility to data quality/vehicle/feature issues and work with the business owners to fix the issues.</li>
<li>Demonstrate technical knowledge and communication skills with the ability to advocate for well-designed solutions.</li>
<li>Continuously enhance your domain knowledge of connected vehicle data, connected services and algorithms/models developed by data scientists within Ford.</li>
<li>Stay current on the latest data engineering practices and contribute to the technical direction of the company while keeping a customer-centric approach.</li>
</ul>
<p><strong>Qualifications</strong></p>
<ul>
<li>Master’s degree or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field, and 4 years of experience OR equivalent combination of education and experience (6+ years with Bachelor&#39;s Degree).</li>
<li>4 years of professional experience in:
<ul>
<li>Data engineering, data product development, and software product launches;</li>
<li>At least three of the following languages: Java, Python, Spark, Scala, SQL.</li>
</ul>
</li>
<li>3 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:
<ul>
<li>Data warehouses like Amazon Redshift, Microsoft Azure Synapse Analytics, and Google BigQuery;</li>
<li>Workflow orchestration tools like Airflow;</li>
<li>Relational database management systems like MySQL, PostgreSQL, and SQL Server;</li>
<li>Real-time data streaming platforms like Apache Kafka and GCP Pub/Sub;</li>
<li>Microservices architecture to deliver large-scale, real-time data processing applications;</li>
<li>REST APIs for compute, storage, operations, and security;</li>
<li>DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, and Docker;</li>
<li>Project management tools like Atlassian JIRA.</li>
</ul>
</li>
</ul>
<p><strong>Even better if you have...</strong></p>
<ul>
<li>Ph.D. or foreign equivalent degree in Computer Science, Software Engineering, Information System, Data Engineering, or a related field.</li>
<li>2 years of experience with ML Model Development and/or MLOps.</li>
<li>Committed code to improve open-source data/software engineering projects.</li>
<li>Experience architecting cloud infrastructure and handling application migrations/upgrades.</li>
<li>GCP Professional Certifications.</li>
<li>Demonstrated passion to mine raw data and realize its hidden value.</li>
<li>Passion to experiment/implement state of the art data engineering methods/techniques.</li>
<li>Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.</li>
<li>Experience implementing methods for automation of all parts of the pipeline to minimize labor in development and production.</li>
<li>Analytics skills to profile data, troubleshoot data pipeline/product issues.</li>
<li>Ability to simplify, clearly communicate complex data/software ideas/problems and work with cross-functional teams and all levels of management independently.</li>
</ul>
<p>Experience Level: mid</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>This position is a range of salary grades 6-8.</Salaryrange>
      <Skills>Java, Python, Spark, Scala, SQL, Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery, Airflow, MySQL, PostgreSQL, SQL Server, Apache Kafka, GCP Pub/Sub, Microservices, REST APIs, Tekton, GitHub Actions, Git, GitHub, Terraform, Docker, Atlassian JIRA</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Ford Motor Company</Employername>
      <Employerlogo>https://logos.yubhub.co/ford.com.png</Employerlogo>
      <Employerdescription>Ford Motor Company is an American multinational automaker headquartered in Dearborn, Michigan. It is one of the largest automobile manufacturers in the world.</Employerdescription>
      <Employerwebsite>https://www.ford.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/55567?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dearborn</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>9eb594a6-97b</externalid>
      <Title>Product Manager 3</Title>
      <Description><![CDATA[<p>Join the team as our next Data Platform Product Manager in the Data Governance and Insights team.</p>
<p>This position drives Data Insights and Data Governance initiatives across Twilio. This position is based in India. You will work with many teams within Twilio to ensure safe customer data handling, supporting data privacy and compliance. This team manages data pipeline security and data reliability, and ensures access controls. We are also the bridge to the reporting systems trusted by customers, executives, and shareholders.</p>
<p>In this role, you’ll:</p>
<ul>
<li>Champion customer-facing product development that will reduce time to insights.</li>
<li>Own the cradle-to-grave product lifecycle for insights platforms.</li>
<li>Understand the needs of our end customers in the global communications market and build a platform to help internal teams manage and leverage their data to derive meaningful insights.</li>
<li>Support Data Governance initiative for data pipelines and insights products, working with product managers and engineering counterparts across various organizations and stakeholders.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data platforms, customer engagement platforms, streaming applications, Kafka, ElasticSearch, Clickhouse, Spark, Presto/Athena, cloud, APIs, communications, enterprise software, data reliability, ETL techniques, collaborative approach, ability to work with distributed, cross-functional teams, great communication skills</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7424471?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>011a3c1b-5f8</externalid>
      <Title>Senior Staff Software Engineer, Indexing &amp; Retrieval Platform</Title>
      <Description><![CDATA[<p>Reddit is a community of communities. It&#39;s built on shared interests, passion, and trust, and is home to the most open and authentic conversations on the internet.</p>
<p>The ML Indexing &amp; Retrieval Platform team at Reddit is responsible for building and scaling the core infrastructure that powers machine learning driven recommendations. We design and maintain systems for ML data ingestion, low-latency retrieval services, and end-to-end lifecycle management of data.</p>
<p>As a Senior Staff Software Engineer, you&#39;ll lead the development of next-generation ML Indexing &amp; Retrieval systems, owning the full lifecycle from ideation to production. You&#39;ll partner closely with product engineers across Content Understanding, Search, Feeds, Ads, Growth, and Safety to deliver high-quality experiences.</p>
<p>You&#39;ll define best practices for observability, reliability, and operational excellence in large-scale distributed systems. You&#39;ll mentor and guide engineers in designing scalable infrastructure and adopting robust DevOps and SRE principles.</p>
<p>We&#39;re looking for someone with 10+ years of experience in software engineering, specializing in Indexing and Retrieval systems. You should have 3+ years in technical leadership, architecting and scaling distributed systems in production environments.</p>
<p>You&#39;ll need deep expertise in large-scale data platforms, including batch indexing and stream processing. You should have proven experience designing and operating large-scale, low-latency retrieval services.</p>
<p>You&#39;ll be skilled in designing cloud-native architectures and managing containerized workloads using Kubernetes and AWS/GCP. You&#39;ll be adept at translating complex technical challenges into clear, actionable strategies.</p>
<p>Strong communication and mentorship skills are essential, as you&#39;ll lead through collaboration, influence, and technical excellence.</p>
<p>Benefits:</p>
<p>Comprehensive Healthcare Benefits and Income Replacement Programs</p>
<p>401k with Employer Match</p>
<p>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</p>
<p>Family Planning Support</p>
<p>Gender-Affirming Care</p>
<p>Mental Health &amp; Coaching Benefits</p>
<p>Flexible Vacation &amp; Paid Volunteer Time Off</p>
<p>Generous Paid Parental Leave</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$279,200-$390,900 USD</Salaryrange>
      <Skills>Go, Java, Python, Flink, Airflow, Spark, Kubernetes, Docker, AWS, GCP, Vector, Lexical, Key-Value Databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a community-driven platform with over 121 million daily active unique visitors, featuring 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7844238?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6325a0a5-119</externalid>
      <Title>Principal Software Engineer - Data, Personalization - Microsoft AI</Title>
      <Description><![CDATA[<p>As Microsoft continues to redefine the future of AI, we are seeking passionate engineers to tackle some of the most complex and impactful challenges of our time. Our vision is bold: to build intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure. This role focuses on building distributed data systems and APIs that power adaptive, context-aware experiences across Microsoft AI. We aim to make Copilot feel like your Copilot, responsive to your preferences, workflows, and goals, while preserving privacy, security, performance, and scale.</p>
<p>We are looking for a Principal Software Engineer to lead the design and development of the distributed data infrastructure, APIs, and personalization pipelines that drive Copilot’s intelligence. You will work across Microsoft AI and Copilot teams. You will possess a methodical approach to problem-solving, proficiency in backend technologies, familiarity with applied AI and its unique challenges, and the ability to architect solutions that stand the test of time. The right candidate is hands-on and enjoys building world-class consumer experiences and products in a fast-paced environment. A key skill is the judgment to make the right trade-offs between risk, velocity, and value.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S., or a 25-mile commute of a non-U.S., country-specific location, are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
<li>Build real-time and batch personalization engines that adapt Copilot’s behavior.</li>
<li>Collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled.</li>
<li>Optimize for performance, reliability, and cost across diverse workloads and geographies.</li>
<li>Ship high-quality, well-tested, secure, and maintainable code.</li>
<li>Find a path to get things done despite roadblocks, so your work reaches the hands of users quickly and iteratively.</li>
<li>Enjoy working in a fast-paced, design-driven product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or a related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.</li>
<li>4+ years’ experience building scalable services, including securing applications and infrastructure, on top of cloud infrastructure such as Azure, AWS, or GCP.</li>
<li>3+ years’ experience with OSS data technologies such as Kafka, Spark, or Flink.</li>
<li>Experience with large-scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
<li>Experience using machine learning frameworks, including using, deploying, and scaling large language models, either personally or professionally.</li>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems, and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
<li>Proven ability to collaborate and contribute to a positive, inclusive work environment, fostering knowledge sharing and growth within the team.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Azure, AWS, GCP, Kafka, Spark, Flink, Machine Learning, AI platforms, frameworks, APIs</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-7/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>74402ab5-601</externalid>
      <Title>Senior Machine Learning Engineer - Ads R&amp;D</Title>
      <Description><![CDATA[<p>Our mission on the Advertising Product &amp; Technology team is to build a next-generation advertising platform that aligns with our unique value proposition for audio and video. We work to scale the user experience for hundreds of millions of fans and hundreds of thousands of advertisers. This scale brings unique challenges as well as tremendous opportunities for our artists and creators.</p>
<p>We are seeking a Senior Machine Learning Engineer to join the Supply Personalization squad. Supply Personalization focuses on optimizing the volume, timing, and types of ad loads a user receives. By leveraging data, machine learning, causal inference, and large-scale online experimentation, we aim to uncover and learn the most effective strategies for enhancing user experiences and driving business outcomes.</p>
<p>As a Senior Machine Learning Engineer, you will design and implement machine learning systems for ad performance optimization. You will research and apply ML optimization strategies to balance multiple objectives effectively. You will analyze data and use machine learning techniques to understand user behavior and improve ad experiences. You will collaborate with backend engineers, data scientists, data engineers, and product managers to establish baselines, inform product decisions, and develop new technologies.</p>
<p>The ideal candidate will have professional experience in applied machine learning. They will have strong technical expertise in software engineering, data analysis, and machine learning. They will be proficient in programming languages such as Python, Java, or Scala. They will have experience with TensorFlow or PyTorch and working with various aspects of the ML lifecycle. They will also have expertise in developing data pipelines using tools like Apache Beam or Spark.</p>
]]></Description>
      <Jobtype>permanent</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$184,050.00 - $262,928.00</Salaryrange>
      <Skills>machine learning, software engineering, data analysis, Python, Java, Scala, TensorFlow, PyTorch, Apache Beam, Spark, LLMs, Ray, Adtech, Recommender Systems</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service that allows users to access millions of songs and podcasts. The company was founded in 2008 and has since become one of the largest music streaming services in the world.</Employerdescription>
      <Employerwebsite>https://www.spotify.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/6236f25f-f9cc-47c2-af7b-4ace57332eeb?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>7d23b7cf-337</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>Do you enjoy solving complex technical problems on a global scale?</p>
<p>Microsoft AI Monetization enables advertisers to measure impact and optimize spend through secure, privacy-preserving data collaboration. The Measurement and Data Collaboration Engineering team is responsible for building the next generation of privacy-safe measurement systems that allow advertisers and partners to work with data in highly secure environments. Our platform integrates Microsoft’s Azure Confidential Compute Clean Room (ACCR) with third-party clean room partners to deliver a unified, compliant, and scalable measurement ecosystem. We are looking for a Senior Software Engineer who is passionate about distributed systems, privacy-enhancing technologies, secure data processing, and building reliable production services with global impact.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and build highly scalable backend services and data pipelines that support privacy-preserving measurement and analytics scenarios using C# or Java.</li>
<li>Design secure data collaboration workflows across multiple parties using modern privacy technologies, governance controls, and minimum-aggregation protections.</li>
<li>Drive integrations with external data and measurement partners, designing stable interfaces, schema governance patterns, and robust validation.</li>
<li>Lead initiatives to make delivery of high-quality software routine and efficient through the entire software development lifecycle, from inception and technical design through testing and excellence in production operations.</li>
<li>Collaborate closely with product, data science, privacy, and security teams to translate measurement needs into scalable platform capabilities.</li>
<li>Contribute to engineering team best practices leveraging AI dev tools across the software development lifecycle (SDLC).</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s degree in computer science or related technical field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.</li>
<li>5+ years of experience building and operating large-scale distributed systems, backend services, or data platforms.</li>
<li>Experience with large-scale data processing frameworks (e.g. Spark, SQL-based pipelines) and cloud platforms.</li>
<li>Understanding of secure data processing, encryption, identity, and access control.</li>
<li>Experience building and operating services with strict SLAs.</li>
<li>Experience with Azure.</li>
<li>Background in advertising, marketing technology, attribution, or large-scale analytics.</li>
<li>Experience integrating third-party (vendor/partner) platforms, identity systems, or data collaboration technologies.</li>
<li>Solid problem-solving skills with a focus on reliability, observability, and system design.</li>
</ul>
<p>#MicrosoftAI Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 – $258,000 per year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>C#, Java, JavaScript, Python, Azure, Spark, SQL, Cloud platforms, Secure data processing, Encryption, Identity, Access control, SLAs, Distributed systems, Backend services, Data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI Monetization enables advertisers to measure impact and optimize spend through secure, privacy-preserving data collaboration.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-131/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>6ecebedb-31e</externalid>
      <Title>Member of Technical Staff - Data Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, and developers) so that everyone can realize its benefits.</p>
<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time. Beyond being highly effective, they will bring an abundance of positive energy, empathy, and kindness to the team every day.</p>
<p>The Data Platform Engineering team is responsible for building the core data pipelines that help fine-tune models and support introspection and retrospection of data, so that we can constantly evolve and improve human-AI interactions.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other platform, infrastructure, and application engineers, as well as AI researchers, to build next-generation data platform products and services.</li>
<li>Ship high-quality, well-tested, secure, and maintainable code.</li>
<li>Find a path to get things done despite roadblocks, so your work reaches the hands of users quickly and iteratively.</li>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>
<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
<li>3+ years experience with data governance, data compliance and/or data security.</li>
<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>
<li>Extensive use of datastores such as RDBMS and key-value stores.</li>
<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>
<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
<p>#mai-datainsights</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 – $274,800 per year</Salaryrange>
      <Skills>Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, Azure, AWS, GCP, RDBMS, key-value stores</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a subsidiary of Microsoft Corporation, a multinational technology company.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineer-5/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Mountain View</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>a384c2f7-cf4</externalid>
      <Title>Principal Applied Scientist</Title>
      <Description><![CDATA[<p>The Microsoft AI Web Data team is looking for a highly qualified Principal Applied Scientist to help build the next-generation platform for Bing and Microsoft AI. The Microsoft AI Web Data Platform (WDP) team builds the data foundation that powers Bing and Microsoft AI experiences, including large-scale grounding and large language model (LLM) training. We operate end-to-end systems that discover, fetch, process, understand, and store web content at internet scale. We advance the platform’s capabilities with state-of-the-art modeling, fueling critical Microsoft experiences and pushing the frontier of AI.</p>
<p>In this role, you will translate research into production by advancing the state of the art and applying it to meet today’s AI needs. You will drive the design, development, execution, and implementation of research projects, using scientific principles and techniques to develop, evaluate, and deploy algorithms and solutions that improve system performance, quality, data management, and accuracy.</p>
<p>Join us to shape the future of AI and deliver meaningful value to millions of users. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop deep expertise across a broad research area and relevant techniques; stay current on industry trends and advances; and apply these insights to shape product and platform direction.</li>
<li>Partner with stakeholders to understand business and product requirements; incorporate research insights; and provide strategic technical direction for problem solving with solid scientific rigor and measurable business impact.</li>
<li>Mentor and inspire peers and new research talent; build relationships and advocate for research initiatives; share results through industry outreach; collaborate with academia; and strengthen the recruiting pipeline.</li>
<li>Document experiments and outcomes; communicate learnings to accelerate innovation; and help define best practices, including ethics and privacy considerations for research processes and data collection.</li>
<li>Guide and mentor junior team members in developing new technologies that translate into production-ready solutions.</li>
<li>Work closely with partner teams across Microsoft AI to understand shared needs and build a technical roadmap to address them.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.</li>
<li>5+ years experience creating publications (e.g., patents, libraries, peer-reviewed academic papers).</li>
<li>2+ years experience presenting at conferences or other events in the outside research/industry community as an invited speaker.</li>
<li>5+ years experience conducting research as part of a research program (in academic or industry settings).</li>
<li>3+ years experience developing and deploying live production systems, as part of a product team.</li>
<li>3+ years experience developing and deploying products or systems at multiple points in the product cycle from ideation to shipping.</li>
<li>8+ years of experience in product development in machine learning and related areas.</li>
<li>Hands-on experience developing algorithms and models using deep learning frameworks such as TensorFlow and PyTorch.</li>
<li>Active research in at least one of the following areas: LLM training, artificial intelligence, data science, information retrieval, machine learning, or natural language processing.</li>
<li>Demonstrated excellence in communication and cross-team collaboration.</li>
<li>Ability to think big while delivering measurable real-world impact through design and development.</li>
<li>Solid understanding of web documents, and of the concepts, methods, applications, and challenges of web data processing and understanding.</li>
<li>Experience with big data technologies (Spark, MapReduce, Cosmos, etc.) and near-real-time (NRT) systems.</li>
<li>Experience with Search or recommendations.</li>
</ul>
<p>#MicrosoftAI Applied Sciences IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, Deep learning frameworks, TensorFlow, PyTorch, Big Data, Spark, Mapreduce, Cosmos, NRT systems, Search, Recommendations</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-applied-scientist-32/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>78d2210e-b8c</externalid>
      <Title>Software Engineer III</Title>
      <Description><![CDATA[<p>ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. You&#39;ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won&#39;t just contribute. You&#39;ll make things happen–fast.</p>
<p>About the Role: We are seeking an experienced Software Engineer III (Data) to join our fast-paced, collaborative data team. In this role, you will have broad authority to drive the direction of our technographic data services, building world-class data pipelines and systems to process billions of signals and data points. This is an exciting opportunity to solve challenging problems and make a big impact as we invest in making technographics a first-class offering.</p>
<p>Responsibilities:</p>
<ul>
<li>Build and optimize big data pipelines to extract and process signals from the web, job postings, and other sources</li>
<li>Design and implement data architectures and storage solutions to efficiently handle massive data volumes</li>
<li>Collaborate closely with data scientists to support and integrate ML models into data workflows</li>
<li>Continuously improve data quality, performance, and scalability of our technographic data platform</li>
<li>Drive technical strategy and roadmap for the data processing infrastructure</li>
</ul>
<p>What We&#39;re Looking For:</p>
<ul>
<li>Extensive experience building and scaling big data pipelines and architectures from scratch</li>
<li>Deep expertise in big data frameworks (Hadoop, Spark) and the JVM stack (Java, Scala)</li>
<li>Strong software engineering fundamentals and ability to write efficient, high-quality code</li>
<li>Experience with entity recognition and NLP techniques a plus</li>
<li>Proven track record delivering results and driving projects in a fast-paced environment</li>
<li>Excellent collaboration and communication skills to work with data scientists, analysts and product teams</li>
<li>Passion for leveraging huge datasets to power valuable insights</li>
</ul>
<p>Ideal Background:</p>
<ul>
<li>5+ years of experience in software engineering roles.</li>
<li>Experience working with very large datasets and distributed systems</li>
<li>Familiarity building data pipelines at large tech companies or data-driven organisations</li>
<li>Bachelor&#39;s or advanced degree in Computer Science, Engineering or related technical field.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$112,000-$176,000 USD</Salaryrange>
      <Skills>big data pipelines, Hadoop, Spark, JVM stack, Java, Scala, entity recognition, NLP techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>ZoomInfo</Employername>
      <Employerlogo>https://logos.yubhub.co/zoominfo.com.png</Employerlogo>
      <Employerdescription>ZoomInfo is a Go-To-Market Intelligence Platform that provides AI-ready insights, trusted data, and advanced automation to over 35,000 companies worldwide.</Employerdescription>
      <Employerwebsite>https://www.zoominfo.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/zoominfo/jobs/8509478002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bethesda, Maryland, United States; Waltham, Massachusetts, United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>303dbf9f-c7f</externalid>
      <Title>Staff Product Manager</Title>
      <Description><![CDATA[<p>Join the team as our next Data Platform Product Manager in the Data Governance and Insights team.</p>
<p>This position owns Twilio&#39;s Data Governance initiatives across the company, and is based in India. You will work with many teams within Twilio to ensure safe customer data handling, supporting data privacy and compliance. This team manages data pipeline security and data reliability, and ensures appropriate access controls. We are also the bridge to the reporting systems trusted by customers, executives, and shareholders.</p>
<p>In this role, you’ll:</p>
<ul>
<li>Lead the product requirements required to build and operate a central data catalog as a metadata store.</li>
<li>Drive Data Governance initiative, working across various organizations and stakeholders.</li>
<li>Understand the needs of our customers for operational and analytical purposes, and execute on governance of the data pipeline with required access management to fulfill these requirements.</li>
<li>Craft and deliver a vision for data governance at Twilio, working side by side with other product managers and engineering counterparts across Twilio R&amp;D.</li>
</ul>
<p>Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn&#39;t followed a traditional path, don&#39;t let that stop you from considering Twilio.</p>
<p>We are always looking for people who will bring something new to the table!</p>
<p>Required:</p>
<ul>
<li>Someone with 10+ years of product management experience in a fast-paced company.</li>
<li>Strong background in Data Governance, having led at least one initiative that included cataloging metadata, data reliability, sensitive data classification and access management.</li>
<li>Worked on data platforms, customer engagement platforms, or streaming applications.</li>
<li>Proficiency in the big data ecosystem, e.g. Kafka, Spark, Presto/Athena, or similar technologies.</li>
<li>Technically savvy and experienced with the cloud, APIs, communications, enterprise software, data reliability, and ETL techniques.</li>
<li>You have a customer oriented approach. You have an amazing ability to understand the customer’s challenges and are able to articulate a vision to solve challenges to make an impact.</li>
<li>The ability to solicit customer requirements from many (often opposing) sources, prioritize them, and work with engineering and design to deliver.</li>
<li>You are a strategic problem solver and flourish operating in broad scope, from conception through continuous operation of 24x7 services.</li>
<li>You have solved sophisticated problems and have the aptitude to navigate uncharted waters.</li>
</ul>
<p>Desired:</p>
<ul>
<li>Collaborative approach and ability to work with distributed, cross-functional teams.</li>
<li>Great communication skills. You&#39;re equally at home on a Zoom call presenting to an audience of developers as you are on a Zoom call talking to users and then coming up with product requirements. Your best days are the ones where you do both on the same day.</li>
</ul>
<p>Bachelor’s degree in Computer Science, Engineering, or equivalent experience required.</p>
<p>This role will be remote, and based in India. Travel may be required to participate in project or team in-person meetings.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Data Governance, Metadata Store, Data Pipeline Security, Data Reliability, Access Management, Kafka, Spark, Presto/Athena, Cloud, APIs, Communications, Enterprise Software, ETL Techniques</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Twilio</Employername>
      <Employerlogo>https://logos.yubhub.co/twilio.com.png</Employerlogo>
      <Employerdescription>Twilio delivers innovative solutions to hundreds of thousands of businesses and empowers millions of developers worldwide to craft personalized customer experiences.</Employerdescription>
      <Employerwebsite>https://www.twilio.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/twilio/jobs/7424250?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>f3e660c6-9a8</externalid>
      <Title>Principal Software Engineer - Data, Personalization - Microsoft AI</Title>
      <Description><![CDATA[<p>As Microsoft continues to redefine the future of AI, we are seeking passionate engineers to tackle some of the most complex and impactful challenges of our time. Our vision is bold , to build intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure. This role focuses on building distributed data systems and APIs that power adaptive, context-aware experiences across Microsoft AI. We aim to make Copilot feel like your Copilot , responsive to your preferences, workflows, and goals , while preserving privacy, security, performance, and scale.</p>
<p>We are looking for a Principal Software Engineer to lead the design and development of distributed data infrastructure, APIs and personalization pipelines that drive Copilot’s intelligence. You will work across Microsoft AI and Copilot teams. You will possess a methodical approach to problem-solving, proficiency in backend technologies, a familiarity with applied AI and its unique challenges, and the ability to architect solutions that stand the test of time.</p>
<p>The right candidate is hands-on and enjoys building world-class consumer experiences and products in a fast-paced environment. A key skill is the judgment to make the right risk vs velocity and value decisions.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
<li>Build real-time and batch personalization engines that adapt Copilot’s behavior.</li>
<li>Collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled.</li>
<li>Optimize for performance, reliability, and cost across diverse workloads and geographies.</li>
<li>Ship high-quality, well-tested, secure, and maintainable code.</li>
<li>Find a path to get things done despite roadblocks, getting your work into the hands of users quickly and iteratively.</li>
<li>Enjoy working in a fast-paced, design-driven product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<p>Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>4+ years’ experience building scalable services, including securing applications and infrastructure on top of cloud infrastructure like Azure, AWS, or GCP.</li>
<li>3+ years’ experience with OSS data technology such as Kafka, Spark, or Flink.</li>
<li>Experience with large-scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
<li>Experience using Machine Learning frameworks, including using, deploying, and scaling large language models, either personally or professionally.</li>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems, and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
<li>Proven ability to collaborate and contribute to a positive, inclusive work environment, fostering knowledge sharing and growth within the team.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Azure, AWS, GCP, Kafka, Spark, Flink, Machine Learning, AI, Cloud infrastructure, Data technology, AI platforms, Frameworks, APIs, Language learning models</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-6/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>28608bb0-b72</externalid>
      <Title>Software Engineer - Full Stack</Title>
      <Description><![CDATA[<p>Help millions of people find the right local businesses and services at the moments that matter most. At Bing Places, we build the systems that power local discovery across Microsoft experiences. You’ll work at the intersection of engineering, data, and product to improve the quality, relevance, and trustworthiness of local search at global scale.</p>
<p>In this role, you’ll build and operate scalable systems that power accurate and trustworthy local search experiences across Microsoft. As a Software Engineer II on Bing Places, you’ll collaborate with engineers, data scientists, and product partners to integrate diverse data sources, improve ranking quality, and ship features used by millions of customers.</p>
<p>The role offers solid growth opportunities as you deepen your expertise in distributed systems and geospatial data. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Responsibilities:</p>
<ul>
<li>Contribute to architecture, engineering standards, and development practices across the team.</li>
<li>Work with appropriate stakeholders to determine user requirements for a set of features.</li>
<li>Contribute to the identification of dependencies and the development of design documents for a product area with little oversight.</li>
<li>Create and implement code for a product, service, or feature, reusing code as applicable.</li>
<li>Contribute to efforts to break down larger work items into smaller work items and provide estimation.</li>
<li>Act as a Designated Responsible Individual (DRI) working on-call to monitor system/product feature/service for degradation, downtime, or interruptions, and gain approval to restore system/product/service for simple problems.</li>
<li>Remain current in skills by investing time and effort into staying abreast of developments that will improve the availability, reliability, efficiency, observability, and performance of products, while also driving consistency in monitoring and operations at scale.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<p>Bachelor’s Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>
<p>Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</p>
<p>Preferred Qualifications:</p>
<ul>
<li>Master’s Degree in Computer Science or related technical field AND 3+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>1+ years of experience with data engineering leveraging tools such as Apache Hadoop or Spark, or equivalent experience.</li>
<li>Experience with Azure Cloud and Azure Data Factory (ADF).</li>
<li>3+ years of experience in problem solving, design, coding, and debugging.</li>
<li>Demonstrated experience with products that involve high-availability/reliability and low-latency systems.</li>
</ul>
<p>#MicrosoftAI</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>Base pay range for this role across the U.S. is USD $100,600 – $199,000 per year.</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Apache Hadoop, Spark, Azure Cloud, Azure Data Factory (ADF)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/software-engineer-full-stack-2/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>78718a5a-14f</externalid>
      <Title>Solutions Architect Spain</Title>
      <Description><![CDATA[<p>At Databricks, our core values are at the heart of everything we do; creating a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>
<p>We provide a user-friendly and intuitive platform that makes it easy to turn insights into action and fosters a culture of creativity, experimentation, and continuous improvement.</p>
<p>As a Solutions Architect Spain, you will be an essential part of this mission, using your technical expertise to demonstrate how our Data &amp; Intelligence Platform can help customers solve their complex data challenges.</p>
<p>You&#39;ll work with a collaborative, customer-focused team who values innovation and creativity, using your skills to create customised solutions to help our customers achieve their goals and guide their businesses forward.</p>
<p>Join us in our quest to change how people work with data and make a better world!</p>
<p>The impact you will have:</p>
<ul>
<li>Form successful relationships with clients in Spain, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>
<li>Operate as an expert in big data analytics to excite customers about Databricks. You will develop into a ‘champion’ and trusted advisor on multiple issues of architecture, design, and implementation, leading to the successful adoption of the Databricks Data Intelligence Platform.</li>
<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>
<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions.</li>
<li>Develop customer relationships and build internal partnerships with account executives and teams.</li>
<li>Prior experience coding in a core programming language (e.g., Python, Java, Scala) and willingness to learn a base level of Spark.</li>
<li>Proficiency with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>
<li>Experience in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring an ability to context switch across levels of technical depth.</li>
</ul>
<p>Mandatory requirements:</p>
<ul>
<li>The location for the role should be in the Madrid region (i.e. within commutable distance for a hybrid schedule).</li>
<li>Flexibility to travel (up to 30%, as required for customer meetings, events, and trainings).</li>
<li>Business proficiency in both Spanish and English.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, Big Data Analytics, Spark, Cloud Computing</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organisations worldwide rely on its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8506127002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Madrid</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>a5bf2d0f-61e</externalid>
      <Title>Data Processing Product Area Engineer</Title>
      <Description><![CDATA[<p>In the Data Processing Product Area, we build the platforms and tools that power how Spotify processes, manages, and consumes data at scale.</p>
<p>Our work enables teams across Spotify to solve complex data challenges with confidence. We focus on improving efficiency, quality, and innovation, helping engineers and data practitioners move faster while maintaining reliability and cost awareness.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Collaborate with product and engineering teams to build and operate infrastructure that enables reliable, scalable data processing</li>
<li>Design and develop tools that improve developer productivity and reduce the overhead of working with data pipelines</li>
<li>Contribute to platform reliability, performance, and cost efficiency across large-scale data systems</li>
<li>Partner with data experts and squads across Spotify to evolve best practices, standards, and tooling</li>
<li>Support the adoption and migration of data pipelines onto new platform capabilities</li>
<li>Help shape the future of Spotify’s data ecosystem through continuous improvement and experimentation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>Experience building backend systems using Java</li>
<li>Familiarity with platform engineering concepts and working in data-intensive environments</li>
<li>Comfortable working with large-scale data using SQL and modern analytics platforms such as BigQuery</li>
<li>Experience with distributed data processing frameworks such as Spark, Flink, Beam, or similar</li>
<li>Experienced with cloud infrastructure, containerized applications, and DevOps practices</li>
<li>Working knowledge of Kubernetes and its core concepts</li>
<li>Experience building or maintaining data pipelines using Scala and/or Python</li>
<li>Care about code quality, testing, and delivering reliable systems</li>
<li>Enjoy working in collaborative environments and navigating open-ended problems</li>
<li>Curious, experiment-driven, and motivated by using data to inform decisions</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Flexible working arrangements</li>
<li>Opportunities for professional growth and development</li>
<li>Opportunity to work with a leading technology company</li>
<li>Collaborative and dynamic work environment</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, platform engineering, SQL, BigQuery, Spark, Flink, Beam, Kubernetes, Scala, Python</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Spotify</Employername>
      <Employerlogo>https://logos.yubhub.co/spotify.com.png</Employerlogo>
      <Employerdescription>Spotify is a music streaming service that offers users access to millions of songs, podcasts, and videos. It was founded in 2006 and is headquartered in Stockholm, Sweden.</Employerdescription>
      <Employerwebsite>https://www.spotify.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/spotify/db6450c7-5017-4aa5-8a64-52e39f1ed525?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>685d83d0-48a</externalid>
      <Title>Principal Software Engineer - Data, Personalization - Microsoft AI</Title>
      <Description><![CDATA[<p>As Microsoft continues to redefine the future of AI, we are seeking passionate engineers to tackle some of the most complex and impactful challenges of our time. Our vision is bold , to build intelligent systems that deeply understand users and adapt across agents, applications, services, and infrastructure. This role focuses on building distributed data systems and APIs that power adaptive, context-aware experiences across Microsoft AI. We aim to make Copilot feel like your Copilot , responsive to your preferences, workflows, and goals , while preserving privacy, security, performance, and scale. We are looking for a Principal Software Engineer to lead the design and development of distributed data infrastructure, APIs and personalization pipelines that drive Copilot’s intelligence. You will work across Microsoft AI and Copilot teams. You will possess a methodical approach to problem-solving, proficiency in backend technologies, a familiarity with applied AI and its unique challenges, and the ability to architect solutions that stand the test of time. The right candidate is hands-on and enjoys building world-class consumer experiences and products in a fast-paced environment. A key skill is the judgment to make the right risk vs velocity and value decisions. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50- mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. 
This expectation is subject to local law and may vary by jurisdiction. Responsibilities:</p>
<ul>
<li>Architect scalable, low-latency systems for ingesting, processing, and serving personalized signals.</li>
<li>Design data models and APIs that enable Copilot to reason about user context, preferences, and history.</li>
<li>Build real-time and batch personalization engines that adapt Copilot’s behavior.</li>
<li>Collaborate with privacy, security, and responsible AI teams to ensure personalization is safe, transparent, and user-controlled.</li>
<li>Optimize for performance, reliability, and cost across diverse workloads and geographies.</li>
<li>Ship high-quality, well-tested, secure, and maintainable code.</li>
<li>Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</li>
<li>Enjoy working in a fast-paced, design-driven product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>
<li>4+ years’ experience building scalable services, including securing applications and infrastructure on top of cloud infrastructure like Azure, AWS, or GCP.</li>
<li>3+ years’ experience in OSS data technology, such as Kafka, Spark, Flink.</li>
<li>Experience with large-scale data systems.</li>
<li>Experience working with AI platforms, frameworks, and APIs.</li>
<li>Experience using Machine Learning frameworks, including experience using, deploying, and scaling large language models, either personally or professionally.</li>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Passion for learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web, data systems, and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
<li>Proven ability to collaborate and contribute to a positive, inclusive work environment, fostering knowledge sharing and growth within the team.</li>
</ul>
<p>#MicrosoftAI</p>
<p>Software Engineering IC6: The typical base pay range for this role across the U.S. is USD $163,000 – $296,400 per year. There is a different range applicable to specific work locations within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $220,800 – $331,200 per year. Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$163,000 – $296,400 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Azure, AWS, GCP, Kafka, Spark, Flink, large scale data systems, AI platforms, frameworks, APIs, Machine Learning frameworks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/principal-software-engineer-data-personalization-microsoft-ai-5/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>dbee541c-ce7</externalid>
      <Title>Software Engineer III, Community Builders</Title>
      <Description><![CDATA[<p>We are seeking a talented Backend Engineer to join our team. As a key contributor, you will be responsible for designing, developing, and maintaining backend application services, ensuring the performance, security, and scalability of our systems. You will work collaboratively with product managers, designers, data scientists, and other engineers to deliver high-quality products. Your responsibilities will include contributing to the full development cycle, writing design documents and code, and receiving valuable feedback on your work. You will continuously learn and improve your technical and non-technical abilities.</p>
<p>Technologies We Use</p>
<p>Our teams leverage a diverse and modern technology stack. While specific technologies may vary by team, we generally work with:</p>
<ul>
<li>Languages: Go, Python</li>
<li>Frameworks: Spark, Kafka, Airflow</li>
<li>Datastores: BigQuery, Redis, Cassandra, PostgreSQL</li>
<li>Tools: Kubernetes, Docker</li>
</ul>
<p>What We Are Looking For</p>
<ul>
<li>A Bachelor&#39;s degree or higher in a quantitative or computer science-related field.</li>
<li>2+ years of software engineering experience in a scalable computing environment.</li>
<li>A passion for learning and adapting to new technologies.</li>
<li>Strong communication and collaboration skills, with the ability to work effectively with diverse stakeholders.</li>
<li>Entrepreneurial spirit: you are self-directed, innovative, and biased towards action in fast-paced environments. You love to build new things, thrive in ambiguity, and can easily navigate failure.</li>
</ul>
<p>Benefits</p>
<ul>
<li>Comprehensive Healthcare Benefits and Income Replacement Programs</li>
<li>401k with Employer Match</li>
<li>Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support</li>
<li>Family Planning Support</li>
<li>Gender-Affirming Care</li>
<li>Mental Health &amp; Coaching Benefits</li>
<li>Flexible Vacation &amp; Paid Volunteer Time Off</li>
<li>Generous Paid Parental Leave</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$164,200-$229,900 USD</Salaryrange>
      <Skills>Go, Python, Spark, Kafka, Airflow, BigQuery, Redis, Cassandra, PostgreSQL, Kubernetes, Docker</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Reddit Inc.</Employername>
      <Employerlogo>https://logos.yubhub.co/redditinc.com.png</Employerlogo>
      <Employerdescription>Reddit is a social news and discussion website with over 121 million daily active unique visitors and 100,000+ active communities.</Employerdescription>
      <Employerwebsite>https://www.redditinc.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/reddit/jobs/7767702?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - United States</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>492042ed-9ee</externalid>
      <Title>Member of Technical Staff - Data Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad , to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all , consumers, businesses, developers , so that everyone can realize its benefits.</p>
<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time. You will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>
<p>The Data Platform Engineering team is responsible for building core data pipelines that help fine-tune models and support introspection and retrospection of data, so that we can constantly evolve and improve human-AI interactions.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>
<li>Ship high-quality, well-tested, secure, and maintainable code.</li>
<li>Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</li>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years’ experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years’ experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>
<li>4+ years’ technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
<li>3+ years’ experience with data governance, data compliance and/or data security.</li>
<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>
<li>Extensive use of datastores such as RDBMS and key-value stores.</li>
<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>
<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, data governance, data compliance, data security, Azure, AWS, GCP, RDBMS, key-value stores, distributed systems, containerization, networking, web development, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a subsidiary of Microsoft Corporation, a multinational technology company headquartered in Redmond, Washington.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineer-6/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>3c9b96bf-348</externalid>
      <Title>Software Engineer II</Title>
      <Description><![CDATA[<p>Imagine helping millions of users discover the best local businesses and services, right when they need them. At Bing Places, we’re on a mission to improve the quality and relevance of local search results across Microsoft platforms. You’ll be part of a team that blends data science, engineering, and product thinking to deliver intelligent, high-impact experiences that shape how people interact with the world around them.</p>
<p>As a Software Engineer II in Bing Places, you will design and build scalable systems that enhance the accuracy, freshness, and trustworthiness of local search results. You’ll collaborate across disciplines to integrate diverse data sources, develop intelligent ranking algorithms, and ship features that directly impact millions of users. This opportunity will allow you to accelerate your career growth, deepen your understanding of geospatial and business data, and sharpen your skills in distributed systems and machine learning.</p>
<p>We offer flexible work arrangements, including partial work-from-home options. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$100,600 - $199,000 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Apache Hadoop, Spark, Azure Cloud, Azure Data Factory (ADF), Azure Machine Learning (AML)</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft is a multinational technology company that develops, manufactures, licenses, and supports a wide range of software products, services, and devices.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/software-engineer-ii-19/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bellevue</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>2fa970ee-3db</externalid>
      <Title>Member of Technical Staff - Data Engineer</Title>
      <Description><![CDATA[<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>
<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time, and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>
<p>The Data Platform Engineering team is responsible for building core data pipelines that help fine-tune models and support introspection and retrospection of data, so that we can constantly evolve and improve human-AI interactions.</p>
<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>
<p>Responsibilities:</p>
<ul>
<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>
<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>
<li>Ship high-quality, well-tested, secure, and maintainable code.</li>
<li>Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</li>
<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>
<li>Embody our Culture and Values.</li>
</ul>
<p>Qualifications:</p>
<p>Required Qualifications:</p>
<ul>
<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years’ experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years’ experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li>4+ years’ technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>
<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>
<li>3+ years’ experience with data governance, data compliance and/or data security.</li>
<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>
<li>Extensive use of datastores such as RDBMS and key-value stores.</li>
<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>
<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>
<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>
<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>
<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>
<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>
<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$139,900 - $274,800 per year</Salaryrange>
      <Skills>Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, data governance, data compliance, data security, Azure, AWS, GCP, RDBMS, key-value stores, distributed systems, containerization, networking, web development, AI</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a subsidiary of Microsoft Corporation, a multinational technology company.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-data-engineer-4/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>72bdbfc7-7b8</externalid>
      <Title>Senior Software Engineer</Title>
      <Description><![CDATA[<p>The Ads Data Platform Team, part of Microsoft AI, is hiring a Senior Software Engineer. This role is available in Redmond, WA. Our team powers the backbone of Microsoft’s global ads marketplace: gathering, storing, and enriching over half a trillion ad-serving events every day. We build data platforms that fuel business analytics, machine learning models, and real-time reporting at massive scale.</p>
<p>As part of our team, you’ll:</p>
<ul>
<li>Design and operate high-scale, high-performance systems that process billions of events through near-real-time and offline pipelines.</li>
<li>Build data applications that directly impact Microsoft Ads’ double-digit annual growth.</li>
<li>Work on cutting-edge technologies in distributed systems, machine learning, and big data.</li>
</ul>
<p>Online advertising is one of the fastest-growing businesses on the Internet, with $70B of a $600B market already online, and we’re just getting started. You’ll tackle technical challenges that demand computational intelligence, scalable algorithms, and interdisciplinary expertise across data mining, optimization, and economics.</p>
<p>Be part of a results-driven, inclusive culture where your ideas matter and your work creates measurable business impact. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$119,800 - $234,700 per year</Salaryrange>
      <Skills>C, C++, C#, Java, JavaScript, Python, Azure, Machine learning, online system design, implementation and qualification, Distributed Systems, Big Data Technologies, Spark, Hadoop, HDFS, Kafka, Flink, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a subsidiary of Microsoft Corporation, a multinational technology company. It focuses on developing artificial intelligence and machine learning technologies.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/senior-software-engineer-133/?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Redmond</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>551df03a-e42</externalid>
      <Title>Engineering Manager - Batch Compute Infrastructure</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>We are seeking an experienced Engineering Manager to lead our Batch Compute Infrastructure team at Stripe. As a key member of our engineering organization, you will be responsible for defining the multi-year roadmap for Stripe&#39;s Batch Compute Infrastructure, leading complex architectural shifts and modernization.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Drive Strategic Vision: Define the multi-year roadmap for Stripe’s Batch Compute Infrastructure, leading complex architectural shifts and modernization.</li>
<li>Lead and Scale: Build, mentor, and aggressively scale a high-performing team of engineers, proactively investing in their career development and fostering a culture of operational excellence.</li>
<li>Ensure Operational Rigor: Maintain unwavering reliability for a Tier-0 infrastructure processing tens of thousands of daily workloads, proactively mitigating risks and managing complex on-call telemetry.</li>
<li>Cross-Functional Orchestration: Collaborate deeply with data platform teams, finance, and user groups to define compute efficiency metrics, execute massive-scale cost optimization strategies, and guarantee compliance with global financial regulations.</li>
<li>Technical Stewardship: Provide technical guidance in architecture reviews, evaluating critical cost, performance, and reliability trade-offs in distributed systems design involving Hadoop, Spark, AWS cloud primitives, and modern metastores.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>10+ years of professional software development and engineering experience.</li>
<li>3+ years of direct engineering management experience, successfully building and operating high-velocity technical teams.</li>
<li>Deep technical background in building, scaling, and maintaining large-scale distributed data systems or Tier-0 infrastructure using open-source tools (e.g., Hadoop, Spark, Celeborn, Airflow, Kafka).</li>
<li>Proven track record of driving significant infrastructure efficiency, managing capacity planning, and making data-driven cost-performance trade-offs.</li>
<li>Experience working effectively in highly cross-functional, global organizations.</li>
</ul>
<p><strong>Preferred Requirements</strong></p>
<ul>
<li>Experience managing remote or geographically distributed engineering teams.</li>
<li>Familiarity with managing a massive fleet of Linux servers, on-premise Hadoop clusters, and modern cloud data architectures (e.g., AWS S3, Graviton).</li>
<li>Demonstrated ability to navigate strategic ambiguity and deliver complex, multi-quarter infrastructural projects from inception to completion.</li>
<li>Deep empathy for internal data users with a passion for building robust developer tooling and abstractions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Hadoop, Spark, Celeborn, Airflow, Kafka, Linux, AWS, Cloud Computing, Remote Engineering Management, Distributed Systems Design, Cloud Architecture, DevOps, Agile Methodologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7827623?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Bengaluru</Location>
      <Country></Country>
      <Postedate>2026-04-24</Postedate>
    </job>
    <job>
      <externalid>8610ea3d-93b</externalid>
      <Title>Cloud Platform Engineer</Title>
      <Description><![CDATA[<p>The Business Development/Management Technology team at FIC &amp; Risk Technology is building and operating platforms that support recruiting, hiring, and onboarding of investment professionals. We are currently integrating multiple legacy and new systems into a unified, cloud-native platform to standardize processes, workflows, and data models across the organisation.</p>
<p>This integration will enable seamless collaboration between teams and provide reliable, scalable data for analytics and reporting. We are looking for a Cloud Platform Engineer to design, build, and operate our AWS-based infrastructure and data platforms, using modern DevOps practices, infrastructure as code, and secure, well-engineered services in Python and C#.</p>
<p>The successful candidate will collaborate with global technology and business teams to design cloud-native solutions that support business development and onboarding workflows. They will partner with global stakeholders to understand requirements and translate them into secure, scalable AWS architectures and platform capabilities.</p>
<p>Key responsibilities include leading the end-to-end delivery of cloud and platform features, including design, implementation (Python/C#), infrastructure as code, testing, and deployment using DevOps practices.</p>
<p>We are looking for a highly skilled engineer with 6+ years of experience in software or platform engineering, with significant time spent building and operating solutions in cloud environments (AWS preferred).</p>
<p>Required skills and experience:</p>
<ul>
<li>Strong hands-on programming experience in Python and C#, with a solid understanding of object-oriented design, design patterns, service-oriented/microservices architectures, concurrency, and SOLID principles.</li>
<li>Proven experience designing and operating AWS-based platforms (e.g., EC2, ECS/EKS, Lambda, S3, RDS, IAM) using infrastructure as code (Terraform, CloudFormation, or CDK).</li>
<li>Practical experience implementing DevOps practices and CI/CD pipelines (e.g., Jenkins, GitHub Actions, Azure DevOps), including automated testing, security scanning, and deployment.</li>
<li>Experience supporting data science and analytics platforms, including orchestration tools such as Airflow, distributed processing engines such as Spark, and cloud-native data pipelines.</li>
<li>Good understanding of SQL and core database concepts; familiarity with AWS analytics services (e.g., Glue, EMR, Redshift, Athena) is a plus.</li>
<li>Awareness of cloud security best practices, including IAM, network security, data encryption, and secure configuration management.</li>
<li>Working knowledge of networking across on-premises and cloud environments, including VPC design, subnets, routing, VPNs/Direct Connect, load balancing, DNS, and network security controls.</li>
<li>Strong problem-solving and analytical skills; demonstrated ability to take ownership, deliver in a fast-paced environment, and collaborate effectively with global teams.</li>
<li>Excellent communication skills, with the ability to work closely with both technical and non-technical stakeholders.</li>
</ul>
<p>Desirable:</p>
<ul>
<li>Experience estimating, monitoring, and optimizing AWS infrastructure costs, including tools such as AWS Cost Explorer, AWS Budgets, and cost-allocation tagging strategies.</li>
<li>Experience designing and operating workloads across multiple cloud environments and on-premises, using centralized policies, governance, and controls to support business-aligned teams.</li>
<li>Experience with additional big data tools or platforms (e.g., Kafka, Databricks, Snowflake, Flink).</li>
<li>Familiarity with Capital Markets concepts and operating models.</li>
</ul>
<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>
<p>Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package.</p>
<p>When finalising an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$175,000 to $250,000</Salaryrange>
      <Skills>AWS, Python, C#, DevOps, Infrastructure as Code, Cloud Security, SQL, Database Concepts, Networking, Airflow, Spark, Kafka, Databricks, Snowflake, Flink, Capital Markets</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>FIC &amp; Risk Technology</Employername>
      <Employerlogo>https://logos.yubhub.co/mlp.eightfold.ai.png</Employerlogo>
      <Employerdescription>FIC &amp; Risk Technology is a technology company that provides solutions for financial institutions.</Employerdescription>
      <Employerwebsite>https://mlp.eightfold.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://mlp.eightfold.ai/careers/job/755955139979?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York, New York, United States of America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>b68ff4cc-e74</externalid>
      <Title>Data Engineer, Safeguards</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>Anthropic is looking for a Data Engineer to join the Safeguards team and build the data foundations that keep our AI systems safe. The Safeguards team works to monitor models, prevent misuse, and ensure user well-being.</p>
<p>You&#39;ll design and build the data pipelines, warehousing solutions, and analytical tooling that power our safety and trust efforts at scale. You&#39;ll work closely with engineers, data scientists, and policy teams to ensure the Safeguards organization has the data it needs to detect abuse patterns, measure the effectiveness of safety interventions, and make informed decisions about model behavior and enforcement.</p>
<p>This is a high-impact role where your work will directly support Anthropic&#39;s mission to develop AI that is safe and beneficial.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design, build, and maintain scalable data pipelines that support safety monitoring, abuse detection, and enforcement workflows</li>
<li>Develop and optimize data models and warehousing solutions to enable efficient analysis of large-scale usage and safety data</li>
<li>Build and maintain dashboards and reporting infrastructure that give Safeguards teams visibility into model behavior, misuse patterns, and enforcement outcomes</li>
<li>Collaborate with engineers to integrate data from multiple sources, including model outputs, user reports, and automated classifiers, into a unified analytical layer</li>
<li>Implement data quality frameworks, monitoring, and alerting to ensure the reliability of safety-critical data</li>
<li>Partner with research teams to surface data insights that inform model improvements and safety interventions</li>
<li>Develop self-service data tooling that enables stakeholders to explore safety data and generate reports independently</li>
<li>Contribute to data governance practices, including access controls, retention policies, and privacy-compliant data handling</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 3+ years of experience in data engineering, analytics engineering, or a related role</li>
<li>Are proficient in SQL and Python, with experience building and maintaining ETL/ELT pipelines</li>
<li>Have hands-on experience with modern data stack tools such as dbt, Airflow, Spark, or similar orchestration and transformation frameworks</li>
<li>Have worked with cloud data platforms (BigQuery, Redshift, Snowflake, or similar)</li>
<li>Are comfortable building dashboards and data visualizations using tools like Looker, Tableau, or Metabase</li>
<li>Communicate clearly and can translate complex data concepts for both technical and non-technical audiences</li>
<li>Are results-oriented, flexible, and willing to pick up slack even when it falls outside your job description</li>
<li>Care about the societal impacts of AI and are motivated by safety work</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience with trust &amp; safety, integrity, fraud, or abuse detection data systems</li>
<li>Experience with large-scale event streaming systems (Kafka, Pub/Sub, Kinesis)</li>
<li>Built data infrastructure that supports ML model monitoring or evaluation</li>
<li>A background in statistical analysis, or experience collaborating closely with data scientists</li>
<li>Developed internal tooling or self-service analytics platforms</li>
</ul>
<p><strong>Strong candidates need not have:</strong></p>
<ul>
<li>A formal degree in Computer Science or a related field; we value practical experience and demonstrated ability over credentials</li>
<li>Prior experience in AI or machine learning; you&#39;ll learn the domain-specific context on the job</li>
<li>Previous experience at an AI safety or research organization</li>
<li>Deep expertise across every tool listed above; familiarity with a subset and a willingness to learn is enough</li>
</ul>
<p><strong>Logistics</strong></p>
<ul>
<li>Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience.</li>
<li>Required field of study: a field relevant to the role, as demonstrated through coursework, training, or professional experience.</li>
<li>Minimum years of experience: correlates with the internal job level requirements for the position.</li>
<li>Location-based hybrid policy: currently, we expect all staff to be in one of our offices at least 25% of the time; however, some roles may require more time in our offices.</li>
<li>Visa sponsorship: we do sponsor visas! We aren&#39;t able to successfully sponsor visas for every role and every candidate, but if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>
</ul>
<p><strong>How we&#39;re different</strong></p>
<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>
<p><strong>Come work with us!</strong></p>
<p>Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£170,000-£220,000 GBP</Salaryrange>
      <Skills>SQL, Python, ETL/ELT pipelines, dbt, Airflow, Spark, cloud data platforms, BigQuery, Redshift, Snowflake, Looker, Tableau, Metabase</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5156057008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>London, UK</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>33eb12c1-537</externalid>
      <Title>Solutions Architect</Title>
      <Description><![CDATA[<p>Join our team as a Solutions Architect and play a crucial role in helping our customers solve their complex data challenges. As a key member of our Field Engineering team, you will work closely with customers to understand their needs and develop customized solutions using our Data Intelligence Platform.</p>
<p>We&#39;re looking for someone with a strong technical background in big data analytics, who can operate as a trusted advisor to our customers. You will be responsible for developing successful relationships with clients, providing technical and business value, and scaling best practices in your field.</p>
<p>As a Solutions Architect, you will:</p>
<ul>
<li>Form successful relationships with clients throughout your assigned territory</li>
<li>Operate as an expert in big data analytics to excite customers about Databricks</li>
<li>Develop into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation</li>
<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications</li>
<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist</li>
</ul>
<p>To succeed in this role, you will need:</p>
<ul>
<li>Experience with coding in a core programming language (e.g., Python, Java, Scala)</li>
<li>A base level of Spark knowledge</li>
<li>A builder mindset with a passion for quick prototyping and experience in vibe coding</li>
<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s)</li>
<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences</li>
<li>Joy in drilling deeper on tough technical questions and solution architecture while always keeping the big picture in mind</li>
</ul>
<p>Fluency in German is a strong advantage, but not required.</p>
<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit our website.</p>
<p>Our Commitment to Diversity and Inclusion</p>
<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>
<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, Spark, Big Data Analytics, Cloud Computing, Machine Learning, Data Science, Cloud Architecture</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8500326002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Munich, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>e58b08f7-c31</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p>As a Senior Data Engineer on the Analytics Team, you will collaborate with stakeholders across the company to design, build and implement data pipelines and models that enable our next generation of technology to be deployed around the world. You will have a hand in helping shape the data platform vision at Anduril.</p>
<p>We&#39;re looking for software and data engineers who are seeking high-impact, collaborative roles focused on driving operational execution. Ideally, you are looking to learn what it takes to build the next generation of defense technology.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Leading the design and roadmap for our data platform</li>
<li>Partnering with operations, product, and engineering to advocate best practices and build supporting systems and infrastructure for the various data needs</li>
<li>Owning the ingest and egress frameworks for data pipelines that stitch together various data sources in order to produce valuable data products that drive the business</li>
<li>Managing a large user base and providing true data self-service at scale</li>
</ul>
<p>We use Palantir Foundry as our central hub for data-driven applications, visualizations and large-scale data analysis across the Anduril org. We also use SQLMesh for data transformations, Athena for querying data, Apache Iceberg as our table format, and Flyte for orchestration.</p>
<p>Required qualifications include:</p>
<ul>
<li>5+ years of experience in a data engineering role building products, ideally in a fast-paced environment</li>
<li>Good foundations in Python or another language</li>
<li>Experience with Spark, PySpark, SQL, and dbt</li>
<li>Experience with enterprise data systems like Palantir Foundry</li>
<li>Experience with, or interest in learning how to develop, data services and data products</li>
</ul>
<p>The salary range for this role is $166,000-$220,000 USD.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$166,000-$220,000 USD</Salaryrange>
      <Skills>Python, Spark, PySpark, SQL, dbt, Palantir Foundry, SQLMesh, Athena, Apache Iceberg, Flyte</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a defense technology company working to solve big problems in defense.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4587312007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8a3caae4-044</externalid>
      <Title>Member of Technical Staff - Imagine Model</Title>
      <Description><![CDATA[<p>As a Member of Technical Staff on the Imagine Model Team, you will develop cutting-edge AI experiences beyond text, with a strong focus on enabling high-fidelity understanding and generation across image and video modalities. Responsibilities span data curation, modeling, training, inference serving, and product integration, covering both pretraining and post-training phases. You will collaborate closely with product teams to push model frontiers and deliver exceptional end-to-end user experiences.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Creating and driving engineering agendas to advance multimodal capabilities</li>
<li>Improving data quality through annotation, filtering, augmentation, synthetic generation, captioning, and in-depth data studies</li>
<li>Designing evaluation frameworks, metrics, benchmarks, evals, and reward models tailored to image/video/audio quality and coherence</li>
<li>Implementing efficient algorithms for state-of-the-art model performance</li>
<li>Developing scalable data collection and processing pipelines for multimodal (primarily image/video-focused) datasets</li>
</ul>
<p>The ideal candidate will have:</p>
<ul>
<li>A track record of leading studies that significantly improve neural network capabilities and performance through better data or modeling</li>
<li>Experience in data-driven experiment design, systematic analysis, and iterative model debugging</li>
<li>Experience developing or working with large-scale distributed machine learning systems</li>
<li>The ability to deliver optimal end-to-end user experiences</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>data curation, modeling, training, inference serving, product integration, large-scale distributed machine learning systems, SFT, RL, evals, human/synthetic data collection, agentic systems, Python, JAX/XLA, PyTorch, Rust/C++, Spark, Ray</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/xai.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.xai.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/5051985007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Palo Alto, CA; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>63a79841-36e</externalid>
      <Title>Solutions Architect (Vietnam)</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re seeking a Solutions Architect to join our Field Engineering team in Vietnam. As a key member of our team, you will work closely with customers to understand their complex data challenges and provide technical expertise to demonstrate how our Data Intelligence Platform can help them solve these issues.</p>
<p>You will form successful relationships with clients throughout Vietnam, providing technical and business value to Databricks customers in collaboration with Account Executives. You will operate as an expert in big data analytics, developing into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Developing customer relationships and building internal partnerships with account executives and teams</li>
<li>Engaging customers in technical sales, challenging their questions, guiding clear outcomes, and communicating technical and value propositions</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Prior experience with coding in a core programming language (e.g., Python, Java, Scala) and willingness to learn a base level of Spark</li>
<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s)</li>
<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences requiring an ability to context switch in levels of technical depth</li>
<li>Proficiency in the Vietnamese language is required as this role serves clients based in Vietnam and involves direct customer communications in the Vietnamese language</li>
</ul>
<p>In return, you will have the opportunity to grow your knowledge and expertise to the level of a technical and/or industry specialist, and contribute to the success of our customers and the growth of our organization.</p>
<p>If you&#39;re passionate about working with data and AI, and want to make a real impact, we encourage you to apply for this exciting opportunity.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, Big Data Analytics, Spark, Cloud Computing, Data Science, Machine Learning, Data Engineering, Data Architecture, Cloud Security, DevOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organizations worldwide rely on its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8472732002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>7b97bd88-535</externalid>
      <Title>Named Core Account Executive - Public Sector</Title>
      <Description><![CDATA[<p>We&#39;re looking for a passionate and results-driven Enterprise Account Executive to help our government partners harness the full power of the Databricks platform. This is a pioneering sales role within our Public Sector Sales team, reporting directly to the Senior Sales Director. You&#39;ll work at the forefront of digital transformation in the Public Sector space, helping agencies reimagine how they deliver outcomes and play a pivotal role in shaping Databricks&#39; presence across the country.</p>
<p>As a successful candidate, you are a creative, energetic self-starter who understands the sales process. You know how to sell innovation and change through customer vision expansion and can drive deals forward to compress decision cycles. You love understanding a product in depth and are passionate about communicating its value to Customers and System Integrators. Always hunting for new opportunities, you will be asked to close net new accounts while maintaining existing accounts. Along with the chance to close an exciting deal, we also offer accelerators above 100% quota accomplishment.</p>
<p>The Impact You Will Have:</p>
<ul>
<li>Driving Consumption: Help customers derive value from the platform by identifying key use cases and increasing usage.</li>
<li>Champion real-world change: Lead initiatives that make a measurable impact, from accelerating innovation in health and education to supporting sustainability and economic development.</li>
<li>Shape Databricks’ Public Sector footprint: Identify, structure, and close strategic opportunities that align our capabilities with Australia’s digital priorities.</li>
<li>Inspire through connection: Build trusted relationships with senior decision-makers, tell compelling stories about our impact, and influence at the highest levels.</li>
<li>Execute with excellence: Manage the end-to-end sales cycle, from prospecting and initial engagement to closing transformative deals and driving platform adoption.</li>
<li>Collaborate for success: Work with solution architects, customer success, and global teams to deliver solutions that empower customers and ensure long-term success.</li>
</ul>
<p>What We Look For:</p>
<ul>
<li>A passion for impact: you’re motivated by helping organisations use technology for the public good.</li>
<li>Proven experience in enterprise or Public Sector sales, ideally in Cloud, Data, or SaaS.</li>
<li>Strategic and structured in approach, with strong execution and accountability.</li>
<li>Excellent communication and relationship-building skills with senior stakeholders.</li>
<li>Growth mindset, creativity, and adaptability in dynamic, innovative environments.</li>
<li>Knowledge of Spark and Big Data is highly desirable</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>public sector sales, cloud sales, data sales, sales strategy, account management, customer success, relationship building, communication skills, spark, big data, ai, machine learning, data analytics</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8441879002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Melbourne, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a9a579af-fdc</externalid>
      <Title>Sr. Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we are seeking a Senior Solutions Architect to join our Field Engineering team. As a key member of our team, you will work closely with customers to understand their complex data challenges and develop customized solutions using our Data Intelligence Platform.</p>
<p>Our team is responsible for demonstrating the value of our platform to customers and providing them with the necessary expertise to succeed. We are looking for someone who is passionate about data and has a strong technical background in software engineering.</p>
<p>In this role, you will have the opportunity to work with a variety of customers across different industries and geographies. You will also have the chance to contribute to the development of our technical community engagement initiatives, including customer-facing collateral and workshops.</p>
<p>We offer a competitive salary and benefits package, as well as opportunities for professional growth and development. If you are a motivated and experienced software engineer looking for a new challenge, we encourage you to apply.</p>
<p>Responsibilities:</p>
<ul>
<li>Develop customer engagement strategies in partnership with Account Executive(s) in your designated territory.</li>
<li>Coach junior Solutions Architects and teams on use case prioritization and building technical champions.</li>
<li>Influence stakeholders at all levels through complex engagements with the wider cloud ecosystem and 3rd party applications, ensuring they are excited by the Databricks vision and solution strategy.</li>
<li>Be a &#39;champion’ for both customers and colleagues, operating as an expert solution architect and trusted advisor for significant data analytics architecture, design, and adoption of the Databricks Data Intelligence Platform.</li>
<li>Contribute to Databricks&#39; technical community engagement by developing customer-facing collateral and leading workshops, seminars, and meet-ups.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Know how to engage in complex customer interactions and the sales lifecycle in a technical pre-sales capacity.</li>
<li>Ability to influence decision-makers and C-level executives by developing relationships and orchestrating teams to achieve long-term customer success.</li>
<li>Prior experience with coding in a core programming language (e.g., Python, Java, Scala) and willingness to learn a base level of Spark.</li>
<li>Hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>
<li>Know how to provide technical solutions for specialized customer needs and navigate a competitive landscape.</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits and perks package</li>
<li>Opportunities for professional growth and development</li>
<li>Competitive salary</li>
<li>Flexible working hours</li>
<li>Collaborative and dynamic work environment</li>
</ul>
<p>We are an equal opportunities employer and welcome applications from all qualified candidates.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, Spark, Cloud computing, Data analytics, Software engineering, Machine learning, Data science, Cloud architecture, DevOps, Agile methodologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8194862002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Sydney, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>22bcbb50-ef4</externalid>
      <Title>Member of Technical Staff - Data Platform</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>The Data Platform team at xAI builds and operates the infrastructure responsible for all large-scale data transport and processing across the company.</p>
<p>As a software engineer on the Data Platform team, you will design, build, and operate the distributed systems powering X&#39;s data movement and compute.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and implement high-throughput, low-latency data ingestion and transport systems.</li>
<li>Scale and optimise multi-tenant Kafka infrastructure supporting real-time workloads.</li>
<li>Extend and tune Spark, Flink, and Trino for demanding production pipelines.</li>
<li>Build interfaces, APIs, and pipelines enabling teams to query, process, and move data at petabyte scale.</li>
<li>Debug and optimise distributed systems, with a focus on reliability and performance under load.</li>
<li>Collaborate with ML, product, and infrastructure teams to unblock critical data workflows.</li>
</ul>
<p><strong>Basic Qualifications</strong></p>
<ul>
<li>Proven expertise in distributed systems, stream processing, or large-scale data platforms.</li>
<li>Proficiency in Rust, Go, Scala or similar systems languages.</li>
<li>Hands-on experience with Kafka, Flink, Spark, Trino, or Hadoop in production.</li>
<li>Strong debugging, profiling, and performance optimisation skills.</li>
<li>Track record of shipping and maintaining critical infrastructure.</li>
<li>Comfortable working in fast-moving, high-stakes environments with minimal guardrails.</li>
</ul>
<p><strong>Compensation and Benefits</strong></p>
<p>$180,000 - $440,000 USD</p>
<p>Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short &amp; long-term disability insurance, life insurance, and various other discounts and perks.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$180,000 - $440,000 USD</Salaryrange>
      <Skills>Rust, Go, Scala, Kafka, Flink, Spark, Trino, Hadoop, distributed systems, stream processing, large-scale data platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>xAI</Employername>
      <Employerlogo>https://logos.yubhub.co/x.com.png</Employerlogo>
      <Employerdescription>xAI creates AI systems to understand the universe and aid humanity in its pursuit of knowledge.</Employerdescription>
      <Employerwebsite>https://www.x.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/xai/jobs/4803862007?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Palo Alto, CA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8871a994-591</externalid>
      <Title>Machine Learning Engineer, Core Engineering</Title>
      <Description><![CDATA[<p>We&#39;re seeking a talented Machine Learning Engineer to join our Core Engineering team. As a Machine Learning Engineer at Pinterest, you will build cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest. You will partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces, while gaining knowledge of how ML works in different areas.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Build cutting-edge technology using the latest advances in deep learning and machine learning to personalize Pinterest</li>
<li>Partner closely with teams across Pinterest to experiment and improve ML models for various product surfaces (Homefeed, Ads, Growth, Shopping, and Search), while gaining knowledge of how ML works in different areas</li>
<li>Use data-driven methods and leverage the unique properties of our data to improve candidate retrieval</li>
<li>Work in a high-impact environment with quick experimentation and product launches</li>
<li>Keep up with industry trends in recommendation systems</li>
</ul>
<p>Requirements:</p>
<ul>
<li>2+ years of industry experience applying machine learning methods (e.g., user modeling, personalization, recommender systems, search, ranking, natural language processing, reinforcement learning, and graph representation learning)</li>
<li>End-to-end hands-on experience with building data processing pipelines, large-scale machine learning systems, and big data technologies (e.g., Hadoop/Spark)</li>
<li>Degree in computer science, machine learning, statistics, or related field</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>M.S. or PhD in Machine Learning or related areas</li>
<li>Publications at top ML conferences</li>
<li>Experience using Cursor, Copilot, Codex, or similar AI coding assistants for development, debugging, testing, and refactoring</li>
<li>Familiarity with LLM-powered productivity tools for documentation search, experiment analysis, SQL/data exploration, and engineering workflow acceleration</li>
<li>Expertise in scalable real-time systems that process stream data</li>
<li>Passion for applied ML and the Pinterest product</li>
</ul>
<p>Relocation Statement:</p>
<p>This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$138,905-$285,982 USD</Salaryrange>
      <Skills>machine learning, deep learning, data processing pipelines, large-scale machine learning systems, big data technologies, Hadoop, Spark, natural language processing, reinforcement learning, graph representation learning, Cursor, Copilot, Codex, LLM-powered productivity tools, scalable real-time systems, stream data</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform with over 500 million users worldwide, offering a vast collection of ideas and inspiration for users to create a life they love.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/6121450?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA, US; Palo Alto, CA, US; Seattle, WA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>33f60e2b-f34</externalid>
      <Title>Sr. Solutions Architect - Greenfield (New Logo) France</Title>
      <Description><![CDATA[<p>Job Title: Sr. Solutions Architect - Greenfield (New Logo) France</p>
<p>We are seeking a Senior Solutions Architect to join our team in Paris. As a Senior Solutions Architect, you will be responsible for providing technical and business value to Databricks customers in collaboration with Account Executives.</p>
<p>The location for the role should be in the Paris region (i.e. within a commutable distance for a hybrid schedule).</p>
<p>At Databricks, our core values are at the heart of everything we do: a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>
<p>You will be an essential part of this mission, using your technical expertise to demonstrate how our Data &amp; Intelligence Platform can help customers solve their complex data challenges.</p>
<p>Responsibilities:</p>
<ul>
<li>Form successful relationships with strategic enterprise clients within the Greenfield territory,</li>
<li>Operate as an expert in big data analytics to excite customers about Databricks,</li>
<li>Develop into a ‘champion’ and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform,</li>
<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications,</li>
<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions,</li>
<li>Develop customer relationships and build internal partnerships with account executives and teams,</li>
<li>Experience with managing strategic enterprise accounts,</li>
<li>Prior experience with coding in a core programming language (i.e., Python, Java, Scala) and willingness to learn a base level of Spark,</li>
<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s),</li>
<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring the ability to context-switch across levels of technical depth.</li>
</ul>
<p>Mandatory requirements:</p>
<ul>
<li>The location for the role should be in the Paris region (i.e. within a commutable distance for a hybrid schedule),</li>
<li>Flexibility to travel (up to 30% as required for customer meetings, events and trainings),</li>
<li>Business proficiency in both French and English required.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>big data analytics, cloud platform, complex proofs-of-concept, core programming language, solution architecture, Spark, Python, Java, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a company that provides a data and AI platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8449356002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Paris, France</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>f1fd3aa0-de6</externalid>
      <Title>Hunter Account Executive - Philippines</Title>
      <Description><![CDATA[<p>We are looking for a dynamic Enterprise Account Executive to join our rapidly growing team in Singapore. As an Enterprise Account Executive at Databricks, you will be responsible for selling our enterprise cloud data platform powered by Apache Spark to customers in the Philippines. Your primary goal will be to close new accounts while maintaining existing accounts.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Present a territory plan within the first 90 days</li>
<li>Meet with CIOs, IT executives, LOB executives, Program Managers, and other important partners</li>
<li>Close both new accounts and existing accounts</li>
<li>Identify and close quick, small wins while managing longer, complex sales cycles</li>
<li>Exceed activity, pipeline, and revenue targets</li>
<li>Track all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>
<li>Use a solution-based approach to selling and creating value for customers</li>
<li>Promote Databricks&#39; enterprise cloud data platform powered by Apache Spark</li>
<li>Ensure 100% satisfaction among all customers</li>
<li>Prioritize opportunities and apply appropriate resources</li>
</ul>
<p>Requirements:</p>
<ul>
<li>7+ years of Enterprise Sales experience exceeding quotas, covering relevant accounts and industries</li>
<li>Success closing new accounts while working existing accounts</li>
<li>Understanding of Spark and big data preferable</li>
<li>Bachelor&#39;s Degree and 5+ years of experience selling SaaS solutions to Enterprise Customers in the Philippines</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Comprehensive benefits and perks that meet the needs of all of our employees</li>
</ul>
<p>Commitment to Diversity and Inclusion:</p>
<ul>
<li>Databricks is committed to fostering a diverse and inclusive culture where everyone can excel</li>
<li>We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Enterprise Sales, Cloud Sales, SaaS Sales, Spark, Big Data, Customer Vision Expansion, Solution-Based Selling, Salesforce</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a software company that provides a data and AI platform. It has over 10,000 customers worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7856268002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0a154c39-08a</externalid>
      <Title>Senior Machine Learning Platform Engineer (Platform)</Title>
      <Description><![CDATA[<p>Ready to be pushed beyond what you think you’re capable of?</p>
<p>At Coinbase, our mission is to increase economic freedom in the world.</p>
<p>We&#39;re seeking a Senior Machine Learning Platform Engineer to join our Machine Learning Platform team. The team builds the foundational components for feature engineering and training/serving ML models at Coinbase. Our platform is used to combat fraud, personalize user experiences, and to analyze blockchains.</p>
<p>As a Senior Machine Learning Platform Engineer, you will:</p>
<ul>
<li>Form a deep understanding of our Machine Learning Engineers’ needs and our current capabilities and gaps.</li>
<li>Mentor our talented junior engineers on how to build high quality software, and take their skills to the next level.</li>
<li>Continually raise our engineering standards to maintain high availability and low latency for our ML inference infrastructure that runs both predictive ML models and LLMs.</li>
<li>Optimize low-latency streaming pipelines to give our ML models the freshest and highest quality data.</li>
<li>Evangelize state-of-the-art practices on building high-performance distributed training jobs that process large volumes of data.</li>
<li>Build tooling to observe the quality of data going into our models and to detect degradations impacting model performance.</li>
</ul>
<p>What we look for in you:</p>
<ul>
<li>5+ years of industry experience as a Software Engineer.</li>
<li>Strong understanding of distributed systems.</li>
<li>Lead by example through high quality code and excellent communication skills.</li>
<li>Great sense of design, and the ability to bring clarity to complex technical requirements.</li>
<li>Treat other engineers as a customer, with an obsessive focus on delivering them a seamless experience.</li>
<li>Mastery of the fundamentals, such that you can quickly jump between many varied technologies and still operate at a high level.</li>
<li>Demonstrated ability to responsibly use generative AI tools and copilots (e.g., LibreChat, Gemini, Glean) in daily workflows, continuously learn as tools evolve, and apply human-in-the-loop practices to deliver business-ready outputs and drive measurable improvements in efficiency, cost, and quality.</li>
</ul>
<p>Nice to haves:</p>
<ul>
<li>Experience building ML models and working with ML systems.</li>
<li>Experience working on a platform team, and building developer tooling.</li>
<li>Experience with the technologies we use (Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, and DynamoDB).</li>
</ul>
<p>Job ID: P75535</p>
<p>Pay Transparency Notice: Depending on your work location, the target annual base salary for this position can range as detailed below. Total compensation may also include equity and bonus eligibility and benefits (including medical, dental, vision and 401(k)). Annual base salary range (excluding equity and bonus): $186,065-$225,000 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$186,065-$225,000 USD</Salaryrange>
      <Skills>distributed systems, high-quality code, excellent communication skills, design, fundamentals, generative AI tools, copilots, ML models, ML systems, platform team, developer tooling, Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, DynamoDB</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Coinbase</Employername>
      <Employerlogo>https://logos.yubhub.co/coinbase.com.png</Employerlogo>
      <Employerdescription>Coinbase is a cryptocurrency exchange and wallet service.</Employerdescription>
      <Employerwebsite>https://www.coinbase.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>186065</Compensationmin>
      <Compensationmax>225000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coinbase/jobs/7604203?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - USA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8f03ad2d-96f</externalid>
      <Title>Software Engineer, Research Data Platform</Title>
      <Description><![CDATA[<p>We&#39;re looking for engineers who love working directly with users and who excel at building data products. The Research Data Platform team builds the tools that Anthropic&#39;s researchers use every day to manage, query, and analyze the data that goes into training and evaluating frontier models.</p>
<p>As a Software Engineer on the Research Data Platform team, you will:</p>
<ul>
<li>Build and operate data pipelines that extract data from research training runs and land it in storage systems that are easy and fast to query</li>
<li>Work closely with researchers to design and build APIs, libraries, and web interfaces that support data management, exploration, and analysis</li>
<li>Develop dataset management, data cataloging, and provenance tooling that researchers use in their day-to-day work</li>
<li>Embed with research teams to understand their workflows, identify high-leverage tooling opportunities, and ship solutions quickly</li>
<li>Collaborate with adjacent teams to build on existing systems rather than reinventing them</li>
</ul>
<p>We do not require prior ML or AI training experience. If you enjoy working closely with technical users, learning new domains quickly, and building tools people actually want to use, you&#39;ll pick up the research context fast.</p>
<p>Strong candidates may also have experience with large-scale ETL, columnar storage formats, and query engines (e.g., Spark, BigQuery, DuckDB, Parquet); high-volume time series data (ingestion, storage, and efficient querying); data cataloging, lineage, or metadata management systems; or ML experiment tracking or metrics platforms.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$320,000-$405,000 USD</Salaryrange>
      <Skills>large-scale ETL, columnar storage formats, query engines, high-volume time series data, data cataloging, lineage, metadata management systems, ML experiment tracking, Spark, BigQuery, DuckDB, Parquet</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a public benefit corporation that creates reliable, interpretable, and steerable AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>320000</Compensationmin>
      <Compensationmax>405000</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5191226008?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1a3559e1-edb</externalid>
      <Title>Enterprise Account Executive - Financial Services</Title>
      <Description><![CDATA[<p>As an Enterprise Account Executive - Financial Services at Databricks, you will be a strategic sales professional experienced in selling into enterprise accounts. You will have experience working with Financial Services accounts and know how to sell innovation and change through customer vision expansion. You will guide deals forward to compress decision cycles and communicate the value of our products to customers and system integrators.</p>
<p>Key responsibilities include meeting with CIOs, IT executives, LOB executives, program managers, and other important partners. You will close both new accounts and existing accounts, identify and close quick, small wins while managing longer, complex sales cycles, exceed activity, pipeline, and revenue targets, and track all customer details in Salesforce.</p>
<p>You will use a solution-based approach to selling and creating value for customers, promote Databricks&#39; enterprise cloud data platform powered by Apache Spark, ensure 100% satisfaction among all customers, and build a plan for success internally at Databricks and externally with your accounts.</p>
<p>We look for individuals with field sales experience within big data, Cloud, or SaaS sales, prior customer relationships with CIOs, program managers, and essential decision makers, and the ability to simply articulate intricate cloud technologies. A minimum of 7+ years of experience exceeding sales quotas is required, as well as understanding of Spark and big data, and French speaking skills.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>field sales experience, big data, Cloud, SaaS sales, Spark, French speaking, prior customer relationships, CIOs, program managers, essential decision makers</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. Over 10,000 organisations worldwide rely on its platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8482144002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Montréal, Canada</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>973b554f-cde</externalid>
      <Title>Senior Software Engineer - Backend</Title>
      <Description><![CDATA[<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>
<p>As a senior software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>
<p>Our backend teams span many domains across our essential service platforms, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</p>
<p>You will deliver reliable and high-performance services and client libraries for storing and accessing massive amounts of data on cloud storage backends, such as AWS S3 and Azure Blob Store.</p>
<p>You will also build reliable, scalable services using Scala, Kubernetes, and data pipelines using Spark and Databricks to power the pricing infrastructure that serves millions of cluster-hours per day.</p>
<p>Additionally, you will develop product features that empower customers to easily view and control platform usage.</p>
<p>We look for candidates with a BS (or higher) in Computer Science or a related field, 3+ years of production-level experience in Java, Scala, C++, or a similar language, experience developing large-scale distributed systems, and good knowledge of SQL.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, SQL, Kubernetes, Spark, Databricks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8029671002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Amsterdam, Netherlands</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>829e8859-ae5</externalid>
      <Title>Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re looking for a Solutions Architect to join our Field Engineering team. As a Solutions Architect, you will be an essential part of our mission to inspire customers to make informed decisions that push their business forward. You will work with a collaborative, customer-focused team that values innovation and creativity, using your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Forming successful relationships with clients throughout your assigned territory, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>
<li>Operating as an expert in big data analytics to excite customers about Databricks. You will develop into a ‘champion’ and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>
<li>Scaling best practices in your field and supporting customers by authoring reference architectures, how-tos, and demo applications, and helping build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>
</ul>
<p>We&#39;re looking for someone with prior experience in technical sales, customer relationship development, and a strong understanding of big data analytics technologies. You should be proficient in coding in a core programming language (such as Python, Java, or Scala) and willing to learn a base level of Spark.</p>
<p>As a Solutions Architect at Databricks, you will have the opportunity to grow your knowledge and expertise to the level of a technical and/or industry specialist, and contribute to the success of our customers and the growth of our company.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>big data analytics, data intelligence platform, Spark, Python, Java, Scala, technical sales, customer relationship development, cloud computing, machine learning, data science</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8368209002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Remote - Denmark</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a1ccc8c6-f09</externalid>
      <Title>Geo Hunter Account Executive, Manufacturing &amp; High-Tech</Title>
      <Description><![CDATA[<p>As a Geo Hunter Enterprise Account Executive at Databricks, you will be responsible for selling into and activating Large Manufacturing accounts. You will be a strategic sales professional with experience in selling innovation and change through customer vision expansion. Your goal will be to guide deals forward to compress decision cycles and close exciting deals. We offer accelerators above 100% quota attainment.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Meeting with CIOs, IT executives, LOB executives, Program Managers, and other important partners</li>
<li>Closing both new accounts and existing accounts</li>
<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>
<li>Exceeding activity, pipeline, and revenue targets</li>
<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>
<li>Using a solution-based approach to selling and creating value for customers</li>
<li>Promoting Databricks&#39; enterprise cloud data platform powered by Apache Spark</li>
<li>Ensuring 100% satisfaction among all customers</li>
<li>Prioritizing opportunities and applying appropriate resources</li>
<li>Building a plan for success internally at Databricks and externally with your accounts</li>
</ul>
<p>We are looking for someone with:</p>
<ul>
<li>Previous experience in an early-stage company and knowledge of how to navigate and be successful</li>
<li>Field sales experience within big data, Cloud, or SaaS sales</li>
<li>Experience managing large, complex Manufacturing accounts is preferred</li>
<li>Prior customer relationships with CIOs, program managers, and essential decision makers</li>
<li>Ability to simply articulate intricate cloud technologies</li>
<li>5+ years experience exceeding sales quotas</li>
<li>Success closing new accounts while working existing accounts</li>
<li>Understanding of Spark and big data preferable</li>
<li>Passion for cloud technologies</li>
<li>Bachelor&#39;s Degree</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$167,100-$229,800 USD</Salaryrange>
      <Skills>big data, Cloud, SaaS sales, sales quotas, Spark, Apache Spark, Delta Lake, MLflow, cloud technologies, customer vision expansion, solution-based approach, customer satisfaction</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data intelligence platform to unify and democratize data, analytics, and AI. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>167100</Compensationmin>
      <Compensationmax>229800</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8193347002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Northeast - United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>01819c10-867</externalid>
      <Title>PhD Machine Learning Engineer, Intern</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>We&#39;re excited to offer PhD machine learning engineering internships for the summer of 2026. As an intern, you&#39;ll contribute to critical projects that directly enhance Stripe&#39;s suite of products, focusing on areas such as foundation models used for dozens of tasks, e.g., fraud detection, enhanced support, and predicting user behavior.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop and deploy large-scale machine learning systems that drive significant business value across various domains.</li>
<li>Engage in the end-to-end process of designing, training, improving, and launching machine learning models.</li>
<li>Write production-scale ML models that will be deployed to help Stripe enable economic infrastructure access for a diverse range of businesses globally.</li>
<li>Collaborate across teams to incorporate feedback and proactively seek solutions to challenges.</li>
<li>Rapidly learn new technologies and approaches, demonstrating a strong ability to ask insightful questions and communicate the status of your work effectively.</li>
</ul>
<p><strong>Who We&#39;re Looking For</strong></p>
<ul>
<li>A deep understanding of computer science, obtained through the pursuit of a PhD in Computer Science, Machine Learning, or a closely related field, with the expectation of graduating in winter 2026 or spring/summer 2027.</li>
<li>Practical experience with programming and machine learning, evidenced by projects, classwork, or research. Familiarity with languages such as Python and Scala, frameworks such as Spark, and libraries such as Pandas, NumPy, and Scikit-learn.</li>
<li>Expertise in areas of machine learning such as supervised and unsupervised learning techniques, ML operations, and possibly experience in Large Language Models or Reinforcement Learning.</li>
<li>Demonstrated ability to work on collaborative projects, with experience in receiving and applying feedback from various stakeholders.</li>
<li>A proactive approach to learning unfamiliar systems and a demonstrated ability to understand complex systems independently.</li>
</ul>
<p><strong>What We Offer</strong></p>
<ul>
<li>Join us for an unforgettable summer internship and help shape the future of global commerce.</li>
<li>At Stripe, you won&#39;t just be working on theoretical projects; you&#39;ll make a tangible impact on the world&#39;s economic infrastructure.</li>
</ul>
]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Scala, Spark, Pandas, NumPy, Scikit-learn, Supervised learning, Unsupervised learning, ML operations, Large Language Models, Reinforcement Learning</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7216664?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, New York City, Seattle</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a57339aa-939</externalid>
      <Title>Staff Data Engineer, tvScientific</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Staff Data Engineer to lead the design, implementation, and evolution of our identity services and data governance platform. This role is critical to ensuring trusted, privacy-safe, and well-governed data across the organization.</p>
<p>You will work at the intersection of data engineering, identity resolution, privacy, and platform reliability. This is an individual contributor role, where you will work to define and implement a strategic vision for data engineering within the organization.</p>
<p>Responsibilities:</p>
<ul>
<li>Design and maintain a scalable identity resolution platform</li>
<li>Build pipelines and services to ingest, normalize, link, and version identity data across multiple sources</li>
<li>Ensure deterministic and probabilistic matching logic that is transparent, auditable, and measurable</li>
<li>Partner with product and analytics teams to expose identity data through reliable, well-documented APIs and datasets</li>
<li>Build and operate batch and streaming pipelines using modern data stack tools</li>
<li>Create clear documentation, standards, and runbooks for identity and governance systems</li>
<li>Own data governance foundations including data lineage, quality checks, schema enforcement, and access controls</li>
<li>Implement privacy-by-design principles (PII handling, consent enforcement, retention policies)</li>
<li>Collaborate with legal, privacy, and security teams to operationalize regulatory requirements (e.g., GDPR, CCPA)</li>
<li>Establish monitoring and alerting for data quality, freshness, and integrity</li>
</ul>
<p>What we&#39;re looking for:</p>
<ul>
<li>Production data engineering experience</li>
<li>Bachelor’s degree in computer science, a related field, or equivalent experience</li>
<li>Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala</li>
<li>Experience in delivering significant technical initiatives and building reliable, large-scale services</li>
<li>Experience in delivering APIs backed by relationship-heavy datasets</li>
<li>Experience implementing data governance practices, including data quality, metadata management, and access controls</li>
<li>Strong understanding of privacy-by-design principles and handling of sensitive or regulated data</li>
<li>Familiarity with data lakes, cloud warehouses, and storage formats</li>
<li>Strong proficiency in AWS services</li>
<li>Excellent written and verbal communication skills</li>
<li>Successful design and implementation of scalable and efficient data infrastructure</li>
<li>High attention to detail in implementation of automated data quality checks</li>
<li>Effective collaboration with cross-functional teams</li>
<li>Demonstrated ability to use AI to improve speed and quality in your day-to-day workflow for relevant outputs</li>
<li>Strong track record of critical evaluation and verification of AI-assisted work (e.g., testing, source-checking, data validation, peer review)</li>
<li>High integrity and ownership: you protect sensitive data, avoid over-reliance on AI, and remain accountable for final decisions and deliverables</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$177,185-$364,795 USD</Salaryrange>
      <Skills>Spark, Scala, Data Engineering, Identity Resolution, Privacy, Platform Reliability, Data Governance, Data Lineage, Quality Checks, Schema Enforcement, Access Controls, AWS Services</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>tvScientific</Employername>
      <Employerlogo>https://logos.yubhub.co/tvscientific.com.png</Employerlogo>
      <Employerdescription>tvScientific is a technology company that provides a CTV advertising platform for performance marketers.</Employerdescription>
      <Employerwebsite>https://www.tvscientific.com/</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>177185</Compensationmin>
      <Compensationmax>364795</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7642253?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>San Francisco, CA, US; Remote, US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>bd5139e2-87e</externalid>
      <Title>Solutions Architect</Title>
      <Description><![CDATA[<p>At Databricks, we&#39;re seeking a Solutions Architect to join our Field Engineering team. As a key member of our team, you will be responsible for demonstrating the value of our Data Intelligence Platform to customers and helping them solve complex data challenges.</p>
<p>Your primary responsibilities will include:</p>
<ul>
<li>Building strong relationships with clients across your assigned territory, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>
<li>Operating as an expert in big data analytics to excite customers about Databricks, developing into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation.</li>
<li>Scaling best practices in your field and supporting customers by authoring reference architectures, how-tos, and demo applications, and helping build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>
<li>Growing your knowledge and expertise to the level of a technical and/or industry specialist.</li>
</ul>
<p>We&#39;re looking for someone with prior experience in technical sales, customer relationship development, and a strong understanding of big data analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platforms.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>big data analytics, technical sales, customer relationship development, cloud platforms, Spark, Python, Java, Scala</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organisations worldwide rely on its Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8437032002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Australian Capital Territory, Australia</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>1a13a584-68d</externalid>
      <Title>Solutions Architect (Indonesia)</Title>
      <Description><![CDATA[<p>We are seeking a Solutions Architect to join our team in Indonesia. As a Solutions Architect, you will be responsible for demonstrating how our Data Intelligence Platform can help customers solve their complex data challenges. You will work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>
<p>The impact you will have:</p>
<ul>
<li>Form successful relationships with clients throughout Indonesia, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>
<li>Operate as an expert in big data analytics to excite customers about Databricks. You will develop into a ‘champion’ and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>
<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>
<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>
</ul>
<p>What we look for:</p>
<ul>
<li>Engage customers in technical sales: challenge their questions, guide them toward clear outcomes, and communicate technical and value propositions.</li>
<li>Develop customer relationships and build internal partnerships with account executives and teams.</li>
<li>Prior experience with coding in a core programming language (i.e., Python, Java, Scala) and willingness to learn a base level of Spark.</li>
<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>
<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences, requiring the ability to context-switch across levels of technical depth.</li>
<li>Proficiency in Bahasa Indonesia is required, as this role serves clients based in Indonesia and involves direct customer communication in Bahasa Indonesia.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Java, Scala, Big Data Analytics, Spark, Bahasa Indonesia</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. It was founded by the original creators of the lakehouse architecture, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8438763002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Singapore</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>8aadecbf-9e0</externalid>
      <Title>Geo Hunter Account Executive, Manufacturing</Title>
      <Description><![CDATA[<p>As a Geo Hunter Account Executive at Databricks, you will be a strategic sales professional experienced in selling into and activating Large Manufacturing accounts. You will know how to sell innovation and change through customer vision expansion and guide deals forward to compress decision cycles. You will love understanding a product in depth and be passionate about communicating its value to Customers and System Integrators.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Meeting with CIOs, IT executives, LOB executives, Program Managers, and other important partners.</li>
<li>Closing both new accounts and existing accounts.</li>
<li>Identifying and closing quick, small wins while managing longer, complex sales cycles.</li>
<li>Exceeding activity, pipeline, and revenue targets.</li>
<li>Tracking all customer details, including use case, purchase time frames, next steps, and forecasting, in Salesforce.</li>
<li>Using a solution-based approach to selling and creating value for customers.</li>
<li>Promoting Databricks&#39; enterprise cloud data platform powered by Apache Spark.</li>
<li>Ensuring 100% satisfaction among all customers.</li>
<li>Prioritizing opportunities and applying appropriate resources.</li>
<li>Building a plan for success internally at Databricks and externally with your accounts.</li>
</ul>
<p>We look for individuals who:</p>
<ul>
<li>Have previously worked in an early-stage company and know how to navigate it successfully.</li>
<li>Have field sales experience within big data, Cloud, or SaaS sales.</li>
<li>Have experience managing large, complex Manufacturing accounts.</li>
<li>Have prior customer relationships with CIOs, program managers, and essential decision makers.</li>
<li>Can simply articulate intricate cloud technologies.</li>
<li>Have 5+ years of experience exceeding sales quotas.</li>
<li>Have success closing new accounts while working existing accounts.</li>
<li>Have an understanding of Spark and big data.</li>
</ul>
<p>The pay range for this role is $167,100-$229,800 USD, and the total compensation package may also include eligibility for annual performance bonus, equity, and benefits.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$167,100-$229,800 USD</Salaryrange>
      <Skills>big data, Cloud, SaaS sales, Salesforce, Apache Spark, customer relationship management, solution-based selling, Spark, cloud technologies</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified and democratized data, analytics, and AI platform to over 10,000 organizations worldwide.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency>USD</Compensationcurrency>
      <Compensationmin>167100</Compensationmin>
      <Compensationmax>229800</Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8438296002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>California; Remote - Colorado; Remote - Oregon; Remote - Washington</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>34a04ec5-ae9</externalid>
      <Title>Machine Learning Engineer II</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Machine Learning Engineer II to join our Growth Platform engineering group. In this role, you will:</p>
<ul>
<li>Develop and implement ML models to improve user targeting and personalization for growth initiatives.</li>
<li>Design and build scalable ML pipelines for data processing, model training, and deployment.</li>
<li>Collaborate with cross-functional teams to identify potential ML solutions for growth opportunities.</li>
<li>Conduct A/B tests to evaluate the performance of ML models and optimize their impact on key growth metrics.</li>
<li>Analyze large datasets to extract insights and inform decision-making for user acquisition and retention strategies.</li>
<li>Contribute to the development of our ML infrastructure, ensuring it can support rapid experimentation and deployment.</li>
<li>Stay up to date with the latest advancements in ML and recommend new techniques to enhance our growth efforts.</li>
<li>Participate in code reviews and collaborate with team members as needed.</li>
<li>Thoughtfully leverage AI tools to speed up design, coding, debugging, and documentation, while applying your own critical thinking to validate outputs and explain how you used AI in your workflow.</li>
<li>Shape our AI-assisted engineering practices by sharing patterns, guardrails, and learnings with the team so we can safely increase our impact without compromising code quality, reliability, or candidate expectations.</li>
</ul>
<p>To be successful in this role, you will need:</p>
<ul>
<li>3+ years of experience applying ML to real-world problems, preferably in a growth or user acquisition context.</li>
<li>Excellent communication skills and the ability to work effectively in cross-functional teams.</li>
<li>Strong problem-solving skills and the ability to translate business requirements into technical solutions.</li>
<li>Strong programming skills in Python and experience with PyTorch.</li>
<li>Proficiency in data processing and analysis using tools like SQL, Spark, or Hadoop.</li>
<li>Experience with recommendation systems, user modeling, or personalization algorithms.</li>
<li>Familiarity with statistical analysis.</li>
<li>Experience using AI coding assistants and agentic tools as a force multiplier, and equal comfort solving problems from first principles when those tools aren’t available.</li>
<li>A Bachelor’s/Master’s degree in a relevant field or equivalent experience.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, PyTorch, SQL, Spark, Hadoop, Recommendation systems, User modeling, Personalization algorithms, Statistical analysis, AI coding assistants, Natural Language Processing, Data visualization, Cloud platforms, Containerization technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Pinterest</Employername>
      <Employerlogo>https://logos.yubhub.co/pinterest.com.png</Employerlogo>
      <Employerdescription>Pinterest is a social media platform that allows users to discover and save ideas for future reference.</Employerdescription>
      <Employerwebsite>https://www.pinterest.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/pinterest/jobs/7681666?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Dublin, IE</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d0ee3e8e-4f6</externalid>
      <Title>Staff Software Engineer</Title>
      <Description><![CDATA[<p><strong>About Us</strong></p>
<p>dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights.</p>
<p>As of February 2025, we&#39;ve surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Platform customers, including AstraZeneca, Sky, Nasdaq, Volvo, JetBlue, and SafetyCulture.</p>
<p>We&#39;re backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter.</p>
<p><strong>About The Team</strong></p>
<p>dbt Fusion is building the next generation of data execution and connectivity infrastructure, enabling dbt workloads to run efficiently across diverse compute engines and data platforms.</p>
<p>As a Staff Engineer on the Fusion Adapters and Connectivity team, you&#39;ll design and ship the core abstractions powering how dbt communicates with execution systems, leveraging Rust, Go, Arrow, and emerging open standards.</p>
<p>This is a rare opportunity to work at the intersection of systems programming, database internals, and high-visibility open-source development.</p>
<p>Your work will shape a foundational platform leveraged across the dbt ecosystem and the broader data community.</p>
<p><strong>You are a good fit if you have:</strong></p>
<ul>
<li>Strong programming background in Rust, Go, C++ or similar performance-oriented languages.</li>
<li>Experience designing or maintaining SDKs, libraries, connectors, or compute/data integration codebases.</li>
<li>Exposure to data warehouses, query engines, Arrow/columnar ecosystems, or execution runtimes.</li>
<li>A desire to build foundational platform components that other teams and community members rely on.</li>
<li>Comfort working in public code review loops, async-first communication, and collaborative RFC processes.</li>
<li>A mindset grounded in debuggability, reliability, and ownership in ambiguous problem spaces.</li>
</ul>
<p><strong>In this role, you can expect to:</strong></p>
<ul>
<li>Design, build, and maintain Rust-first connectivity layers, execution APIs, and adapter scaffolding.</li>
<li>Partner with teams building the dbt compiler, semantic layer, and runtime to evolve adapter interfaces and system boundaries.</li>
<li>Contribute to Arrow/ADBC and other open-source specifications or implementations, strengthening the data ecosystem.</li>
<li>Own CI, testing frameworks, profiling, error reporting surfaces, and release readiness for Fusion adapters.</li>
<li>Debug complex interoperability and performance issues across drivers, engines, and compute domains.</li>
<li>Collaborate with internal and community maintainers to review PRs, write RFCs, and evolve public code architectures.</li>
<li>Mentor engineers on systems best practices and contribute to shared patterns around resilience, debuggability, and API clarity.</li>
</ul>
<p><strong>You&#39;ll have an edge if you have:</strong></p>
<ul>
<li>Contributed to or interacted with Arrow, ADBC, DuckDB, Presto, DataFusion, Spark, ClickHouse, or similar engines.</li>
<li>Experience shaping adapter/plugin standards, driver contracts, or architectural interfaces used by others.</li>
<li>Familiarity with Rust async ecosystems (tokio, tower, tracing) or Go concurrency practices.</li>
<li>Prior OSS governance experience: triaging issues, reviewing PRs, or working with community maintainers.</li>
<li>An interest in building developer-experience layers or scaffolding frameworks for adapter authors.</li>
</ul>
<p><strong>Qualifications:</strong></p>
<ul>
<li>6+ years experience in software engineering, with strong systems-level skills.</li>
<li>2+ years working in open-source, SDK, runtime, or low-level integration environments.</li>
<li>Bachelor&#39;s degree in Computer Science / related field or equivalent experience through industry OSS contributions.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Rust, Go, C++, Arrow, ADBC, DuckDB, Presto, DataFusion, Spark, ClickHouse</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>dbt Labs</Employername>
      <Employerlogo>https://logos.yubhub.co/getdbt.com.png</Employerlogo>
      <Employerdescription>dbt Labs is a leading analytics engineering platform, serving over 5,400 customers and generating $100 million in annual recurring revenue.</Employerdescription>
      <Employerwebsite>https://www.getdbt.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/dbtlabsinc/jobs/4641221005?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>India - Remote</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>fa9a54d7-549</externalid>
      <Title>Senior Site Reliability Engineer, Data Infrastructure</Title>
      <Description><![CDATA[<p>As a Senior Site Reliability Engineer, you will own the reliability and performance of our Kubernetes-based data platform. You will design and operate highly available, multi-region systems, ensuring our services meet strict uptime and latency targets.</p>
<p>Day-to-day, you’ll work on scaling infrastructure, improving deployment pipelines, and hardening our security posture. You’ll play a key role in evolving our DevSecOps practices while partnering closely with engineering teams to ensure services are built for reliability from day one.</p>
<p>We operate with production-grade discipline, supporting mission-critical services with stringent uptime requirements and a focus on automation, observability, and resilience.</p>
<p>The Platform &amp; Infrastructure Engineering team in the Data Infrastructure organization is responsible for the reliability, scalability, and security of the company’s data platform. The team builds and operates the foundational systems that power data ingestion, transformation, analytics, and internal AI workloads at scale.</p>
<p>About the role:</p>
<ul>
<li>5+ years of experience in Site Reliability Engineering, Platform Engineering, or Infrastructure Engineering roles</li>
<li>Deep expertise in Kubernetes and containerized software services, including cluster design, operations, and troubleshooting in production environments</li>
<li>Strong experience building and operating CI/CD systems, including tools such as Argo CD and GitHub Actions</li>
<li>Proven experience owning production systems with high availability requirements (≥99.99% uptime), including incident response, SLI/SLO/SLA definition, error budgets, and postmortems</li>
<li>Hands-on experience designing and operating geo-replicated, multi-region, active-active systems, including traffic routing, failover strategies, and data consistency tradeoffs</li>
<li>Strong experience building and owning observability components, including metrics, logging, and tracing (e.g., Prometheus, Grafana, OpenTelemetry)</li>
<li>Experience with infrastructure as code (e.g., Helm, Terraform, Pulumi) and automated environment provisioning</li>
<li>Strong understanding of system performance tuning, capacity planning, and resource optimization in distributed systems</li>
<li>Experience implementing and operating security best practices in cloud-native environments (e.g., secrets management, network policies, vulnerability scanning)</li>
</ul>
<p>Preferred:</p>
<ul>
<li>Experience operating data platforms or data-intensive workloads (e.g., Spark, Airflow, Kafka, Flink)</li>
<li>Familiarity with service mesh technologies (e.g., Istio, Linkerd)</li>
<li>Experience working in regulated environments with compliance frameworks such as GDPR, SOC 2, HIPAA, or SOX</li>
<li>Background in building internal developer platforms or self-service infrastructure</li>
</ul>
<p>Wondering if you’re a good fit?</p>
<p>We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren’t a 100% skill or experience match.</p>
<p>Here are a few qualities we’ve found compatible with our team. If some of this describes you, we’d love to talk.</p>
<ul>
<li>You love building highly reliable systems that operate at scale</li>
<li>You’re curious about how to continuously improve system resilience, security, and operations</li>
<li>You’re an expert in diagnosing and solving complex distributed systems problems</li>
</ul>
<p>Why CoreWeave?</p>
<p>At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning.</p>
<p>Our team cares deeply about how we build our product and how we work together, which is represented through our core values:</p>
<ul>
<li>Be Curious at Your Core</li>
<li>Act Like an Owner</li>
<li>Empower Employees</li>
<li>Deliver Best-in-Class Client Experiences</li>
<li>Achieve More Together</li>
</ul>
<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems.</p>
<p>As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too.</p>
<p>Come join us!</p>
<p>The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation.</p>
<p>In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>
<p>What We Offer</p>
<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.</p>
<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>
<ul>
<li>Medical, dental, and vision insurance, 100% paid for by CoreWeave</li>
<li>Company-paid Life Insurance</li>
<li>Voluntary supplemental life insurance</li>
<li>Short and long-term disability insurance</li>
<li>Flexible Spending Account</li>
<li>Health Savings Account</li>
<li>Tuition Reimbursement</li>
<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>
<li>Mental Wellness Benefits through Spring Health</li>
<li>Family-Forming support provided by Carrot</li>
<li>Paid Parental Leave</li>
<li>Flexible, full-service childcare support with Kinside</li>
<li>401(k) with a generous employer match</li>
<li>Flexible PTO</li>
<li>Catered lunch each day in our office and data center locations</li>
<li>A casual work environment</li>
<li>A work culture focused on innovative disruption</li>
</ul>
<p>Our Workplace</p>
<p>While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets.</p>
<p>New hires will be invited to attend onboarding at one of our hubs within their first month.</p>
<p>Teams also gather quarterly to support collaboration.</p>
<p>California Consumer Privacy Act - California applicants only</p>
<p>CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace.</p>
<p>All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.</p>
<p>As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship.</p>
<p>If reasonable accommodation is needed, please contact: careers@coreweave.com.</p>
<p>Export Control Compliance</p>
<p>This position requires access to export controlled information.</p>
<p>To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without restrictions, or (C) otherwise exempt from the export regulations.</p>
<p>If you are not a U.S. person, you will be required to provide documentation of your eligibility to access the export controlled information before being considered for this position.</p>
<p>Please note that CoreWeave is subject to the requirements of the U.S. Department of Commerce&#39;s Export Administration Regulations (EAR) and the U.S. Department of State&#39;s International Traffic in Arms Regulations (ITAR).</p>
<p>By applying for this position, you acknowledge that you have read and understood the export control requirements and that you will comply with them.</p>
<p>If you have any questions or concerns regarding the export control requirements, please contact: careers@coreweave.com.</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$165,000 to $242,000</Salaryrange>
      <Skills>Kubernetes, containerized software services, cluster design, operations, troubleshooting, CI/CD systems, Argo CD, GitHub Actions, production systems, high availability, incident response, SLI/SLO/SLA definition, error budgets, postmortems, geo-replicated, multi-region, active-active systems, traffic routing, failover strategies, data consistency tradeoffs, observability components, metrics, logging, tracing, Prometheus, Grafana, OpenTelemetry, infrastructure as code, Helm, Terraform, Pulumi, automated environment provisioning, system performance tuning, capacity planning, resource optimization, distributed systems, security best practices, cloud-native environments, secrets management, network policies, vulnerability scanning, Spark, Airflow, Kafka, Flink, service mesh technologies, Istio, Linkerd, regulated environments, compliance frameworks, GDPR, SOC 2, HIPAA, SOX, internal developer platforms, self-service infrastructure</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>CoreWeave</Employername>
      <Employerlogo>https://logos.yubhub.co/coreweave.com.png</Employerlogo>
      <Employerdescription>CoreWeave is a cloud computing company that provides a platform for building and scaling artificial intelligence.</Employerdescription>
      <Employerwebsite>https://www.coreweave.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/coreweave/jobs/4671535006?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>New York, NY / Bellevue, WA</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>d3c0ed5e-154</externalid>
      <Title>Machine Learning Engineer, Payments ML Accelerator</Title>
      <Description><![CDATA[<p><strong>About the role</strong></p>
<p>As a machine learning engineer on our team, you&#39;ll develop advanced ML solutions that directly impact Stripe&#39;s payment products and core business metrics.</p>
<p><strong>About the team</strong></p>
<p>The Payments ML Accelerator team is developing foundational ML capabilities that drive innovation across Stripe&#39;s payment products. We build deep learning models that tackle Stripe&#39;s most complex payment challenges - from fraud detection to authorization optimization - and deliver measurable business impact.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Design and deploy deep learning architectures and foundation models to address problems across key payment entities such as merchants, issuers, or customers</li>
<li>Identify high-impact opportunities, and drive the long-term ML roadmap through well-scoped high-leverage initiatives</li>
<li>Architect generalizable ML workflows to enable rapid scaling and optimized online performance</li>
<li>Deploy ML models online and ensure operational stability</li>
<li>Experiment with advanced ML solutions in the industry and ideate on product applications</li>
<li>Explore cutting-edge ML techniques and evaluate their potential to solve business problems</li>
<li>Work closely with ML infrastructure teams to shape new platform capabilities</li>
</ul>
<p><strong>Who you are</strong></p>
<p>We are looking for ML Engineers who are passionate about using ML to improve products and delight customers. You have experience developing streaming feature pipelines, building ML models, and deploying them to production, even if it involves making substantial changes to backend code. You are comfortable with ambiguity, love to take initiative, and have a bias towards action.</p>
<p><strong>Minimum requirements</strong></p>
<ul>
<li>7+ years of industry experience in end-to-end ML development on a machine learning team, including bringing ML models to production</li>
<li>Proficient in Python, Scala, and Spark</li>
<li>Proficient in deep learning and LLM/foundation models</li>
</ul>
<p><strong>Preferred qualifications</strong></p>
<ul>
<li>MS/PhD degree in a quantitative field or ML/AI (e.g., computer science, math, physics, statistics)</li>
<li>Knowledge of how to manipulate data to perform analysis, including querying data, defining metrics, and slicing and dicing data to evaluate a hypothesis</li>
<li>Experience evaluating niche and emerging ML solutions</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Scala, Spark, Deep learning, LLM/foundation models, MS/PhD degree in quantitative field or ML/AI, Knowledge about how to manipulate data to perform analysis, Experience evaluating niche and upcoming ML solutions</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Stripe</Employername>
      <Employerlogo>https://logos.yubhub.co/stripe.com.png</Employerlogo>
      <Employerdescription>Stripe is a financial infrastructure platform for businesses, used by millions of companies worldwide.</Employerdescription>
      <Employerwebsite>https://stripe.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/stripe/jobs/7079044?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Seattle; San Francisco; New York City</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>a02999d2-33b</externalid>
      <Title>Staff Software Engineer - Backend</Title>
      <Description><![CDATA[<p>At Databricks, we are enabling data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform. As a software engineer with a backend focus, you will work with your team to build infrastructure and products for the Databricks platform at scale.</p>
<p>The impact you&#39;ll have is significant, spanning many domains across our essential service platforms. You might work on challenges such as:</p>
<ul>
<li>Distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.</li>
<li>Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.</li>
<li>Building reliable, scalable services (e.g., Scala, Kubernetes) and data pipelines (e.g., Spark, Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, and developing product features that empower customers to easily view and control platform usage.</li>
</ul>
<p>What we look for in a candidate includes:</p>
<ul>
<li>A Bachelor&#39;s degree (or higher) in Computer Science or a related field.</li>
<li>7+ years of production-level experience in one of: Java, Scala, C++, or similar languages.</li>
<li>Experience developing large-scale distributed systems.</li>
<li>Experience working on a SaaS platform or with Service-Oriented Architectures.</li>
<li>Good knowledge of SQL.</li>
</ul>
<p>Databricks offers comprehensive benefits and perks that meet the needs of all employees. Specific details on the benefits offered in your region are available on the Databricks careers site.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Java, Scala, C++, SQL, distributed systems, at-scale service architecture and monitoring, workflow orchestration, developer experience, cloud storage backends, AWS S3, Azure Blob Store, Kubernetes, Spark, Databricks</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a data and AI infrastructure platform for customers to use deep data insights to improve their business. It was founded by the original creators of Lakehouse, Apache Spark, Delta Lake, and MLflow.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7984907002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Berlin, Germany</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>100be909-8a8</externalid>
      <Title>Senior Solutions Engineer</Title>
      <Description><![CDATA[<p>You will be an essential part of our mission to create a unified platform that makes data science and analytics accessible to everyone. As a Senior Solutions Engineer, you will use your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>
<p>You&#39;ll work with a collaborative, customer-focused team who values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>
<p>Your responsibilities will include:</p>
<ul>
<li>Forming successful relationships with clients throughout your assigned territory to provide technical and business value in collaboration with an Account Executive and a Senior Solutions Architect.</li>
<li>Building client excitement about Databricks through hands-on evaluation and Spark programming, integrating with the wider cloud ecosystem and third-party applications.</li>
<li>Contributing to building the Databricks technical community through engagement at workshops, seminars, and meet-ups.</li>
<li>Becoming a Big Data Analytics advisor on aspects of architecture and design.</li>
<li>Supporting your customers by authoring reference architectures, how-tos, and demo applications.</li>
<li>Developing both your technical and pre-sales skills, with the goal of becoming an independently operating Solutions Architect.</li>
</ul>
<p>We look for individuals experienced in working with clients: crafting a narrative, answering customer questions, aligning the agenda with stakeholder interests, and achieving tangible outcomes. You should be able to independently deliver a technical proposition, identify customers&#39; pain points, and articulate the areas of business value in order to develop a trusted-advisor skillset.</p>
<p>The ideal candidate will be proficient in a core programming language such as Python, knowledgeable in a core Big Data Analytics domain with some exposure to advanced proofs-of-concept, and familiar with a major public cloud platform. Experience diving deeper into solution architecture and data engineering is also desirable.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Python, Big Data Analytics, Spark, Cloud Ecosystem, Solution Architecture, Data Engineering</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a data and AI company that provides a unified platform for data science and analytics. Over 10,000 organisations worldwide use the Databricks Data Intelligence Platform.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/8025494002?utm_source=yubhub.co&amp;utm_medium=jobs_feed&amp;utm_campaign=apply</Applyto>
      <Location>Aarhus, Denmark; Remote - Denmark</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
  </jobs>
</source>