<?xml version="1.0" encoding="UTF-8"?>
<source>
  <jobs>
    <job>
      <externalid>4cd24ec0-18d</externalid>
      <Title>Internship in Controlling – Actively Shaping Digitalisation &amp; AI</Title>
      <Description><![CDATA[<p>As a member of our Data &amp; AI Team, you will contribute to the development and implementation of modern reporting and AI solutions that support data-driven decision-making processes in Controlling. Your tasks will include:</p>
<ul>
<li>Developing and implementing innovative planning tools</li>
<li>Business Intelligence with SAP Analytics Cloud: creating and enhancing dashboards and reports for management</li>
<li>Data modelling and integration with SAP Datasphere to create powerful data structures for analysis and forecasting</li>
<li>Implementing and testing small machine learning solutions to automate and improve Controlling processes</li>
<li>Supporting the Data &amp; AI Team in developing the strategy, methods, and tools for digital transformation in Controlling</li>
<li>Collaborating in interdisciplinary projects focused on digitalisation and data-based process optimisation</li>
</ul>
<p>You will work closely with our Data &amp; AI Team to design and implement cutting-edge solutions that drive business success. Your contributions will help shape the future of Controlling at Porsche.</p>
<p>In this role, you will have the opportunity to work on various projects, including:</p>
<ul>
<li>Developing and implementing new reporting and analytics solutions</li>
<li>Enhancing existing dashboards and reports</li>
<li>Creating data models and integrating them with SAP Datasphere</li>
<li>Implementing machine learning algorithms to automate and improve Controlling processes</li>
<li>Collaborating with cross-functional teams to drive digital transformation</li>
</ul>
<p>As a member of our team, you will be part of a dynamic and innovative environment where you can grow professionally and personally. You will have access to state-of-the-art technology and tools, as well as opportunities for professional development and networking.</p>
<p>If you are passionate about data-driven decision-making, innovation, and collaboration, we encourage you to apply for this exciting opportunity.</p>
<p>Key responsibilities:</p>
<ul>
<li>Develop and implement modern reporting and AI solutions</li>
<li>Create and enhance dashboards and reports for management</li>
<li>Design and implement data models and integrate them with SAP Datasphere</li>
<li>Implement and test small machine learning solutions</li>
<li>Support the Data &amp; AI Team in developing the strategy, methods, and tools for digital transformation</li>
<li>Collaborate in interdisciplinary projects focused on digitalisation and data-based process optimisation</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Bachelor&#39;s degree in Economics, Computer Science, Mathematics, Data Science, Business Administration, or a related field</li>
<li>Initial experience in Business Intelligence, preferably with SAP Analytics Cloud and SAP Datasphere</li>
<li>Basic knowledge of data modelling, data analysis, and machine learning</li>
<li>Proficiency in MS Office and interest in modern database and cloud technologies</li>
<li>Self-motivated, structured, and analytical working style</li>
<li>Enjoy working in a team and contributing to innovative digital solutions</li>
<li>Flexibility and high motivation to learn and apply new technologies in Controlling</li>
<li>Excellent German language skills and good English language skills</li>
</ul>
<p>Benefits:</p>
<ul>
<li>Personal mentors and an open feedback culture</li>
<li>Digital learning and working tools</li>
<li>Mobile working after coordination with the team</li>
<li>Own project work</li>
<li>Team events on a voluntary basis</li>
<li>Active internship community</li>
</ul>
<p>Duration: 5-6 months</p>
<p>Start date: September</p>
<p>Location: Zuffenhausen</p>
<p>We look forward to receiving your application!</p>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>internship</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SAP Analytics Cloud, SAP Datasphere, Machine Learning, Data Modelling, Business Intelligence, Python, R, SQL</Skills>
      <Category>Finance</Category>
      <Industry>Automotive</Industry>
      <Employername>Dr. Ing. h.c. F. Porsche AG</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.porsche.com.png</Employerlogo>
      <Employerdescription>Porsche is a renowned German luxury sports car manufacturer with a global presence.</Employerdescription>
      <Employerwebsite>https://jobs.porsche.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.porsche.com/index.php?ac=jobad&amp;id=19216</Applyto>
      <Location>Zuffenhausen</Location>
      <Country></Country>
      <Postedate>2026-04-22</Postedate>
    </job>
    <job>
      <externalid>0b431957-4b8</externalid>
      <Title>Planning Analyst</Title>
      <Description><![CDATA[<p>The Planning organisation plays a critical role in driving operational outcomes by translating demand insights into precise production and procurement signals. Within this group, the Systems &amp; Process team partners closely with Anduril&#39;s Analytics team to build scalable workflows, ontologies, and data models that power planning excellence.</p>
<p>We&#39;re looking for a Planning Analyst who thrives at the intersection of the digital and physical hardware worlds. You&#39;ll design and implement data models, workflows, and analytical tools that directly improve operational performance, enabling Anduril to plan more effectively at scale.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Partner with Planning leadership and Anduril&#39;s Analytics team to build secure, timely, and extensible ontologies for domains such as Supply Chain, Manufacturing, and Finance.</li>
<li>Develop reporting and data applications that provide near real-time visibility into planning KPIs, including forecast accuracy, fill rates, and inventory turns.</li>
<li>Conduct ad-hoc analyses on demand signals, inventory strategies, and supply risk, delivering insights that balance service levels with profitability.</li>
<li>Run scenario planning and sensitivity analyses to evaluate the impact of market or contract volatility on operational outcomes.</li>
<li>Collaborate with Analytics engineers to generalise planning workflows and data products across Anduril&#39;s production and sustainment organisations.</li>
<li>Act as a subject matter expert for planning tools and systems, supporting upgrades, enhancements, and best practice adoption.</li>
<li>Become a trusted resource for leadership by helping them run their organisations more effectively through data-driven insights.</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3+ years of experience in an analytics-focused role (Analytics Engineer, Data Engineer, or Analyst/Consultant with strong supply chain or operations background).</li>
<li>Strong operational instincts and a track record of owning complex, ambiguous planning problems from start to finish.</li>
<li>Ability to think strategically and execute tactically.</li>
<li>Data-driven mindset and ability to turn analytics into action.</li>
<li>Systems fluency, with experience working across ERP platforms (e.g., NetSuite, SAP, Oracle).</li>
<li>Cross-functional empathy and ability to understand the needs of engineering, product, deployment, and finance.</li>
<li>Comfortable with ambiguity and ability to build structure in environments that are scaling fast.</li>
<li>High ownership and low ego, valuing results over credit.</li>
<li>Ability to anticipate friction before it happens and proactively work to prevent issues rather than react to them.</li>
<li>Demonstrated experience building and owning processes and ontologies in a fast-paced environment.</li>
<li>Expert-level SQL skills and proficiency in Python (or other programming languages).</li>
<li>Experience with BI and analytics tools (Looker, Tableau, Power BI, Palantir Foundry, dbt, Redshift, etc.).</li>
<li>Strong ability to translate technical models into actionable insights for non-technical stakeholders.</li>
<li>Self-starter mindset and ability to prioritise velocity and impact.</li>
<li>Eligible to obtain and maintain an active U.S. Secret security clearance.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$98,000-$130,000 USD</Salaryrange>
      <Skills>SQL, Python, ERP platforms (e.g., NetSuite, SAP, Oracle), BI and analytics tools (Looker, Tableau, Power BI, Palantir Foundry, dbt, Redshift, etc.), Data modelling, Workflow design, Analytical tools, Supply chain management, Manufacturing, Finance</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anduril</Employername>
      <Employerlogo>https://logos.yubhub.co/anduril.com.png</Employerlogo>
      <Employerdescription>Anduril is a technology company that develops and manufactures advanced sensors and artificial intelligence systems for defence and security applications.</Employerdescription>
      <Employerwebsite>https://www.anduril.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/andurilindustries/jobs/4657698007</Applyto>
      <Location>Costa Mesa, California, United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>14d46b7f-188</externalid>
      <Title>Senior Analyst, Field Analytics</Title>
      <Description><![CDATA[<p>We are looking for a Senior Analyst, Field Analytics to join our Go-To-Market Strategy &amp; Operations group. As part of this team, you will drive insight and scale within the global field organisation by building high-impact technical assets, ranging from executive Tableau dashboards to standardised Snowflake datasets.</p>
<p>Your responsibilities will include designing, building, and maintaining high-visibility Tableau dashboards and reporting assets that provide actionable insights to business partners across the global organisation. You will also build and optimise production-grade data sets in Snowflake, ensuring that all field data (Pipeline, Bookings, Productivity) is clean, structured, and easily accessible for self-service analysis.</p>
<p>In addition, you will take ownership of the technical documentation for all GTM reporting assets, ensuring data lineage, metric definitions, and logic are clearly defined and accessible. You will also champion the use of Generative AI tools to accelerate the analytics lifecycle, including automating SQL query generation, streamlining data preparation, and enhancing report documentation.</p>
<p>To succeed in this role, you will need 5+ years of professional experience in Data Analytics or Business Intelligence, ideally within a global delivery model, along with technical expertise in SQL (Snowflake), a BI visualisation tool (Tableau), and CRM data (Salesforce).</p>
<p>Preferred qualifications include proficiency with scripting (Python) and data modelling (dbt), as well as a deep understanding of enterprise software sales processes, field operations, and cross-functional GTM mechanics.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL (Snowflake), BI visualisation tool (Tableau), CRM data (Salesforce), Generative AI tools, Data Analytics, Business Intelligence, Scripting (python), Data modelling (dbt), Enterprise software sales processes, Field operations, Cross-functional GTM mechanics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Okta</Employername>
      <Employerlogo>https://logos.yubhub.co/okta.com.png</Employerlogo>
      <Employerdescription>Okta builds the trusted, neutral infrastructure that enables organisations to safely embrace the new era of AI.</Employerdescription>
      <Employerwebsite>https://www.okta.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/okta/jobs/7728562</Applyto>
      <Location>Bengaluru, India</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>3ff860ce-94c</externalid>
      <Title>Staff, Advanced Analytics, CS Safety</Title>
      <Description><![CDATA[<p>We are looking for a Staff Advanced Analyst to help Airbnb enable travel for our millions of guests and hosts on our platform. This role will sit under the Advanced Analytics family and support Product and Business leaders within our CS Safety organisation.</p>
<p>As a Staff Advanced Analyst, you will be a data thought partner to product and business leaders across teams through providing insights, recommendations, and enabling data-informed decisions. You will drive day-to-day analytics and create scalable data tools, identify pain points in travelling and hosting, and work with product leadership to improve experiences for our guest, host, and agent community.</p>
<p>In addition, you will leverage Airbnb&#39;s rich and unique data, state-of-the-art machine learning infrastructure, and other central data science tools to build and grow the measurement capacity within the organisation. You will also be deeply involved in the technical details of the various systems we build, and will have the opportunity to collaborate with a strong team of engineers, product managers, designers, and operations agents to achieve shared, cross-functional goals to help keep Airbnb&#39;s community safe and trusted.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Leading and driving data-driven roadmaps for the CS Safety working groups</li>
<li>Recommending actionable solutions backed by data and metrics to product and operational problems</li>
<li>Building and owning an insights and reporting platform that measures and improves the effectiveness of behaviours, product interfaces, and processes across the CS Safety platform and contact centre network</li>
<li>Performing data modelling of the various entities using tools and frameworks for optimising community and agent experiences</li>
<li>Defining and evaluating key metrics in an unstructured problem space, including measurement of the ML models that drive product development</li>
<li>Anticipating emerging safety risks through early-warning indicators, trend analysis, predictive modelling, and scenario planning to assess operational risk</li>
<li>Influencing data-driven decisions across business verticals day to day via business reviews, scorecards, self-serve portals, OKRs, and planning, among others</li>
<li>Influencing experimentation and measurement strategies; conducting power analyses, defining exit criteria, and using statistical models to improve inference</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years of industry experience in business analytics and a degree (Master&#39;s or PhD) in a quantitative field (e.g., Statistics, Econometrics, Computer Science, Engineering, Mathematics, Data Science, Operations Research)</li>
<li>Experience supporting safety, risk, Trust &amp; Safety, compliance, or employee wellbeing in high-volume call centre or customer operations environments</li>
<li>Expert skills in SQL and expert in at least one programming language for data analysis (Python or R)</li>
<li>Experience with non-experimental causal inference methods, experimentation, and machine learning techniques, ideally in a multi-sided platform setting</li>
<li>Working knowledge of schema design and high-dimensional data modelling (ETL framework like Airflow)</li>
<li>Ability to work under conditions of ambiguity in a fast-growth, sometimes uncertain and complex environment</li>
<li>Comfortable operating independently with minimal planning, direction, and supervision</li>
<li>Proven track record of influencing senior leaders and driving outcomes</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$176,000-$220,000 USD</Salaryrange>
      <Skills>SQL, Python, R, Machine Learning, Data Analysis, Data Modelling, Causal Inference, Experimentation, Statistical Models, Data Science, Operations Research, Statistics, Econometrics, Computer Science, Engineering, Mathematics</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airbnb</Employername>
      <Employerlogo>https://logos.yubhub.co/airbnb.com.png</Employerlogo>
      <Employerdescription>Airbnb is a global online marketplace for short-term vacation rentals. It was founded in 2008 and has since grown to over 5 million hosts who have welcomed over 2 billion guest arrivals.</Employerdescription>
      <Employerwebsite>https://www.airbnb.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airbnb/jobs/7579193</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>ca34a0c3-c68</externalid>
      <Title>Data Analyst, Product</Title>
      <Description><![CDATA[<p>As a Product Data Analyst on GitLab&#39;s Product Data Insights team, you&#39;ll help product teams make better decisions through trusted data and clear analysis. Working closely with Product Managers, Engineering, UX, and leadership, you&#39;ll examine customer behaviour across the customer journey to improve both the customer experience and business outcomes.</p>
<p>In this role, you&#39;ll build business intelligence solutions, define and maintain product key performance indicators, and turn usage data into insights that support strategic decisions. This role focuses on GitLab&#39;s Platforms sections, an important area that helps ensure GitLab&#39;s infrastructure can scale over time. You&#39;ll also support our Dedicated offering by exploring usage and behavioural patterns across that deployment type.</p>
<p>In GitLab&#39;s all-remote, asynchronous environment, you&#39;ll contribute to a transparent, values-driven culture where data storytelling helps teams understand product health and act with confidence.</p>
<p>Responsibilities:</p>
<ul>
<li>Collaborate with stakeholders ranging from individual contributor Product Managers to leadership to understand business questions and define the right analytical approach.</li>
<li>Gather data from multiple sources and connect disparate data points into a clear story about product usage, customer behaviour, and product health.</li>
<li>Establish reporting for new products and features, including defining metrics and building repeatable analysis that teams can use to track adoption and performance.</li>
<li>Analyse usage patterns within GitLab&#39;s Platforms sections to help teams understand how this area is performing and where improvements may be needed.</li>
<li>Explore behavioural trends for the Dedicated offering to surface insights that can inform product decisions and the customer experience.</li>
<li>Partner with UX to better understand user behaviour and support research with quantitative analysis.</li>
<li>Work with Engineering teams on data collection and instrumentation approaches so product usage can be measured accurately and consistently.</li>
<li>Collaborate with Analytics Engineers on data modelling needs for reporting and analysis, helping ensure the underlying data supports reliable decision-making.</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Advanced SQL skills and the ability to use data to answer complex product questions with clarity and accuracy.</li>
<li>Experience building business intelligence visualisations and dashboards, ideally in Tableau or a similar tool.</li>
<li>Experience working directly with Product and Engineering teams on data creation, instrumentation, and measurement strategy.</li>
<li>Experience partnering with Data and Analytics Engineering teams to shape data models for reporting and product analysis.</li>
<li>Strong analytical thinking and the ability to turn product usage data into practical recommendations for stakeholders at different levels.</li>
<li>Experience conducting and interpreting A/B tests to evaluate product changes and support evidence-based decisions.</li>
<li>Clear written and verbal communication skills, with the ability to explain complex findings in simple language.</li>
<li>Comfort working in an all-remote, asynchronous environment and collaborating effectively across functions while staying aligned with GitLab&#39;s values.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$78,400-$168,000 USD</Salaryrange>
      <Skills>SQL, Tableau, Business Intelligence, Data Analysis, Data Modelling, A/B Testing, Communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>GitLab</Employername>
      <Employerlogo>https://logos.yubhub.co/about.gitlab.com.png</Employerlogo>
      <Employerdescription>GitLab is an intelligent orchestration platform for DevSecOps, used by over 50 million registered users and more than 50% of the Fortune 100. It provides a suite of tools for software development, deployment, and management.</Employerdescription>
      <Employerwebsite>https://about.gitlab.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/gitlab/jobs/8455464002</Applyto>
      <Location>Remote, North America</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>205a5f25-1f0</externalid>
      <Title>Senior Manager, Infrastructure Data Science</Title>
      <Description><![CDATA[<p>Databricks is looking for a Senior Manager, Infrastructure Data Science to shape the future of Databricks infrastructure through data science. You will tackle some of the most complex challenges related to capacity planning, performance optimisation, reliability engineering, infrastructure efficiency, and customer experience.</p>
<p>At Databricks, we enable data teams to solve the world&#39;s toughest problems by building and running the world&#39;s best data and AI infrastructure platform.</p>
<p>As a Senior Manager, Infrastructure Data Science, you will lead a team of data scientists and work directly in partnership with engineering leaders to empower them with data-driven insights and solutions.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Thought leadership and strategic guidance on infrastructure planning, balancing current needs with future growth projections to ensure scalability and cost-effectiveness.</li>
<li>Promoting a data-driven approach to infrastructure decisions, influencing stakeholders across engineering, and supporting the use of data science insights for high-impact, aligned strategies.</li>
<li>Implementing data-driven solutions to identify, predict, and mitigate infrastructure risks and failures, reducing downtime and improving system reliability and performance, directly impacting end-user satisfaction and operational continuity.</li>
<li>Spearheading analyses to improve resource utilisation efficiency, identifying and eliminating inefficiencies across infrastructure usage, resulting in cost savings and optimised performance.</li>
<li>Establishing data frameworks that empower support teams to troubleshoot and resolve product issues faster, decreasing response times and enhancing customer experience and support quality.</li>
<li>Mentoring and managing a team of data scientists, instilling best practices in data science, engineering, and fostering a collaborative environment focused on innovative, scalable infrastructure solutions.</li>
</ul>
<p>We look for candidates with 10+ years of experience in infrastructure data science, machine learning, and advanced analytics at high-velocity, high-growth companies, as well as 5+ years of management experience hiring and developing teams.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$228,600-$314,250 USD</Salaryrange>
      <Skills>infrastructure data science, machine learning, advanced analytics, data visualisation, data engineering, data modelling, big data technologies, leadership, communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Databricks is a global organisation with over 7000 employees, founded in 2013 by the original creators of Apache Spark.</Employerdescription>
      <Employerwebsite>https://databricks.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7734812002</Applyto>
      <Location>San Francisco, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>49214f94-4ba</externalid>
      <Title>Senior Manager, Infrastructure Data Science</Title>
      <Description><![CDATA[<p>We are looking for a Senior Manager, Infrastructure Data Science to shape the future of Databricks infrastructure through data science. You will tackle some of the most complex challenges related to capacity planning, performance optimisation, reliability engineering, infrastructure efficiency, and customer experience.</p>
<p>As a Senior Manager, you will lead a team of data scientists and work directly in partnership with engineering leaders to empower them with data-driven insights and solutions. You will promote a data-driven approach to infrastructure decisions, influencing stakeholders across engineering and supporting the use of data science insights for high-impact, aligned strategies.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Thought leadership and strategic guidance on infrastructure planning, balancing current needs with future growth projections to ensure scalability and cost-effectiveness.</li>
<li>Implement data-driven solutions to identify, predict, and mitigate infrastructure risks and failures, reducing downtime and improving system reliability and performance, directly impacting end-user satisfaction and operational continuity.</li>
<li>Spearhead analyses to improve resource utilisation efficiency, identifying and eliminating inefficiencies across infrastructure usage, resulting in cost savings and optimised performance.</li>
<li>Establish data frameworks that empower support teams to troubleshoot and resolve product issues faster, decreasing response times and enhancing customer experience and support quality.</li>
<li>Mentor and manage a team of data scientists, instilling best practices in data science, engineering, and fostering a collaborative environment focused on innovative, scalable infrastructure solutions.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>10+ years of experience in infrastructure data science, machine learning, and advanced analytics at high-velocity, high-growth companies.</li>
<li>5+ years of management experience hiring and developing teams.</li>
<li>Experience developing data science, analytics, and machine learning and AI products and capabilities in a cloud environment.</li>
<li>Knowledge of statistics and rigorous analytical techniques.</li>
<li>Experience with data visualisation tools, knowledge of data engineering, data modelling, and big data technologies.</li>
<li>Leadership skills and experience to lead across functional and organisational lines.</li>
<li>Strong communication skills to explain and evangelise analytics and data science to executives and the senior management team.</li>
<li>Bias to action and passion for delivering high-quality data solutions.</li>
<li>A passion for problem-solving and comfort with ambiguity.</li>
<li>MS or Ph.D. in a quantitative field (Statistics, Math, CS, or Engineering).</li>
</ul>
<p>Pay Range Transparency</p>
<p>Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilising the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above. For more information regarding which range your location is in visit our page here.</p>
<p>Zone 1 Pay Range $228,600-$314,250 USD</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$228,600-$314,250 USD</Salaryrange>
      <Skills>infrastructure data science, machine learning, advanced analytics, cloud environment, statistics, data visualisation tools, data engineering, data modelling, big data technologies, leadership skills, communication skills, bias to action, passion for problem-solving, comfort with ambiguity, MS or Ph.D. in quantitative fields</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Databricks</Employername>
      <Employerlogo>https://logos.yubhub.co/databricks.com.png</Employerlogo>
      <Employerdescription>Founded in 2013 by the original creators of Apache Spark, Databricks is a global organisation with over 7,000 employees that builds and runs the world&apos;s best data and AI infrastructure platform.</Employerdescription>
      <Employerwebsite>https://databricks.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/databricks/jobs/7641390002</Applyto>
      <Location>Mountain View, California</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>9e36f934-a6f</externalid>
      <Title>Elastic Consultant - Public Sector</Title>
      <Description><![CDATA[<p>We are seeking an Elastic Consultant to join our team in the Public Sector. As a consultant, you will work with customers to deliver and execute on professional services engagements. You will have the opportunity to work with a tremendous services, engineering, and sales team and wear many hats. This is a critical role, as consultants have an amazing chance to make an immediate impact on the success of Elastic and our customers.</p>
<p>Strong customer advocacy, relationship building, and communications skills are essential for this role. You will need to be able to easily pivot from delivery to strategic engagements with customers. You will work with the wider Elastic organisation to support the customer&#39;s goals, their strategic requirements, and their journey with Elastic.</p>
<p>Responsibilities:</p>
<ul>
<li>Ownership of the strategic roadmap with the customer including quarterly strategic sessions with senior and key stakeholders</li>
<li>Solution design, development, and integration of Elastic products and APIs, platform architecture, and capacity planning in mission-critical environments</li>
<li>Comfortable working remotely in a highly distributed team</li>
<li>Development of demos and proof-of-concepts that highlight the value of the Elastic Stack</li>
<li>Data modelling, query development and optimisation, cluster tuning and scaling with a focus on fast search and analytics at scale</li>
<li>Solving our customers&#39; most challenging data problems</li>
<li>Working closely with the Elastic engineering, product management, and support teams to identify feature enhancements, extensions</li>
<li>Engaging with the Elastic Sales team to scope opportunities while assessing technical risks, questions, or concerns</li>
</ul>
<p>Requirements:</p>
<ul>
<li>Hands-on experience and an understanding of Elasticsearch and/or Lucene</li>
<li>Minimum of 2 years&#39; experience as a Software Engineer, System Administrator, or DevOps Engineer</li>
<li>Minimum of 5 years&#39; experience working as a Consultant, working to deliver and execute on professional services engagements</li>
<li>Currently holds UK security clearance, has previously held security clearance, or is willing to undergo security clearance</li>
<li>Experience as a technical instructor or public speaker to large audiences on enterprise infrastructure software technology to engineers, developers, and other technical positions</li>
<li>Excel at working directly with customers to gather, prioritise, plan, and execute solutions to customer business requirements as they relate to our technologies</li>
<li>Understanding of and passion for open-source technology, and proficiency in at least one programming language</li>
<li>Hands-on experience with large distributed systems from an architecture and development perspective</li>
<li>Knowledge of information retrieval and/or analytics domain</li>
<li>The nature of the work will require a high percentage of time onsite with customers, and you should expect to travel as a result of this requirement</li>
<li>Understanding of Linux, Java, and databases</li>
</ul>
<p>Bonus Points:</p>
<ul>
<li>Deep understanding of Elasticsearch and Lucene, including Elastic Certified Engineer certification</li>
<li>BS, MS, or PhD in Computer Science or related engineering discipline</li>
<li>Strong knowledge of Java and Linux/Unix environment, software development, and/or experience with distributed systems</li>
<li>Experience and interest in delivering and/or developing product training</li>
<li>Experience contributing to an open-source project or documentation</li>
</ul>
<p>As a distributed company, diversity drives our identity. Whether you&#39;re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn&#39;t matter if you&#39;re just out of college or your children are; we need you for what you can do. We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.</p>
<ul>
<li>Competitive pay based on the work you do here and not your previous salary</li>
<li>Health coverage for you and your family in many locations</li>
<li>Ability to craft your calendar with flexible locations and schedules for many roles</li>
<li>Generous number of vacation days each year</li>
<li>Increase your impact - we match up to $2000 (or local currency equivalent) for financial donations and service</li>
<li>Up to 40 hours each year to use toward volunteer projects you love</li>
<li>Embracing parenthood with a minimum of 16 weeks of parental leave</li>
</ul>
<p>Elastic is an equal opportunity employer and is committed to creating an inclusive culture that celebrates different perspectives, experiences, and backgrounds. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, pregnancy, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state or local law, ordinance or regulation.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Elasticsearch, Lucene, Java, Linux, Databases, Data modelling, Query development and optimisation, Cluster tuning and scaling, Fast search and analytics at scale, Information retrieval and/or analytics domain, Elastic Certified Engineer certification, BS, MS, or PhD in Computer Science or related engineering discipline, Software development, Distributed systems, Product training, Open-source project or documentation</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Elastic</Employername>
      <Employerlogo>https://logos.yubhub.co/elastic.co.png</Employerlogo>
      <Employerdescription>Elastic is a company that develops and distributes technology for search, security, and observability. It has seen significant growth in its work within the Public Sector.</Employerdescription>
      <Employerwebsite>https://www.elastic.co/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/elastic/jobs/7451737</Applyto>
      <Location>United Kingdom</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>0efa9467-f65</externalid>
      <Title>Senior Solutions Architect</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Senior Solutions Architect to join our Post Sales team and partner with enterprise customers to design and implement complex, AI-enabled Airtable solutions.</p>
<p>In this high-impact role, you&#39;ll lead the architecture and delivery of enterprise implementations, translating business workflows into scalable, AI-powered systems that accelerate time-to-value and drive long-term adoption. You&#39;ll work closely with Engagement Managers, Account teams, and Product to shape the future of how organisations leverage Airtable AI to transform their operations.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Architect and deliver paid SOW enterprise Airtable implementations for strategic customers</li>
<li>Lead scoping and design of Airtable AI solutions during professional services engagements</li>
<li>Develop repeatable AI solution patterns, including AI workflows, automations, and structured data pipelines</li>
<li>Establish best practices for data modelling, governance, and AI-enabled workflow design</li>
<li>Partner with Engagement Managers to scope services engagements and define implementation plans</li>
<li>Produce solution architecture diagrams, data models, and workflow documentation for enterprise customers</li>
<li>Drive adoption of Airtable AI capabilities across multiple enterprise implementations</li>
<li>Provide technical guidance and troubleshoot architectural challenges during implementations</li>
<li>Collaborate with Product and internal teams to relay AI feature feedback and influence the roadmap</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>5+ years of solution architecture, solution engineering, or technical consulting experience for enterprise SaaS platforms</li>
<li>Strong understanding of data modelling, database design, and governance</li>
<li>Experience designing workflow automation and operational systems</li>
<li>Familiarity with AI-enabled SaaS capabilities (LLMs, AI automation, AI-assisted workflows, or AI copilots)</li>
<li>Ability to translate complex business processes into scalable technical architectures</li>
<li>Strong stakeholder communication skills across technical and executive audiences</li>
<li>Experience scoping and delivering enterprise implementations</li>
<li>Proficiency in process mapping and architecture documentation tools (e.g., Lucidchart, Visio)</li>
<li>Experience designing AI-driven workflow solutions (LLMs, prompt design, AI agents, AI automation)</li>
<li>Experience implementing platforms like Airtable, Notion, ServiceNow, Salesforce, Workato, or similar workflow/data systems</li>
<li>Familiarity with APIs, integrations, and automation frameworks</li>
<li>Experience with enterprise data governance and security models</li>
<li>Experience in Professional Services or consulting organisations</li>
<li>Technical familiarity with scripting or low-code/no-code platforms</li>
<li>Experience developing repeatable architecture patterns or implementation frameworks</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>Solution Architecture, Data Modelling, Database Design, Governance, AI-Enabled SaaS Capabilities, LLMs, AI Automation, AI-Assisted Workflows, AI Copilots, Process Mapping, Architecture Documentation Tools, APIs, Integrations, Automation Frameworks, Enterprise Data Governance, Security Models, Professional Services, Scripting, Low-Code/No-Code Platforms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Airtable</Employername>
      <Employerlogo>https://logos.yubhub.co/airtable.com.png</Employerlogo>
      <Employerdescription>Airtable is a no-code app platform that empowers people to accelerate their most critical business processes. It has over 500,000 organisations, including 80% of the Fortune 100, relying on it to transform how work gets done.</Employerdescription>
      <Employerwebsite>https://airtable.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/airtable/jobs/8487502002</Applyto>
      <Location>San Francisco, CA; New York, NY; Remote - US</Location>
      <Country></Country>
      <Postedate>2026-04-18</Postedate>
    </job>
    <job>
      <externalid>319e8ea9-a7e</externalid>
      <Title>Sr. Manager of Sales Compensation</Title>
      <Description><![CDATA[<p>We are seeking an experienced and highly analytical Senior Manager of Sales Compensation to develop, implement, and manage sales commission plans that drive performance, align with business objectives, and ensure accuracy and timely payment to the sales organisation.</p>
<p>The ideal candidate will possess a deep understanding of sales processes, compensation best practices, and strong leadership skills to manage the end-to-end compensation cycle, specifically within the high-tech industry.</p>
<p>Key responsibilities include:</p>
<ul>
<li><p>Leading the annual review and design process for all sales incentive compensation plans, ensuring alignment with corporate strategy, sales goals, competitive market data, and industry best practices.</p>
</li>
<li><p>Modelling, analysing, and forecasting the financial impact of proposed compensation plan changes, presenting data-driven recommendations to executive leadership.</p>
</li>
<li><p>Collaborating with Sales, Finance, HR, and Legal teams to ensure plan feasibility, compliance, and effective rollout.</p>
</li>
<li><p>Overseeing the end-to-end administration of sales compensation, including territory and quota management, data integrity, calculation, and conflict resolution.</p>
</li>
<li><p>Managing and optimising the sales compensation process to ensure accurate and efficient compensation calculations.</p>
</li>
<li><p>Evaluating and leading implementation of a robust, scalable compensation system.</p>
</li>
<li><p>Developing and maintaining documentation, policies, and procedures related to sales compensation plans and processes.</p>
</li>
<li><p>Ensuring compliance with all internal policies and external regulations.</p>
</li>
<li><p>Generating regular and ad-hoc compensation reports and dashboards to provide insights into plan performance, sales effectiveness, and cost of sales.</p>
</li>
<li><p>Conducting quarterly and annual audits of compensation data and calculations to ensure accuracy and resolve discrepancies.</p>
</li>
<li><p>Analysing sales attainment, plan effectiveness, and making data-driven recommendations for plan adjustments.</p>
</li>
<li><p>Serving as the primary subject matter expert on all sales compensation matters, including industry standards and emerging trends.</p>
</li>
<li><p>Developing and delivering training materials and presentations to educate the sales force and management on compensation plan details and changes.</p>
</li>
<li><p>Mentoring and guiding junior team members, fostering a culture of accuracy, accountability, and continuous improvement.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$135,000 - $205,000 a year</Salaryrange>
      <Skills>Sales Compensation, Finance, Sales Operations, Sales Performance Management (SPM) software, Excel, SQL, BI platforms, MBA or Master&apos;s degree in a quantitative field, Experience in the Aerospace &amp; Defense, or High-tech industry, Advanced proficiency in data modelling and analysis</Skills>
      <Category>Sales</Category>
      <Industry>Technology</Industry>
      <Employername>Shield AI</Employername>
      <Employerlogo>https://logos.yubhub.co/shield.ai.png</Employerlogo>
      <Employerdescription>Shield AI is a venture-backed deep-tech company founded in 2015, with a mission to protect service members and civilians with intelligent systems.</Employerdescription>
      <Employerwebsite>https://www.shield.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/shieldai/03f58f8e-e3fe-4b52-a77e-14c154629f3f</Applyto>
      <Location>United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>85f1ada0-78d</externalid>
      <Title>Security Engineer</Title>
      <Description><![CDATA[<p>We&#39;re seeking a Security Engineer at the senior-level or above on our Security Operations team with strong detection engineering experience. You&#39;ll design and develop high-fidelity detection content, build and operate the data pipelines that power our security operations, develop automation playbooks that accelerate response, and work across a uniquely diverse telemetry landscape spanning cloud infrastructure, embedded vessel platforms, corporate systems, and operational technology.</p>
<p>This role is heavily weighted toward detection engineering. You should think in terms of adversary behaviour and telemetry coverage, not just alert triage. You&#39;ll own detections end-to-end: from identifying gaps in coverage, through designing and testing detection logic, to tuning and validating in production.</p>
<p>Key Responsibilities:</p>
<ul>
<li><p>Design, build, test, and tune high-fidelity detection rules and analytic queries across endpoint, cloud, network, identity, and DLP telemetry sources</p>
</li>
<li><p>Develop and maintain detection content using detection-as-code practices including version-controlled logic, automated testing, and CI/CD deployment</p>
</li>
<li><p>Map detection coverage to MITRE ATT&amp;CK, identify gaps, and prioritise new detection development based on threat intelligence and business risk</p>
</li>
<li><p>Engineer correlation rules, behavioural analytics, and anomaly-based detections that minimise false positives while surfacing real adversary tradecraft</p>
</li>
<li><p>Own the detection lifecycle from initial development through production tuning, performance monitoring, and retirement</p>
</li>
<li><p>Build and operate pipelines to ingest, normalise, enrich, and manage security telemetry at scale across diverse data sources, using Terraform and infrastructure-as-code practices to deploy and maintain logging and detection infrastructure</p>
</li>
<li><p>Design and maintain log collection, parsing, and enrichment configurations that ensure the right telemetry is available at the right fidelity for detection and investigation</p>
</li>
<li><p>Evaluate and onboard new telemetry sources as Saronic&#39;s infrastructure and threat landscape evolve</p>
</li>
<li><p>Monitor pipeline health, data quality, and ingestion reliability to ensure detections operate on complete and accurate data</p>
</li>
<li><p>Develop and manage automated response playbooks in SOAR platforms to accelerate containment and reduce analyst toil</p>
</li>
<li><p>Build automation that enriches alerts with contextual data, reducing investigation time and improving analyst decision-making</p>
</li>
<li><p>Support incident response efforts and translate lessons learned into improved detections and playbooks</p>
</li>
<li><p>Partner with SOC analysts, Cloud Security, Product Security, and IT teams to close visibility and detection gaps across environments</p>
</li>
<li><p>Collaborate with threat intelligence to ensure detection engineering is informed by current adversary TTPs relevant to defence, maritime, and autonomous systems</p>
</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li><p>3+ years of hands-on experience in detection engineering, security operations, security automation, or a closely related security engineering role</p>
</li>
<li><p>Demonstrated experience designing, testing, and tuning detection rules and analytic queries across production security telemetry (endpoint, cloud, network, identity, or DLP)</p>
</li>
<li><p>Hands-on experience with SIEM platforms and proficiency with query languages such as SPL, KQL, or equivalent</p>
</li>
<li><p>Experience building and operating security data pipelines, including log ingestion, normalisation, enrichment, and data quality management</p>
</li>
<li><p>Understanding of data engineering concepts including ETL pipelines, data modelling, schema design, and indexing as applied to security telemetry</p>
</li>
<li><p>Hands-on coding experience in Python, PowerShell, Go, or Rust for security automation, detection tooling, or pipeline development, and familiarity with Terraform for managing detection and logging infrastructure as code</p>
</li>
<li><p>Understanding of MITRE ATT&amp;CK framework and its application to detection coverage and gap analysis</p>
</li>
<li><p>Ability to obtain and maintain a security clearance</p>
</li>
</ul>
<p>Preferred Qualifications:</p>
<ul>
<li><p>Experience in defence, aerospace, robotics, autonomy, or other high-assurance environments</p>
</li>
<li><p>Experience with EDR platforms including custom detection rule creation and telemetry analysis</p>
</li>
<li><p>Experience with cloud-native detection in AWS and Microsoft 365/Azure</p>
</li>
<li><p>Experience using Terraform to deploy and manage security monitoring infrastructure, log pipeline components, or cloud-native security service configurations</p>
</li>
<li><p>Hands-on experience with incident response, threat hunting, or adversary emulation</p>
</li>
<li><p>Exposure to embedded Linux, operational technology, or ICS telemetry and detection</p>
</li>
<li><p>Familiarity with NIST SP 800-171, NIST SP 800-53, or CMMC and their logging and monitoring requirements</p>
</li>
<li><p>Relevant certifications such as GCIH, GCIA, GCDA, GSOM, OSDA, or OSCP</p>
</li>
</ul>
<p>Additional Information:</p>
<ul>
<li><p>Benefits: Medical Insurance, Dental and Vision Insurance, Time Off, Parental Leave, Competitive Salary, Retirement Plan, Stock Options, Life and Disability Insurance, Pet Insurance</p>
</li>
<li><p>This role requires access to export-controlled information or items that require &#39;U.S. Person&#39; status.</p>
</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>detection engineering, security operations, security automation, SIEM platforms, query languages, data engineering, ETL pipelines, data modelling, schema design, indexing, Python, PowerShell, Go, Rust, Terraform, MITRE ATT&amp;CK framework, security clearance, EDR platforms, cloud-native detection, incident response, threat hunting, adversary emulation, embedded Linux, operational technology, ICS telemetry, NIST SP 800-171, NIST SP 800-53, CMMC, GCIH, GCIA, GCDA, GSOM, OSDA, OSCP</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Saronic Technologies</Employername>
      <Employerlogo>https://logos.yubhub.co/saronictechnologies.com.png</Employerlogo>
      <Employerdescription>Saronic Technologies is a leader in revolutionizing defense autonomy at sea, dedicated to developing state-of-the-art solutions that enhance maritime operations for the Department of Defense (DoD) through autonomous and intelligent platforms.</Employerdescription>
      <Employerwebsite>https://www.saronictechnologies.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.lever.co/saronic/79424778-76c1-41c6-8385-cba5f6ddc50e</Applyto>
      <Location>San Francisco</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>e503559e-cf7</externalid>
      <Title>Senior Machine Learning Engineer</Title>
      <Description><![CDATA[<p><strong>Job Title: Senior Machine Learning Engineer</strong></p>
<p><strong>Job Description:</strong></p>
<p>Before 1965, it was extremely difficult and time-consuming to analyze complicated signals, like radio or images. You could solve it, but you had to throw a ton of compute at it. That all changed with the invention of the Fast Fourier transform, which could efficiently break that signal down into the frequencies that are a part of it.</p>
<p>The Risk Onboarding team is working on efficiently reviewing customers’ applications without compromising on quality. We are the front line of defense for preventing money laundering and financial crimes, building systems to verify that someone is who they say they are and that we are allowed to do business with them.</p>
<p><strong>About Us:</strong></p>
<p>At Mercury, we craft an exceptional banking experience for startups. Our team is focused on ensuring our products create a safe environment that meets the needs of our customers, administrators, and regulators.</p>
<p><strong>Job Responsibilities:</strong></p>
<p>As part of this role, you will:</p>
<ul>
<li>Partner with data science &amp; engineering teams to design and deploy ML &amp; Gen AI microservices, primarily focusing on automating reviews</li>
<li>Work with a full-stack engineering team to embed these services into the overall review experience, including human-in-the-loop review, escalations, and feeding human decisions back into the service</li>
<li>Implement testing, observability, alerting, and disaster recovery for all services</li>
<li>Implement tracing, performance, and regression testing</li>
<li>Feel a strong sense of product ownership and actively seek responsibility – we often self-organize on small/medium projects, and we want someone who’s excited to help shape and build Mercury’s future</li>
</ul>
<p><strong>Ideal Candidate:</strong></p>
<p>The ideal candidate for the role has:</p>
<ul>
<li>7+ years of experience in roles like machine learning engineering, data engineering, backend software engineering, and/or devops</li>
<li>Expertise with:
<ul>
<li>A full modern data stack: Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow</li>
<li>SQL, dbt, Python</li>
<li>OLAP / OLTP data modelling and architecture</li>
<li>Key-value stores: Redis, DynamoDB, or equivalent</li>
<li>Streaming / real-time data pipelines: Kinesis, Kafka, Redpanda</li>
<li>API frameworks: FastAPI, Flask, etc.</li>
<li>Production ML service experience</li>
<li>Working across a full-stack development environment, with experience transferable to Haskell, React, and TypeScript</li>
</ul>
</li>
</ul>
<p><strong>Total Rewards Package:</strong></p>
<p>The total rewards package at Mercury includes base salary, equity (stock options/RSUs), and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate’s experience, expertise, geographic location, and internal pay equity relative to peers.</p>
<p><strong>Salary Range:</strong></p>
<p>Our target new hire base salary ranges for this role are the following:</p>
<ul>
<li>US employees (any location): $200,700 - $250,900</li>
<li>Canadian employees (any location): CAD 189,700 - 237,100</li>
</ul>
<p><strong>Diversity &amp; Belonging:</strong></p>
<p>Mercury values diversity &amp; belonging and is proud to be an Equal Employment Opportunity employer. All individuals seeking employment at Mercury are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, veteran status, gender identity, sexual orientation, or any other legally protected characteristic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,700 - $250,900 (US) | CAD 189,700 - 237,100 (Canada)</Salaryrange>
      <Skills>Snowflake, dbt, Fivetran, Airbyte, Dagster, Airflow, SQL, Python, OLAP / OLTP data modelling and architecture, Redis, DynamoDB, Kinesis, Kafka, Redpanda, FastAPI, Flask, Production ML Service experience, Haskell, React, TypeScript</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a fintech company that provides banking services through Choice Financial Group and Column N.A.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5639559004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>6e92655b-cbb</externalid>
      <Title>Senior Data Scientist - Banking</Title>
      <Description><![CDATA[<p>We&#39;re looking for a full-stack Data Scientist to support our Cards &amp; Credit roadmap, partnering closely with Product, Engineering, Design, Underwriting, and Operations to shape how our card and credit products evolve and scale.</p>
<p>In this role, you&#39;ll apply strong analytical judgment and product intuition to help us understand customer behaviour, evaluate trade-offs, and make smart investment decisions across the cards and lending lifecycles, from eligibility and activation to spend, retention, incentives, and credit performance. You&#39;ll help build a data-informed culture across Mercury so teams can move quickly, measure what matters, and invest intelligently.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Bringing impeccable communication and complete ownership, independently identifying opportunities, developing strong points of view, and influencing executives, Cards &amp; Credit leaders, and cross-functional partners through clear, concise, and persuasive storytelling.</li>
<li>Developing a nuanced understanding of cardholder behaviour and economics, helping teams reason about trade-offs between growth, engagement, risk, and unit economics.</li>
<li>Defining, owning, and analysing metrics that inform both tactical decisions and long-term strategy across the cards and credit lifecycle (e.g., eligibility, activation, spend, utilisation, rewards, retention, loss signals).</li>
<li>Designing and evaluating experiments using rigorous statistical approaches, including A/B testing, cohort analysis, causal inference techniques, and trend analysis.</li>
<li>Building and improving data pipelines and tools to streamline data collection, processing, and analysis workflows, ensuring the integrity, reliability, and security of data assets.</li>
<li>Building and deploying predictive models to forecast key outcomes, inform product treatments, and deepen understanding of causal drivers.</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>7+ years of experience working with large datasets to drive product or business impact in data science or analytics roles.</li>
<li>Fluency in SQL and comfort with Python.</li>
<li>Strong judgment in defining and analysing product metrics, running experiments, and translating ambiguous questions into structured analyses.</li>
<li>Exceptional proactivity and independence, identifying opportunities, forming strong points of view, and making your case to stakeholders.</li>
<li>Experience with ETL processes and modern data modelling (e.g., dbt, dimensional models, Airflow), with a solid understanding of how data is produced and consumed.</li>
<li>Experience with analytical approaches ranging from behavioural modelling to experimentation to optimisation, and, importantly, knowing when simpler approaches are the right answer.</li>
<li>Ability to apply AI tools to accelerate analytical and business workflows, improving scalability and decision quality while reducing manual or repetitive work across teams.</li>
</ul>
<p>Nice to have:</p>
<ul>
<li>Experience working on cards or credit products, with familiarity in card economics and lifecycle concepts (e.g., spend behaviour, interchange, rewards and incentives, utilisation, credit limits, retention).</li>
<li>Experience developing quantitative pricing models or engines (e.g., dynamic pricing, incentive optimisation, or marketplace pricing systems).</li>
<li>Experience applying optimisation techniques to resource allocation or decision systems (e.g., customer operations, capacity planning, or policy optimisation).</li>
<li>Experience building or supporting credit models, including probability of default modelling, cashflow modelling, or dynamic credit limit setting.</li>
</ul>
<p style="margin-top:24px;font-size:13px;color:#666;">XML job scraping automation by <a href="https://yubhub.co">YubHub</a></p>]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>remote</Workarrangement>
      <Salaryrange>$200,700 - $250,900 USD</Salaryrange>
      <Skills>SQL, Python, ETL processes, modern data modelling, A/B testing, cohort analysis, causal inference techniques, trend analysis, data pipelines, predictive models, cardholder behaviour and economics, quantitative pricing models, optimisation techniques, credit models, probability of default modelling, cashflow modelling, dynamic credit limit setting</Skills>
      <Category>Finance</Category>
      <Industry>Finance</Industry>
      <Employername>Mercury</Employername>
      <Employerlogo>https://logos.yubhub.co/mercury.com.png</Employerlogo>
      <Employerdescription>Mercury is a fintech company that provides financial infrastructure for startups and growing businesses.</Employerdescription>
      <Employerwebsite>https://www.mercury.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/mercury/jobs/5799320004</Applyto>
      <Location>San Francisco, CA, New York, NY, Portland, OR, or Remote within Canada or United States</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>faec8dc3-4d3</externalid>
      <Title>Senior Machine Learning Scientist</Title>
      <Description><![CDATA[<p>We are seeking a Senior Machine Learning Scientist to help grow the Machine Learning Science team. The ideal candidate has a strong knowledge of artificial intelligence (AI), including machine learning (ML) fundamentals and extensive experience with deep learning (DL) methods. They will be responsible for the development of algorithms for early, blood-based detection tests for cancer. They will build on a foundation of ML/DL and statistical skills to develop models for identifying molecular signals from blood. They will also work with computational biologists, molecular biologists and ML engineers to design and drive research experiments, and will have a significant impact on the continued growth of an organisation dedicated to changing the entire landscape of cancer.</p>
<p>The role reports to the Director, Machine Learning Science. It can be a hybrid role based in our Brisbane, California headquarters (2-3 days per week in office), or remote.</p>
<p>Key responsibilities include:</p>
<ul>
<li>Independently pursuing cutting-edge research in AI applied to biological problems</li>
<li>Building new models or fine-tuning existing models to identify biological changes resulting from disease</li>
<li>Building models that achieve high accuracy and that generalise robustly to new data</li>
<li>Applying contemporary interpretability techniques to provide a deeper understanding of the underlying signal identified by the model, ideally suggesting potential biological mechanisms</li>
<li>Working closely with ML Engineering partners to ensure that Freenome&#39;s computational infrastructure supports optimal model training and iteration</li>
<li>Taking a mindful, transparent, and humane approach to your work</li>
</ul>
<p>Requirements include:</p>
<ul>
<li>PhD or equivalent research experience with an AI emphasis and in a relevant, quantitative field such as Computer Science, Statistics, Mathematics, Engineering, Computational Biology, or Bioinformatics</li>
<li>3+ years of postdoc or post-PhD industry experience achieving impactful results using relevant modelling techniques</li>
<li>Expertise, demonstrated by research publications or industry achievements, in applied machine learning, deep learning and complex data modelling</li>
<li>Practical and theoretical understanding of fundamental ML models like generalised linear models, kernel machines, decision trees and forests, neural networks</li>
<li>Practical and theoretical understanding of DL models like large language models or other foundation models</li>
<li>Extensive experience with training paradigms like supervised learning, self-supervised learning, and contrastive learning</li>
<li>Proficient in current state of the art in ML/DL approaches in different domains, with an ability to envision their applications in biological data</li>
<li>Proficiency in a general-purpose programming language: Python, R, Java, C, C++, etc.</li>
<li>Proficiency in one or more ML frameworks such as PyTorch, TensorFlow, or JAX, and ML platforms like Hugging Face</li>
<li>Experience in ML analysis and developer tools like TensorBoard, MLflow or Weights &amp; Biases</li>
<li>Excellent ability to communicate across disciplines, work collaboratively, and make progress in smaller steps via experimental iterations</li>
<li>A passion for innovation and demonstrated initiative in tackling new areas of research</li>
</ul>
<p>Nice to have qualifications include:</p>
<ul>
<li>Deep domain-specific experience in computational biology, genomics, proteomics or a related field</li>
<li>Experience in building DL models for genomic data, with knowledge of state-of-the-art DNA foundation models</li>
<li>Experience in NGS data analysis and bioinformatic pipelines</li>
<li>Experience with containerized cloud computing environments such as Docker in GCP, Azure, or AWS</li>
<li>Experience in a production software engineering environment, including the use of automated regression testing, version control, and deployment systems</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$173,775 - $246,750</Salaryrange>
      <Skills>PhD or equivalent research experience, Applied machine learning, Deep learning, Complex data modelling, Generalised linear models, Kernel machines, Decision trees and forests, Neural networks, Large language models, Supervised learning, Self-supervised learning, Contrastive learning, Python, R, Java, C, C++, PyTorch, TensorFlow, JAX, Hugging Face, TensorBoard, MLflow, Weights &amp; Biases</Skills>
      <Category>Engineering</Category>
      <Industry>Healthcare</Industry>
      <Employername>Freenome</Employername>
      <Employerlogo>https://logos.yubhub.co/freenome.com.png</Employerlogo>
      <Employerdescription>Freenome is a biotechnology company focused on developing liquid biopsy tests for cancer.</Employerdescription>
      <Employerwebsite>https://freenome.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/freenome/jobs/7963050002</Applyto>
      <Location>Brisbane, California</Location>
      <Country></Country>
      <Postedate>2026-04-17</Postedate>
    </job>
    <job>
      <externalid>4b700ee3-482</externalid>
      <Title>Analytics Engineer (Finance)</Title>
      <Description><![CDATA[<p>We are looking for an Analytics Engineer to join our team. As an Analytics Engineer, you will be responsible for translating data requirements from across the organisation into robust and reusable data models, with a particular focus on financial regulatory submissions or financial analytics.</p>
<p>Additional responsibilities include:</p>
<ul>
<li>Maintaining consistent and clear documentation and communicating with business stakeholders (both technical and non-technical)</li>
<li>Collaborating with the wider data team to help meet business goals, including peer reviews</li>
<li>Taking ownership of projects end-to-end and managing priorities accordingly</li>
</ul>
<p>Our ideal candidate will have strong experience with SQL, experience working within the credit domain, and be a self-starter with the ability to think outside the box.</p>
<p>They will also have good attention to detail, strong experience with Looker or a similar visualisation tool, and strong communication and documentation skills for both technical and non-technical audiences.</p>
<p>As a member of our team, you will have the opportunity to work on a wide range of projects and contribute to the development of our data capabilities.</p>
<p>We offer a competitive salary and benefits package, including 25 days holiday, an extra day&#39;s holiday for your birthday, and annual leave increased with length of service.</p>
<p>We are an equal opportunities employer and welcome applications from all qualified candidates.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Looker, credit domain, data modelling, financial analytics, dbt, data visualisation</Skills>
      <Category>Engineering</Category>
      <Industry>Finance</Industry>
      <Employername>Starling Bank</Employername>
      <Employerlogo>https://logos.yubhub.co/starlingbank.com.png</Employerlogo>
      <Employerdescription>Starling Bank is a digital bank that provides financial services. It has over 3.5 million accounts and employs over 2,800 people across five offices.</Employerdescription>
      <Employerwebsite>https://www.starlingbank.com/</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/D74D88F51C</Applyto>
      <Location>Southampton</Location>
      <Country></Country>
      <Postedate>2026-03-20</Postedate>
    </job>
    <job>
      <externalid>8b0e9386-fa9</externalid>
      <Title>Data Engineering &amp; Data Science Consultant</Title>
      <Description><![CDATA[<p><strong>Data Engineering &amp; Data Science Consultant</strong></p>
<p>You will work hands-on on the design, build, and operationalisation of modern data and analytics solutions. You will contribute across the full lifecycle – from data ingestion and transformation to analytics, machine learning, and production deployment. You will collaborate closely with data engineers, architects, data scientists, and business stakeholders to deliver scalable, reliable, and value-driven data solutions in complex client environments.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Apply data science and machine learning techniques to real-world business problems</li>
<li>Work with structured and semi-structured data in data lakes, lakehouses, and data warehouses</li>
<li>Develop and optimise data transformations for analytical and machine learning workloads</li>
<li>Support the productionisation of data and ML solutions, including monitoring and optimisation</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>3–5 years of experience in data engineering, data science, or analytics</li>
<li>Hands-on experience delivering data and analytics solutions in project-based or client environments</li>
<li>Strong problem-solving skills and a pragmatic, delivery-oriented mindset</li>
</ul>
<p><strong>Data Engineering Foundations</strong></p>
<ul>
<li>Experience building end-to-end data pipelines (ingestion, transformation, storage)</li>
<li>Solid understanding of data modelling, data transformations, and feature engineering</li>
<li>Familiarity with cloud-based data platforms, such as Azure, AWS, or GCP</li>
</ul>
<p><strong>Applied Data Science &amp; Analytics</strong></p>
<ul>
<li>Experience applying statistical analysis and machine learning techniques</li>
<li>Strong programming skills in Python</li>
<li>Very good SQL skills and experience working with relational databases</li>
</ul>
<p><strong>Nice to have</strong></p>
<ul>
<li>Experience with streaming technologies (e.g. Kafka, Azure Event Hubs)</li>
<li>Exposure to GenAI, NLP, time series, or advanced analytics use cases</li>
<li>Experience with NoSQL databases (e.g. MongoDB, Cosmos DB)</li>
</ul>
<p><strong>Language &amp; Mobility</strong></p>
<ul>
<li>Very good English skills</li>
<li>Willingness to travel for project-related work</li>
</ul>
<p><strong>Benefits</strong></p>
<p>Join our growing Data &amp; Analytics practice and make a difference. In this practice you will work with the most innovative technological solutions in the modern data ecosystem, and you will see your own ideas transform into breakthrough results across Data &amp; Analytics strategy, Data Management &amp; Governance, Data Platforms &amp; Engineering, and Analytics &amp; Data Science.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data science, machine learning, data engineering, cloud-based data platforms, data modelling, data transformations, feature engineering, Python, SQL, relational databases, streaming technologies, GenAI, NLP, time series, advanced analytics, NoSQL databases</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Infosys Consulting - Europe</Employername>
      <Employerlogo>https://logos.yubhub.co/view.com.png</Employerlogo>
      <Employerdescription>Infosys Consulting - Europe is a globally renowned management consulting firm that works with market leading brands across sectors. The company is a mid-size player within the scale of Infosys, a top-5 powerhouse IT brand.</Employerdescription>
      <Employerwebsite>https://jobs.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.workable.com/view/43f8dm12rcrpZUsa228TbZ/data-engineering-%26-data-science-consultant-in-london-at-infosys-consulting---europe</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>11a36eab-3cb</externalid>
      <Title>Senior Data Engineer</Title>
      <Description><![CDATA[<p><strong>Job Description</strong></p>
<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? At Future, we are transforming our data-driven decision-making processes and we are looking for a passionate and experienced Data Engineer to join us.</p>
<p>This is an exciting opportunity for someone who excels in a creative environment, enjoys solving complex data challenges, and is eager to build impactful business insights. In this role, you will report directly to the Head of Data Engineering.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop and maintain new and existing features of the data platform.</li>
<li>Take responsibility for the delivery of development projects, including scoping, writing, and sizing the stories involved.</li>
<li>Take ownership of BAU processes, develop area-specific domain mastery, and seek ways to automate them or reduce their impact.</li>
<li>Propose and advocate for changes that reduce risk, cost, and overhead.</li>
<li>Provide appropriate documentation for the pipelines you develop.</li>
<li>Parameterise pipelines so configuration can be changed easily without deep changes to the codebase.</li>
<li>Apply appropriate testing principles to ensure code is fit for purpose.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>Experience using Python on Google Cloud Platform for big data projects: BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer</li>
<li>SQL development skills</li>
<li>Experience using Dataform or dbt</li>
<li>Demonstrated strength in data modelling, ETL development, and data warehousing</li>
<li>Knowledge of data management fundamentals and data storage principles</li>
<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems</li>
</ul>
<p><strong>What&#39;s in it for you</strong></p>
<p>The expected range for this role is £50,000 - £60,000</p>
<p>This is a hybrid role based in our Bath office, working three days from the office and two from home. Plus more great perks, which include:</p>
<ul>
<li>Uncapped leave, because we trust you to manage your workload and time</li>
<li>When we hit our targets, enjoy a share of our profits with a bonus</li>
<li>Refer a friend and get rewarded when they join Future</li>
<li>Wellbeing support with access to our Colleague Assistant Programmes</li>
<li>Opportunity to purchase shares in Future, with our Share Incentive Plan</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£50,000 - £60,000</Salaryrange>
      <Skills>Python, Google Cloud Platform, BigQuery, Dataflow, Apache Beam, Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, Dataform, dbt, data modelling, ETL development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Future</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Future is a global leader in specialist media, with over 3,000 employees working across 200+ media brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/3535C2B9B5</Applyto>
      <Location>Bath</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>6d5e164b-74d</externalid>
      <Title>Data Engineer</Title>
      <Description><![CDATA[<p><strong>Data Engineer</strong></p>
<p>Are you ready to contribute to the evolution of our data pipelines for our B2C division? We are transforming our data-driven decision-making processes and are looking for a passionate and experienced Data Engineer to join us. This is an exciting opportunity for someone who thrives in a creative environment and enjoys solving complex data challenges. You&#39;ll report to the Lead Data Engineer and sit within the wider Data Engineering team.</p>
<p>The Data &amp; Business Intelligence team guides our organisation to become more data-driven. Responding quickly to market changes gives us a competitive edge. By ensuring visibility of objective performance data, we empower our teams to make rapid, informed decisions that enhance overall performance.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Develop and maintain new and existing features of the data platform.</li>
<li>Take responsibility for the delivery of development projects.</li>
<li>Utilise established software engineering practices and principles.</li>
<li>Take ownership of BAU processes and develop area-specific domain mastery.</li>
<li>Ensure compliance requirements are followed.</li>
<li>Utilise CI/CD and infrastructure as code (Terraform) for rapid deployment of changes.</li>
</ul>
<p><strong>Experience</strong></p>
<ul>
<li>Experience using Python on Google Cloud Platform for big data projects: BigQuery, Dataflow (Apache Beam), Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer.</li>
<li>SQL development skills.</li>
<li>Demonstrated strength in data modelling, ETL development, and data warehousing.</li>
<li>Knowledge of data management fundamentals and data storage principles.</li>
<li>Familiarity with statistical models or data mining algorithms and practical experience applying these to business problems.</li>
</ul>
<p><strong>What&#39;s in it for you</strong></p>
<p>The expected range for this role is £45,000 - £50,000. This is a Hybrid role from our Bath Office, working three days from the office, two from home. Plus more great perks, which include:</p>
<ul>
<li>Uncapped leave, because we trust you to manage your workload and time.</li>
<li>When we hit our targets, enjoy a share of our profits with a bonus.</li>
<li>Refer a friend and get rewarded when they join Future.</li>
<li>Wellbeing support with access to our Colleague Assistant Programmes.</li>
<li>Opportunity to purchase shares in Future, with our Share Incentive Plan.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>£45,000 - £50,000</Salaryrange>
      <Skills>Python, Google Cloud Platform, BigQuery, Dataflow, Apache Beam, Cloud Run Functions, Cloud Run, Cloud Workflows, Cloud Composer, SQL, data modelling, ETL development, data warehousing, data management fundamentals, data storage principles, statistical models, data mining algorithms</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Future</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Future is a global leader in specialist media, with over 3,000 employees working across 200+ media brands.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/BDB1B6F4CF</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>07222a52-75c</externalid>
      <Title>Data Analyst</Title>
      <Description><![CDATA[<p>We&#39;re looking for a Data Analyst to join our team, working in our Tower Bridge office three days a week. As a Data Analyst, you&#39;ll be responsible for taking ownership of the data relationship with business stakeholders, reporting on baseline performance and trends, and driving us towards a self-serve first culture.</p>
<p>Responsibilities</p>
<ul>
<li>Taking ownership of the data relationship with business stakeholders, bridging commercial and product discussions</li>
<li>Reporting on our baseline performance and trends, keeping our finger on the pulse</li>
<li>Driving us towards a self-serve first culture</li>
<li>Reporting on complex experimentation</li>
<li>Building close analyst-stakeholder relationships, the most crucial component of a well-functioning data team; stakeholders should consider you part of their team</li>
<li>Taking on varied work across data disciplines, e.g. data product design, data modelling, analytical deep-dives, and contributing to data science, AI and machine learning projects</li>
</ul>
<p>What does a great candidate look like?</p>
<ul>
<li>Demonstrates curiosity and a proactive approach to learning, constantly seeking opportunities to deepen understanding and explore new ideas</li>
<li>Exhibits a strong desire for ownership and accountability, taking the initiative to drive projects forward and deliver impactful results</li>
<li>Skilled at translating complex business challenges into actionable data insights, driving meaningful outcomes and business impact</li>
<li>Excellent communication skills, able to convey ideas clearly and effectively to both technical and non-technical stakeholders</li>
<li>While we recognise the value of experience, we place greater emphasis on finding the right cultural fit and individual capabilities for the role</li>
<li>Strives to set high standards and continually seeks opportunities for personal and team growth, fostering an environment of continuous improvement</li>
<li>Inspired by the prospect of contributing to industry innovation and eager to participate in reimagining the future landscape</li>
</ul>
<p>Technical Skills</p>
<ul>
<li>Proficient in SQL with a proven track record of handling complex data queries and manipulation</li>
<li>Demonstrable experience with Tableau or similar data visualisation tools, including the ability to create insightful and user-friendly dashboards and reports</li>
<li>Experience with DBT (Data Build Tool) or similar data transformation technologies, with a keen understanding of data modelling and ETL processes</li>
<li>A background in statistics and/or programming, with the ability to apply statistical methods and algorithms to extract insights from data, especially surrounding experimentation</li>
<li>Familiarity with machine learning (ML) and artificial intelligence (AI) techniques, preferably in Python, R, or another programming language, enabling the development and implementation of predictive models and data-driven solutions</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>SQL, Tableau, DBT, data modelling, ETL processes, statistics, programming, machine learning, artificial intelligence</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Zoopla</Employername>
      <Employerlogo>https://logos.yubhub.co/j.com.png</Employerlogo>
      <Employerdescription>Zoopla is a property brand that provides data and information on every UK property, with over 50 million people visiting their website every month.</Employerdescription>
      <Employerwebsite>https://apply.workable.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://apply.workable.com/j/654B0ECCFF</Applyto>
      <Location></Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>bb7bb8e9-e31</externalid>
      <Title>Data Engineer - 12 Month TFT</Title>
      <Description><![CDATA[<p>We&#39;re looking for an experienced Data Engineer to join our team at Electronic Arts. As a Data Engineer, you will collaborate with the Marketing team to implement data strategies and develop complex ETL pipelines that support dashboards for promoting deeper understanding of our business.</p>
<p>You will have experience developing and establishing scalable, efficient, automated processes for large-scale data analyses. You will also stay informed of the latest trends and research on all aspects of data engineering and analytics.</p>
<p>Key Responsibilities:</p>
<ul>
<li>Design, implement and maintain efficient, scalable and robust data pipelines using cloud-native and open-source technologies</li>
<li>Develop and optimize ETL/ELT processes to ingest, transform, and deliver data from diverse sources</li>
<li>Automate deployment and monitoring of data workflows using CI/CD best practices</li>
<li>Guide communications between our users and studio engineers to provide scalable end-to-end solutions</li>
<li>Promote strategies to improve our data modelling, quality and architecture</li>
<li>Participate in code reviews, mentor junior engineers, and contribute to team knowledge sharing</li>
</ul>
<p>Required Qualifications:</p>
<ul>
<li>4+ years of relevant industry experience in a data engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field</li>
<li>Proficiency in writing SQL queries and knowledge of cloud-based databases like Snowflake, Redshift, BigQuery or other big data solutions</li>
<li>Experience in data modelling and tools such as dbt, ETL processes, and data warehousing</li>
<li>Experience with at least one programming language such as Python or Java</li>
<li>Experience with version control and code review tools such as Git</li>
<li>Knowledge of latest data pipeline orchestration tools such as Airflow</li>
<li>Experience with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code tools (e.g., Docker, Terraform, CloudFormation)</li>
</ul>
<p>Nice to Have:</p>
<ul>
<li>Experience in gaming and working with its telemetry data or data from similar sources</li>
<li>Experience with big data platforms and technologies such as EMR, Databricks, Kafka, Spark, Iceberg</li>
<li>Experience developing engineering solutions based on near real-time/streaming datasets</li>
<li>Exposure to AI/ML, MLOps concepts and collaboration with data science or AI teams.</li>
</ul>
<p>Pay Transparency - North America</p>
<p>The ranges listed below are what EA in good faith expects to pay applicants for this role in these locations at the time of this posting. If you reside in a different location, a recruiter will advise on the applicable range and benefits. Pay offered will be determined based on a number of relevant business and candidate factors (e.g. education, qualifications, certifications, experience, skills, geographic location, or business needs).</p>
<p>Pay Ranges: $100,000 - $139,500 CAD</p>
]]></Description>
      <Jobtype>temporary</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$100,000 - $139,500 CAD</Salaryrange>
      <Skills>SQL, cloud-based databases, data modelling, ETL processes, data warehousing, Python, Java, Git, Airflow, cloud platforms, infrastructure-as-code tools, gaming telemetry data, big data platforms, EMR, Databricks, Kafka, Spark, Iceberg, AI/ML, MLOps</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts is a leading video game developer and publisher with a portfolio of popular games and experiences.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-12-month-TFT/212451</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-03-09</Postedate>
    </job>
    <job>
      <externalid>3e1dc0a8-943</externalid>
      <Title>Regional BOM Analyst</Title>
      <Description><![CDATA[<p><strong>Job Purpose</strong></p>
<p>As a Regional BOM Analyst at Honda, you will apply broad theoretical knowledge in Regional Spec Control Operations. Your primary responsibility will be to manage and administer NA Regional and Global engineering drawings and manufacturing design revision issuance to all North American and Global plants as needed.</p>
<p><strong>Key Accountabilities</strong></p>
<p><strong>Design Change Delivery - BEAM Bill of Material System Setting</strong></p>
<ul>
<li>Handle engineering technical records and project information for individual design changes or full BOM design changes</li>
<li>Review design drawings, confirm part hierarchy and structure changes, and understand inter/intra company part supply relationships</li>
<li>Interpret regional and global parts supply/install agreements to ensure data is sent to the correct plants</li>
<li>Understand each model&#39;s feature and application list change points</li>
<li>Configure frame/engine/transmission/differential combination setups and changepoint reconversions</li>
<li>Apply reason codes by change point to support supplier/factory instruction sheet issuance</li>
</ul>
<p><strong>Manufacturing Instruction Delivery - BEAM Bill of Material setting</strong></p>
<ul>
<li>Handle engineering technical data by configuring part drawing manufacturing change points, confirming part hierarchy, quantity, and application accuracy</li>
<li>Understand inter/intra company part supply relationships, in-house delivery setups, and regional and global parts supply/install agreements</li>
<li>Interpret feature and application list change points, configure frame/engine/transmission/differential combination setups, and changepoint reconversions</li>
<li>Apply reason codes by change point to support instruction sheet issuance and VIN capture</li>
<li>Determine the need to request supplier or plant supply settings, quantities confirmation, and splitting</li>
<li>Confirm application at multiple plants and verify originating department content/objective</li>
</ul>
<p><strong>Export Bill of Material – Mgmt.</strong></p>
<ul>
<li>Manage parts supplied from North America to the world</li>
<li>Communicate with multiple regions for application timing, part color setting, model build process kick-off, and execution</li>
<li>Address customer inquiries/concerns promptly and professionally to ensure customer satisfaction</li>
<li>Build customer relationships and teamwork</li>
<li>Attend and support BOM and New Model meetings with North America International Operations Office (NAIOO) as needed</li>
</ul>
<p><strong>Communication &amp; Coordination</strong></p>
<ul>
<li>Facilitate or support all North America plants/departments with design and engineering Bill of Material clarification and configuration information per Operational Rules</li>
<li>Support New Model meetings as needed</li>
</ul>
<p><strong>Business Plan Themes</strong></p>
<ul>
<li>Lead or participate in a team that will execute strategic business initiatives</li>
<li>Theme work may include process maps, calculations of benefits/efficiency, time studies, or multi-department collaboration</li>
<li>Teams report status monthly/quarterly to management to communicate/share progress on theme</li>
</ul>
<p><strong>Qualifications, Experience, and Skills</strong></p>
<ul>
<li>Bachelor&#39;s degree or equivalent relevant experience</li>
<li>0-4 years of experience with Part Drawing Control or Engineering Change Management; supplemental experience in Supply Chain, Production Control, or Manufacturing Engineering is a plus</li>
<li>Recognize and demonstrate knowledge of BOM/Parts List Check procedure</li>
<li>Recognize and demonstrate knowledge of Specification Notice Procedures issuance/management (D/C and MI)</li>
<li>Recognize and demonstrate Honda Engineering Standards Knowledge</li>
<li>Recognize and demonstrate CATIA Knowledge</li>
<li>Recognize and demonstrate new model development flow knowledge</li>
<li>Recognize and demonstrate data modelling knowledge</li>
<li>Recognize and demonstrate product maker layout flow knowledge</li>
<li>Understand importance of technical data quality accuracy and integration</li>
<li>Proficiency in Excel (macro knowledge a plus) and PowerPoint</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>entry</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>$55,700.00 - $83,600.00</Salaryrange>
      <Skills>BOM/Parts List Check procedure, Specification Notice Procedures issuance/management, Honda Engineering Standards Knowledge, CATIA Knowledge, new model development flow knowledge, data modelling knowledge, product maker layout flow knowledge, Excel (macro knowledge a +), PowerPoint</Skills>
      <Category>Engineering</Category>
      <Industry>Automotive</Industry>
      <Employername>Honda</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.honda.com.png</Employerlogo>
      <Employerdescription>Honda is a multinational corporation that designs, manufactures, and markets automobiles, motorcycles, and power equipment. It is one of the world&apos;s largest automobile manufacturers.</Employerdescription>
      <Employerwebsite>https://careers.honda.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.honda.com/us/en/job/10204/Regional-BOM-Analyst</Applyto>
      <Location>Raymond, Ohio</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>9475bb73-df7</externalid>
      <Title>Product Owner, Enrichment</Title>
      <Description><![CDATA[<p><strong>About Anthropic</strong></p>
<p>Anthropic&#39;s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole.</p>
<p><strong>About the role</strong></p>
<p>We are looking for a Product Owner, Enrichment to own and drive the strategy, architecture, and execution of our data enrichment ecosystem. This role sits at the intersection of Revenue Operations, Data Engineering, and Go-to-Market strategy, and is responsible for building and maintaining a best-in-class enrichment infrastructure that delivers a reliable, comprehensive source of truth for company and contact data across global markets.</p>
<p>You will be the subject matter expert and product owner for all enrichment tools, data sources, and processes—including platforms like Clay, Dun &amp; Bradstreet, ZoomInfo, and other third-party providers. You will design and operate the systems that power account hierarchies, firmographic enrichment, contact discovery, and signal detection, ensuring our GTM teams have the accurate, complete data they need to identify, prioritise, and close business.</p>
<p>This is a hands-on, technically-oriented role that requires deep experience working with large datasets, complex system integrations, and Salesforce data modelling. You will collaborate closely with Sales, Marketing, Data Science, Data Engineering, and Revenue Operations to ensure our enrichment strategy supports both near-term GTM execution and long-term data infrastructure goals.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Own the end-to-end enrichment strategy and roadmap, serving as the product owner for all enrichment tools, vendors, and data sources including Clay, Dun &amp; Bradstreet, ZoomInfo, and emerging providers</li>
<li>Build and maintain a unified enrichment master—a reliable source of truth for company and person data including parent-child account hierarchies, firmographics, technographics, and contact intelligence across domestic and international markets</li>
<li>Design and implement waterfall enrichment workflows that orchestrate multiple data providers to maximise coverage, accuracy, and cost efficiency while minimising redundancy</li>
<li>Architect enrichment data models within Salesforce, making strategic decisions about how enrichment data is stored, related, and surfaced (e.g., custom objects vs. direct field integration, parent account structures, enrichment audit trails)</li>
<li>Hands-on data manipulation and transformation—write queries, build data pipelines, and work directly with data warehouses (e.g., Snowflake, BigQuery) to clean, transform, match, and deduplicate enrichment data at scale</li>
<li>Lead international enrichment strategy, addressing the unique challenges of enriching company and contact data across global markets with varying data availability, provider coverage, and regulatory requirements</li>
<li>Partner with Data Science and Data Engineering to define enrichment schemas, resolve entity matching challenges, and build scalable infrastructure that supports both real-time and batch enrichment processes</li>
<li>Collaborate with Sales, Marketing, and Revenue Operations to understand GTM data needs, translate business requirements into enrichment solutions, and ensure enrichment outputs directly support pipeline generation, territory planning, lead routing, and account scoring</li>
<li>Define and track enrichment KPIs including match rates, data completeness, freshness, accuracy, and downstream GTM impact—using metrics to continuously improve the enrichment ecosystem</li>
<li>Evaluate and onboard new enrichment vendors and data sources, conducting proof-of-concept testing and negotiating contracts in partnership with procurement</li>
<li>Explore and implement AI-powered enrichment capabilities, including prompt-based enrichment using LLMs to supplement traditional data providers for emerging companies, startups, and hard-to-enrich segments</li>
</ul>
<p><strong>You may be a good fit if you have:</strong></p>
<ul>
<li>10+ years of experience in data enrichment, data operations, or revenue/marketing operations with hands-on ownership of enrichment tools and strategy in a B2B SaaS or enterprise technology environment</li>
<li>Deep expertise with enrichment platforms such as Clay, Dun &amp; Bradstreet (D-U-N-S, Data Blocks, hierarchies), ZoomInfo, Clearbit, People Data Labs, or comparable providers, including experience building waterfall enrichment workflows and enrichment masters</li>
<li>Strong Salesforce experience (required)—including data modelling for enrichment (custom objects, account hierarchies, parent-child relationships), integration architecture, and understanding of how enrichment data flows through the CRM to support GTM processes</li>
<li>Hands-on technical skills for data manipulation including SQL proficiency, experience with data warehouses (Snowflake, BigQuery, or similar), and comfort working with ETL/reverse ETL pipelines, APIs, and data transformation tools</li>
<li>Strong product ownership mindset with experience managing roadmaps, backlogs, and stakeholder priorities—able to translate business needs into technical requirements and drive execution across cross-functional teams</li>
<li>Dual data + RevOps mindset—equally comfortable working with Data Science and Data Engineering on infrastructure and schema design as you are partnering with Sales and GTM teams on pipeline and territory optimisation</li>
<li>Excellent communication skills to bridge technical and business audiences, lead stakeholder discovery sessions, and present enrichment strategy and impact to leadership</li>
</ul>
<p><strong>Strong candidates may have:</strong></p>
<ul>
<li>Experience building or leveraging AI-powered enrichment prompts (e.g., using LLMs to research and enrich company data, identify signals, or fill gaps where traditional providers lack coverage)</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>data enrichment, data operations, revenue/marketing operations, Clay, Dun &amp; Bradstreet, ZoomInfo, Clearbit, Salesforce, data modelling, integration architecture, SQL, Snowflake, BigQuery, ETL/reverse ETL pipelines, APIs, data transformation, product ownership, roadmap management, stakeholder management, LLM prompt-based enrichment, communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a quickly growing organisation that aims to create reliable, interpretable, and steerable AI systems. It has a team of researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5127289008</Applyto>
      <Location>San Francisco, CA | New York City, NY</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>8516ca2f-5df</externalid>
      <Title>Data Science Engineer, Capacity &amp; Efficiency</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>As a member of the Compute team, you will play a critical role in Anthropic&#39;s mission of building safe and beneficial AI by ensuring we understand, optimize, and strategically manage our cloud infrastructure spend. Your work will directly impact how efficiently we operate our multi-cloud and datacenter footprint, from forecasting infrastructure needs and planning capacity, to driving utilization improvements and reducing unit costs across our compute, storage, and networking resources.</p>
<p>You will work closely with Compute Finance, Infrastructure Engineers, and Product to translate raw cloud billing data into actionable efficiency insights and influence capacity planning &amp; allocation. You will help build deep visibility into our infrastructure spend, forecast capacity needs, attribute costs accurately across teams and workloads, model resource demand curves, and help identify efficiency opportunities across our fleet.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Build and maintain cloud cost attribution models that accurately allocate infrastructure spend (compute, accelerators, storage, networking, data transfer) across teams, products, and workloads, providing clear visibility into who is spending what and why.</li>
<li>Build and maintain cost of revenue pipelines and models.</li>
<li>Partner with infrastructure, finance, and procurement stakeholders to analyse utilization patterns, identify inefficiencies, and drive optimization initiatives that improve the cost-effectiveness of our non-accelerator cloud resources.</li>
<li>Develop forecasting models for non-accelerator infrastructure demand, incorporating business growth projections, product roadmaps, and historical spend trends to enable proactive capacity planning and budget accuracy.</li>
<li>Define and track unit cost metrics (e.g., cost per request, cost per GB stored, cost per pipeline run) and identify opportunities to reduce them, influencing infrastructure and engineering roadmaps with data-driven recommendations.</li>
<li>Develop unit cost economics for various workloads and applications, and use these metrics to drive efficiency efforts across product and infrastructure teams.</li>
<li>Build a cost-aware culture across the organisation by creating self-serve dashboards, automated reporting, and accessible datasets that give engineering and finance teams clear visibility into cloud spend and efficiency metrics.</li>
</ul>
<p><strong>You might be a good fit if you have:</strong></p>
<ul>
<li>6+ years of experience in data science, analytics, or FinOps roles, with a focus on cloud infrastructure cost analysis, capacity planning, or efficiency optimisation.</li>
<li>Experience building spend forecasting models and large-scale cost attribution systems.</li>
<li>Deep knowledge of cloud billing systems, cost allocation methodologies, and spend optimisation levers (e.g., reserved instances, committed use discounts, rightsizing, spot/preemptible usage).</li>
<li>A passion for the company&#39;s mission of building helpful, honest, and harmless AI.</li>
<li>Expertise in Python, SQL, forecasting, data modelling and data visualisation tools.</li>
<li>A bias for action and urgency, not letting perfect be the enemy of the effective.</li>
<li>A strong disposition to thrive in ambiguity, taking initiative to create clarity and forward progress.</li>
<li>A deep curiosity and energy for pulling the thread on hard questions.</li>
<li>Experience in turning open questions and data into concise and insightful analysis.</li>
<li>Highly effective written communication and presentation skills.</li>
</ul>
<p><strong>Logistics</strong></p>
<p><strong>Education requirements:</strong> We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</p>
<p><strong>Location-based hybrid policy:</strong> Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</p>
<p><strong>Visa sponsorship:</strong> We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</p>
<p><strong>We encourage you to apply even if you do not believe you meet every single qualification.</strong> Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</p>
<p><strong>Your safety matters to us.</strong> To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$275,000 - $370,000 USD</Salaryrange>
      <Skills>cloud infrastructure cost analysis, capacity planning, efficiency optimisation, Python, SQL, forecasting, data modelling, data visualisation, reserved instances, committed use discounts, rightsizing, spot/preemptible usage</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a company that aims to create reliable, interpretable, and steerable AI systems. Its mission is to build safe and beneficial AI systems for users and society.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/5125881008</Applyto>
      <Location>New York City, NY; San Francisco, CA; Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>8ae6102f-700</externalid>
      <Title>GRC Automation Engineering Lead</Title>
      <Description><![CDATA[<p><strong>About the Role</strong></p>
<p>We are seeking a GRC Automation Lead to join our GRC organisation and build the technical foundation for how we scale our risk and compliance programs. In this role, you will lead the team that designs and implements automated workflows, data pipelines, and integrations that transform manual compliance processes into scalable engineering systems.</p>
<p>This is a greenfield opportunity to establish the team, architecture, and integrations that will define how we approach governance, risk, and compliance at Anthropic. The core challenge is a data problem: compliance information lives across dozens of systems—cloud infrastructure, identity providers, HR platforms, ticketing tools, code repositories—and your job is to design systems that bring it together, normalise it, and make it actionable.</p>
<p>At Anthropic, you&#39;ll also have a unique advantage: the ability to design AI-powered workflows where Claude acts as an extension of your team, handling tasks that would traditionally require additional headcount or manual effort. You&#39;ll need ingenuity to identify where agentic AI can accelerate evidence collection, interpret unstructured data, triage compliance gaps, and augment human judgment in risk assessments.</p>
<p>Working closely with Security, IT, and Engineering teams, you&#39;ll translate compliance and regulatory requirements into solutions that support audit programs including SOC 2, ISO, HIPAA, and FedRAMP, building systems that combine traditional automation with AI capabilities to achieve scale that wouldn&#39;t otherwise be possible.</p>
<p><strong>Responsibilities:</strong></p>
<ul>
<li>Lead the team that establishes foundational GRC processes and architecture. Design and build automated workflows for risk management and compliance, creating scalable systems that enable continuous monitoring as Anthropic grows.</li>
<li>Build data pipelines that aggregate risk, control, and asset information from across our technology stack. This means solving hard data integration problems: mapping disparate schemas, handling inconsistent data quality, and creating unified views of compliance posture through dashboards and reporting tools.</li>
<li>Inform GRC platform strategy and implementation: in partnership with other programs, evaluate, select, and deploy tooling that meets our compliance requirements.</li>
<li>Translate written policies and compliance requirements into policy-as-code—working with Engineering and Security teams to express requirements as enforceable rules, automated checks, and continuous validation rather than static documents.</li>
<li>Establish feedback loops between policy and implementation: surface where technical controls diverge from written requirements, identify where policies need to evolve based on infrastructure realities, and ensure that compliance requirements are expressed in terms engineers can act on.</li>
<li>Design and deploy agentic AI workflows that extend team capacity, using Claude to automate evidence analysis, monitor control effectiveness, draft audit responses, interpret policy documents, and handle other tasks that require reasoning over unstructured information.</li>
<li>Design and maintain integrations connecting GRC tooling with cloud infrastructure, identity management systems, HRIS platforms, ticketing systems, version control, and CI/CD pipelines—working with engineers to implement integrations that enable automated evidence collection and continuous compliance validation.</li>
<li>Build and lead the GRC Automation function as we scale: hiring team members, establishing practices, and defining the technical roadmap for governance and compliance automation at Anthropic.</li>
</ul>
<p><strong>You may be a good fit if you:</strong></p>
<ul>
<li>Have 3-4+ years of experience managing technical individual contributors or systems-focused teams, with a proven track record of building or scaling small teams (2-5 people) in security, compliance, automation, or operations functions.</li>
<li>Are a systems thinker first. You understand how complex environments work: how data flows between systems, where integration points exist, what breaks when systems don&#39;t talk to each other. Your strength is designing the right architecture and environment for security monitoring, not necessarily implementing it yourself.</li>
<li>Have 5+ years of experience designing automated workflows, data pipelines, or system integrations, whether through traditional development, low-code platforms, GRC tools, or process automation. We care about your ability to solve integration problems, not your programming language proficiency.</li>
<li>Can write production-level code in at least one programming language (e.g., Python, Rust, Go).</li>
<li>Have a relentless focus on data integration: you understand how to pull data from multiple sources, normalise it, join it meaningfully, and surface insights. You&#39;re comfortable reasoning about messy, inconsistent data and designing systems that handle edge cases gracefully.</li>
<li>Understand APIs and integration patterns conceptually: REST APIs, webhooks, authentication flows, polling vs. push architectures, and can evaluate systems based on how well they expose data and support automation, even if you&#39;re not writing the integration code yourself.</li>
<li>Can work independently with minimal guidance, taking ownership of complex problems from design through implementation while managing ambiguity inherent in early-stage programs.</li>
<li>Have strong analytical and problem-solving skills, with the ability to break down complex problems into manageable parts and develop creative solutions.</li>
<li>Are able to communicate complex technical ideas to both technical and non-technical stakeholders, with a strong focus on collaboration and teamwork.</li>
<li>Are passionate about staying up-to-date with industry trends and emerging technologies, with a willingness to learn and adapt to new tools and techniques.</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange></Salaryrange>
      <Skills>GRC, compliance automation, risk management, audit programs (SOC 2, ISO, HIPAA, FedRAMP), data pipelines, system integrations, REST APIs, webhooks, authentication flows, data integration and normalisation, policy-as-code, continuous compliance validation, GRC platforms, cloud infrastructure, identity providers, HRIS platforms, ticketing tools, version control, CI/CD pipelines, Python, Rust, Go, team leadership, stakeholder communication</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Anthropic</Employername>
      <Employerlogo>https://logos.yubhub.co/anthropic.com.png</Employerlogo>
      <Employerdescription>Anthropic is a technology company that aims to create reliable, interpretable, and steerable AI systems. It has a quickly growing team of researchers, engineers, policy experts, and business leaders.</Employerdescription>
      <Employerwebsite>https://www.anthropic.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://job-boards.greenhouse.io/anthropic/jobs/4980335008</Applyto>
      <Location>San Francisco, CA | New York City, NY | Seattle, WA</Location>
      <Country></Country>
      <Postedate>2026-03-08</Postedate>
    </job>
    <job>
      <externalid>255c8146-d03</externalid>
      <Title>Data Scientist</Title>
      <Description><![CDATA[<p>We are seeking a highly skilled Data Scientist to join our team. As a Data Scientist, you will play a key role in analysing and interpreting complex data sets to inform our racing strategy and improve our performance on the track.</p>
<p><strong>Responsibilities</strong></p>
<ul>
<li>Work closely with our racing team to understand their needs and develop data-driven solutions to improve their performance</li>
<li>Develop and maintain complex data models and algorithms to analyse and interpret large data sets</li>
<li>Collaborate with our data engineering team to design and implement data pipelines and architectures</li>
<li>Communicate complex technical information to non-technical stakeholders, including our racing team and senior management</li>
</ul>
<p><strong>Requirements</strong></p>
<ul>
<li>A strong background in data science, including a degree in a relevant field such as mathematics, statistics, or computer science</li>
<li>Proficiency in programming languages such as Python, R, or SQL</li>
<li>Experience with data visualisation tools such as Tableau or Power BI</li>
<li>Strong analytical and problem-solving skills, with the ability to work independently and as part of a team</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary and benefits package</li>
<li>Opportunity to work with a world-class racing team and contribute to their success</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and tools</li>
<li>Professional development opportunities</li>
</ul>
<p>Note: The salary range for this role is competitive and will be discussed in more detail during the interview process.</p>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary and benefits package</Salaryrange>
      <Skills>Python, R, SQL, Tableau, Power BI, Data visualisation, Data analysis, Data modelling, Algorithms, Machine learning, Deep learning, Data engineering, Cloud computing</Skills>
      <Category>Engineering</Category>
      <Industry>Motorsport</Industry>
      <Employername>Williams Racing</Employername>
      <Employerlogo>https://logos.yubhub.co/careers.williamsf1.com.png</Employerlogo>
      <Employerdescription>Williams Racing is a British Formula One racing team that has been competing in the sport since 1977. The team is based in Grove, Oxfordshire, and has a strong reputation for innovation and technical excellence.</Employerdescription>
      <Employerwebsite>https://careers.williamsf1.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://careers.williamsf1.com/job/indirect-procurement-business-partner-ftc-in-grove-wantage-jid-495</Applyto>
      <Location>Grove, Oxfordshire</Location>
      <Country></Country>
      <Postedate>2026-03-07</Postedate>
    </job>
    <job>
      <externalid>24e8b22f-86b</externalid>
      <Title>Member of Technical Staff, AI Data - MAI Superintelligence Team</Title>
      <Description><![CDATA[<p><strong>Summary</strong></p>
<p>Microsoft AI are looking for a talented Member of Technical Staff, AI Data to join their MAI Superintelligence Team in London. This role sits at the heart of strategic decision-making, turning market data into actionable insights for a company that&#39;s revolutionising AI technology. You&#39;ll work directly with leadership to shape the company&#39;s direction in the AI market.</p>
<p><strong>About the Role</strong></p>
<p>As a Member of Technical Staff, AI Data, you will be responsible for designing and developing data pipelines that ingest enormous amounts of multi-modal training data (text, audio, images, video). You will also build and maintain cutting-edge infrastructure that can store and process the petabytes of data needed to power models. You will partner with the pretraining and post-training teams to improve our data recipe through rigorous, careful experimentation, and collaborate with the product team and other engineers and researchers across Microsoft AI to identify gaps in the current generation of models.</p>
<p><strong>Accountabilities</strong></p>
<ul>
<li>Design and develop data pipelines that ingest enormous amounts of multi-modal training data (text, audio, images, video).</li>
<li>Build and maintain cutting-edge infrastructure that can store and process the petabytes of data needed to power models.</li>
</ul>
<p><strong>The Candidate we&#39;re looking for</strong></p>
<p><strong>Experience:</strong></p>
<ul>
<li>Bachelor&#39;s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND experience in business analytics, data science, software development, data modelling or data engineering work OR equivalent experience.</li>
</ul>
<p><strong>Technical skills:</strong></p>
<ul>
<li>Expertise in large-scale data engineering, ideally applied to AI</li>
<li>Expertise in Spark, Kubernetes, or similar</li>
</ul>
<p><strong>Personal attributes:</strong></p>
<ul>
<li>Passionate about the role of data in large-scale AI model training</li>
<li>Will thrive in a highly collaborative, fast-paced environment</li>
<li>A high degree of craftsmanship and close attention to detail</li>
</ul>
<p><strong>Benefits</strong></p>
<ul>
<li>Competitive salary</li>
<li>Comprehensive benefits package</li>
<li>Opportunities for professional growth and development</li>
<li>Collaborative and dynamic work environment</li>
<li>Access to cutting-edge technology and tools</li>
<li>Flexible work arrangements</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>staff</Experiencelevel>
      <Workarrangement>onsite</Workarrangement>
      <Salaryrange>Competitive salary</Salaryrange>
      <Skills>large scale data engineering, AI, Spark, Kubernetes, data science, software development, data modelling</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Microsoft AI</Employername>
      <Employerlogo>https://logos.yubhub.co/microsoft.ai.png</Employerlogo>
      <Employerdescription>Microsoft AI is a leading technology company that is on a mission to create the largest and most advanced multimodal dataset in the world. This dataset will power the training of the world&apos;s most capable AI frontier models, pushing the boundaries of scale, performance, and product deployment.</Employerdescription>
      <Employerwebsite>https://microsoft.ai</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://microsoft.ai/job/member-of-technical-staff-ai-data-mai-superintelligence-team/</Applyto>
      <Location>London</Location>
      <Country></Country>
      <Postedate>2026-03-06</Postedate>
    </job>
    <job>
      <externalid>940d3035-4e7</externalid>
      <Title>Senior Manager, Fintech and Global Planning Systems</Title>
      <Description><![CDATA[<p>You will lead EA&#39;s Connected Planning Center of Excellence (CoE). You will oversee the business-side strategy, operating model, and delivery for Anaplan-enabled planning and forecasting solutions across Finance and partner functions.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>Lead the global Connected Planning core CoE team, defining standards, design principles, and best practices for Anaplan-enabled planning and forecasting</li>
<li>Be the primary business owner for Anaplan within Finance</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>Experience with connected planning and finance processes</li>
<li>Experience with Anaplan model design and solutions</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>senior</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$130,800 - $183,000 CAD</Salaryrange>
      <Skills>connected planning, finance processes, Anaplan model design, Anaplan solutions, data modelling, analytical skills, stakeholder management</Skills>
      <Category>Finance</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Senior-Manager-Fintech-and-Global-Planning-Systems/212795</Applyto>
      <Location>Guildford</Location>
      <Country></Country>
      <Postedate>2026-02-21</Postedate>
    </job>
    <job>
      <externalid>03b5d4bd-eb5</externalid>
      <Title>Data Engineer II - Mobile Growth</Title>
      <Description><![CDATA[<p>As a Data Engineer II - Mobile Growth, you will support some of our largest game titles by helping us understand player engagement and measure the effectiveness of our marketing efforts. We are looking for an experienced Data Engineer with broad technical skills and the ability to work with large amounts of data.</p>
<p><strong>What you&#39;ll do</strong></p>
<ul>
<li>You will work with analysts, understand requirements, and develop technical specifications for ETLs.</li>
<li>You will implement efficient, scalable and reliable data pipelines to move and transform data.</li>
<li>You will promote strategies to improve our data modelling, quality and architecture</li>
<li>You will work with big data solutions, ETL pipelines and dashboard tools.</li>
</ul>
<p><strong>What you need</strong></p>
<ul>
<li>5+ years of relevant industry experience in a data engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field</li>
<li>Proficiency in writing SQL queries and knowledge of cloud-based databases like Snowflake, Redshift, BigQuery or other big data solutions</li>
</ul>
]]></Description>
      <Jobtype>full-time</Jobtype>
      <Experiencelevel>mid</Experiencelevel>
      <Workarrangement>hybrid</Workarrangement>
      <Salaryrange>$119,600 - $167,300 CAD</Salaryrange>
      <Skills>SQL, cloud-based databases, data modelling, ETL processes, data warehousing, Python, data pipeline tools, version control systems, containerization, orchestration technologies</Skills>
      <Category>Engineering</Category>
      <Industry>Technology</Industry>
      <Employername>Electronic Arts</Employername>
      <Employerlogo>https://logos.yubhub.co/jobs.ea.com.png</Employerlogo>
      <Employerdescription>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter.</Employerdescription>
      <Employerwebsite>https://jobs.ea.com</Employerwebsite>
      <Compensationcurrency></Compensationcurrency>
      <Compensationmin></Compensationmin>
      <Compensationmax></Compensationmax>
      <Applyto>https://jobs.ea.com/en_US/careers/JobDetail/Data-Engineer-II-Mobile-Growth/211355</Applyto>
      <Location>Vancouver</Location>
      <Country></Country>
      <Postedate>2026-01-05</Postedate>
    </job>
  </jobs>
</source>