{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/optimisation"},"x-facet":{"type":"skill","slug":"optimisation","display":"Optimisation","count":11},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3bea703f-195"},"title":"Senior Consultant (all genders)","description":"<p>Join our team and develop with us powerful, consistent planning processes – from strategic planning to operational fine-tuning.</p>\n<p>As a Senior Consultant (all genders), you will be responsible for:</p>\n<ul>\n<li>Analysing and mapping complex planning processes in SAP ERP and SAP S/4HANA (including ePP/DS, PP/DS, and PP)</li>\n<li>Transforming and further developing existing planning processes towards SAP S/4HANA with a focus on modern, future-oriented planning solutions</li>\n<li>Conceptualising and implementing innovative optimisation procedures (such as heuristics, optimiser, solver) for supply chain planning processes in various industries (including OEM, automotive suppliers, consumer goods, medical devices &amp; body care)</li>\n<li>Taking over (partial) project responsibility in the field of supply chain planning, including steering work packages and stakeholders</li>\n<li>Actively shaping and further developing internal advisory approaches and innovative solutions in the field of supply chain planning</li>\n</ul>\n<p>To be successful in this role, you will need:</p>\n<ul>\n<li>A successful 
academic background and at least one year of professional experience in consulting, in-house consulting, or industry</li>\n<li>Passion for at least one of the following SAP solutions: SAP PP/DS, SAP SCM/APO, or SAP S/4HANA</li>\n<li>Expertise in designing and implementing planning processes, such as production and fine planning, inventory planning and optimisation, and network or material planning</li>\n<li>Your work style is characterised by a confident presence as a trusted advisor at all levels (from top management to operational implementation), combined with well-developed analytical, conceptual, and problem-solving skills</li>\n</ul>\n<p>We offer:</p>\n<ul>\n<li>A dynamic and supportive environment where you can grow continuously in your tasks, knowledge, and responsibility</li>\n<li>Flexible working hours and locations</li>\n<li>Opportunities for professional development and networking</li>\n<li>A comprehensive overview of our benefits can be found here</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3bea703f-195","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"https://mhp.com","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=20499&utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary","x-skills-required":["SAP PP/DS","SAP SCM/APO","SAP S/4HANA","Supply chain planning","Production planning","Fine planning","Inventory planning","Optimisation","Heuristics","Optimiser","Solver"],"x-skills-preferred":[],"datePosted":"2026-04-27T13:08:28.299Z","employmentType":"FULL_TIME","occupationalCategory":"Consulting","industry":"Technology","skills":"SAP PP/DS, SAP SCM/APO, SAP S/4HANA, Supply chain planning, Production 
planning, Fine planning, Inventory planning, Optimisation, Heuristics, Optimiser, Solver"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5a1f5eb4-c83"},"title":"Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company.</p>\n<p><strong>About Role</strong></p>\n<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>\n<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. 
We build and maintain a suite of high-performance, scalable systems that handle more than a billion events per second.</p>\n<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>\n<p>Our Data Org is composed of several key teams, and you could contribute to any of the following areas:</p>\n<ul>\n<li>Data Delivery: You will build and operate our distributed data delivery pipeline, a high-throughput, low-latency system (primarily written in Go) responsible for ingesting, processing, and routing massive volumes of data from across Cloudflare&#39;s global network to multiple destinations.</li>\n</ul>\n<ul>\n<li>Analytical Database Platform: Contribute to our core analytical platform powered by ClickHouse. This team builds and maintains a high-performance, scalable database platform optimised for the immense analytical workloads generated by our products and services.</li>\n</ul>\n<ul>\n<li>Data Retrieval: Be responsible for building the customer-facing products that make data accessible and actionable. 
This includes developing our public GraphQL API, building robust log delivery solutions and integrations with customer destinations, and contributing to our alerting products, which empower users to configure and receive near real-time alerts based on the logs and metrics observed by our data platform.</li>\n</ul>\n<p><strong>Responsibilities</strong></p>\n<p>As a Software Engineer in our Data Organisation, depending on the team you join, you will focus on a subset of the following areas:</p>\n<ul>\n<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>\n</ul>\n<ul>\n<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>\n</ul>\n<ul>\n<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>\n</ul>\n<ul>\n<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>\n</ul>\n<ul>\n<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>\n</ul>\n<ul>\n<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimising query performance.</li>\n</ul>\n<ul>\n<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>\n</ul>\n<ul>\n<li>Collaborate with the ClickHouse open-source community to add new features and contribute to the upstream codebase.</li>\n</ul>\n<ul>\n<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>\n</ul>\n<p><strong>Key Qualifications</strong></p>\n<ul>\n<li>3+ years of experience working in software 
development covering distributed systems and databases.</li>\n</ul>\n<ul>\n<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>\n</ul>\n<ul>\n<li>Hands-on experience with modern observability stacks, including Prometheus, Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>\n</ul>\n<ul>\n<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>\n</ul>\n<ul>\n<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>\n</ul>\n<ul>\n<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>\n</ul>\n<ul>\n<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>\n</ul>\n<ul>\n<li>Experience with ClickHouse is a plus.</li>\n</ul>\n<ul>\n<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>\n</ul>\n<ul>\n<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>\n</ul>\n<ul>\n<li>Experience with Infrastructure as Code tools like SALT or Terraform is a plus.</li>\n</ul>\n<ul>\n<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>\n</ul>\n<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of engineers, then we want to hear from you!</p>\n<p>Join us in our mission to help build a better internet for everyone!</p>\n<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>\n<p><strong>What Makes Cloudflare Special?</strong></p>\n<p>We’re not just a highly ambitious, large-scale technology 
company. We’re a highly ambitious, large-scale technology company with a soul.</p>\n<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>\n<p>Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses - never, ever. 
We will continue to abide by our privacy commitment and ensure that</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5a1f5eb4-c83","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7462801?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Prometheus","Grafana","SQL","database internals","database design","optimisation","performance tuning","algorithms","data structures","distributed systems","concurrency","ClickHouse","Kafka","Flink","GraphQL","Infrastructure as Code","Linux container technologies","Docker","Kubernetes"],"x-skills-preferred":[],"datePosted":"2026-04-26T15:37:43.579Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Prometheus, Grafana, SQL, database internals, database design, optimisation, performance tuning, algorithms, data structures, distributed systems, concurrency, ClickHouse, Kafka, Flink, GraphQL, Infrastructure as Code, Linux container technologies, Docker, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2e9cc602-f4f"},"title":"Distributed Systems Engineer - Data Platform (Delivery, Database, Retrieval)","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. 
Today the company runs one of the world&#39;s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>We protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks.</p>\n<p>Cloudflare was named to Entrepreneur Magazine&#39;s Top Company Cultures list and ranked among the World&#39;s Most Innovative Companies by Fast Company.</p>\n<p>About Role</p>\n<p>We are looking for experienced and highly motivated engineers to join our DATA Org and help build the future of data at Cloudflare. Our organisation is responsible for the entire data lifecycle - from ingestion and processing to storage and retrieval - powering the critical logs and analytics that provide our customers with real-time visibility into the health and performance of their online properties.</p>\n<p>Our mission is to empower customers to leverage their data to drive better outcomes for their business. 
We build and maintain a suite of high-performance, scalable systems that handle more than a billion events per second.</p>\n<p>As an engineer in our organisation, you will have the opportunity to work on complex distributed systems challenges across different parts of our data stack.</p>\n<p><strong>Responsibilities</strong></p>\n<p>As a Software Engineer in our Data Organisation, depending on the team you join, you will focus on a subset of the following areas:</p>\n<ul>\n<li>Design, develop, and maintain scalable and reliable distributed systems across the entire data lifecycle.</li>\n</ul>\n<ul>\n<li>Build and optimise key components of our high-throughput data delivery platform to ensure data integrity and low-latency delivery.</li>\n</ul>\n<ul>\n<li>Develop new and improve existing components for the Cloudflare Analytical Platform to extend functionality and performance.</li>\n</ul>\n<ul>\n<li>Scale, monitor, and maintain the performance of our large-scale database clusters to accommodate the growing volume of data.</li>\n</ul>\n<ul>\n<li>Develop and enhance our customer-facing GraphQL APIs, log delivery, and alerting solutions, focusing on performance, reliability, and user experience.</li>\n</ul>\n<ul>\n<li>Work to identify and remove bottlenecks across our data platforms, from streamlining data ingestion processes to optimising query performance.</li>\n</ul>\n<ul>\n<li>Collaborate with other teams across Cloudflare to understand their data needs and build solutions that empower them to make data-driven decisions.</li>\n</ul>\n<ul>\n<li>Collaborate with the ClickHouse open-source community to add new features and contribute to the upstream codebase.</li>\n</ul>\n<ul>\n<li>Participate in the development of the next generation of our data platforms, including researching and evaluating new technologies and approaches.</li>\n</ul>\n<p><strong>Key Qualifications</strong></p>\n<ul>\n<li>3+ years of experience working in software development covering distributed 
systems and databases.</li>\n</ul>\n<ul>\n<li>Strong programming skills (Golang is preferable), as well as a deep understanding of software development best practices and principles.</li>\n</ul>\n<ul>\n<li>Hands-on experience with modern observability stacks, including Prometheus, Grafana, and a strong understanding of handling high-cardinality metrics at scale.</li>\n</ul>\n<ul>\n<li>Strong knowledge of SQL and database internals, including experience with database design, optimisation, and performance tuning.</li>\n</ul>\n<ul>\n<li>A solid foundation in computer science, including algorithms, data structures, distributed systems, and concurrency.</li>\n</ul>\n<ul>\n<li>Strong analytical and problem-solving skills, with a willingness to debug, troubleshoot, and learn about complex problems at high scale.</li>\n</ul>\n<ul>\n<li>Ability to work collaboratively in a team environment and communicate effectively with other teams across Cloudflare.</li>\n</ul>\n<ul>\n<li>Experience with ClickHouse is a plus.</li>\n</ul>\n<ul>\n<li>Experience with data streaming technologies (e.g., Kafka, Flink) is a plus.</li>\n</ul>\n<ul>\n<li>Experience developing and scaling APIs, particularly GraphQL, is a plus.</li>\n</ul>\n<ul>\n<li>Experience with Infrastructure as Code tools like SALT or Terraform is a plus.</li>\n</ul>\n<ul>\n<li>Experience with Linux container technologies, such as Docker and Kubernetes, is a plus.</li>\n</ul>\n<p>If you&#39;re passionate about building scalable and performant data platforms using cutting-edge technologies and want to work with a world-class team of engineers, then we want to hear from you!</p>\n<p>Join us in our mission to help build a better internet for everyone!</p>\n<p>This role requires flexibility to be on-call outside of standard working hours to address technical issues as needed.</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We&#39;re not just a highly ambitious, large-scale technology company. 
We&#39;re a highly ambitious, large-scale technology company with a soul.</p>\n<p>Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organisations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare&#39;s enterprise customers--at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.</p>\n<p>Since the project began, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver.</p>\n<p>This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal - we don’t store client IP addresses - never, ever.</p>\n<p>We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2e9cc602-f4f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7267602?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Golang","Prometheus","Grafana","SQL","database internals","database design","optimisation","performance tuning","algorithms","data structures","distributed systems","concurrency","ClickHouse","Kafka","Flink","GraphQL","Infrastructure as Code","Linux container technologies","Docker","Kubernetes"],"x-skills-preferred":[],"datePosted":"2026-04-26T15:37:26.844Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Golang, Prometheus, Grafana, SQL, database internals, database design, optimisation, performance tuning, algorithms, data structures, distributed systems, concurrency, ClickHouse, Kafka, Flink, GraphQL, Infrastructure as Code, Linux container technologies, Docker, Kubernetes"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e17a5477-2cf"},"title":"Data Operations Analyst (Castrol)","description":"<p>Our purpose is to deliver energy to the world, today and tomorrow. As a Data Operations Analyst, you will play an increasingly important part within a network of like-minded colleagues partnering on strategic projects that stretch across the globe.</p>\n<p>In this role, you will ensure master data procedures are accurate and in compliance within BP&#39;s systems and in accordance with business Service Level Agreements (SLAs). 
You will perform regular data checks to ensure values are aligned with input sources and across the various fields in SAP ECC.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Being the centre of expertise for FBT Europe in our ERP systems (SAP, Salesforce) and components of various Satellite systems through the process of analysis, investigation, and coaching.</li>\n<li>Creating and maintaining various master data elements across the NIKE SAP ECC</li>\n<li>Performing regular data checks to ensure values are aligned with input sources and across the various fields in ECC</li>\n<li>Maintaining the data portfolio across different systems</li>\n<li>Managing the SAP ECC iDoc handler, using iDoc handler data and the incoming requests</li>\n<li>Collaborating with participants including external and local teams to manage workload</li>\n<li>Following mapping and other input files, as appropriate</li>\n<li>Completing requests in various systems</li>\n<li>Coordinating with various groups within the organization for required and specific information in respective stages of data maintenance</li>\n<li>Generating, reviewing and actioning data validation reports (based upon agreed schedule for the particular data element)</li>\n<li>Troubleshooting material-related failures throughout the planning and execution process</li>\n<li>Validating reports with other groups on request</li>\n<li>Identifying and implementing improvements to data governance processes to reduce the number of errors occurring and minimise rework</li>\n<li>Developing, maintaining, and enhancing VBA-based tools to support efficient request handling, exception management, and operational reporting</li>\n<li>Reducing manual intervention and mitigating data accuracy risks by standardising and automating Excel-based data processes</li>\n<li>Designing, implementing, and maintaining automated workflows to support end-to-end data operations processes, including request intake, approvals, notifications, and data 
handoffs</li>\n<li>Integrating ERP systems, SharePoint, email, and other enterprise platforms to ensure timely, traceable, and compliant execution of data maintenance activities</li>\n<li>Developing, maintaining, and enhancing operational dashboards and reports to monitor data quality, performance KPIs, workload trends, and process effectiveness</li>\n<li>Transforming and modelling data from various source systems to deliver accurate, reliable insights for operational reviews and management decision-making</li>\n<li>Coordinating and providing expertise for change functions such as regression testing, change requests, design forums, and system rollouts</li>\n<li>Analysing the root cause of errors</li>\n<li>Creating, documenting, reviewing and updating procedures where required</li>\n<li>Proactively maximising the benefits delivered by BP&#39;s core systems by optimising system usage and output</li>\n<li>Contributing to the development and maintenance of KPIs and consistently delivering on targets set</li>\n<li>Supporting management decisions that enable strategy delivery</li>\n<li>Applying tools &amp; processes – applying and promoting within the team the appropriate tools and processes for planning, risk management, and scheduling</li>\n<li>Continuously reviewing the reporting process and tools, identifying opportunities for improvement, recommending changes and supporting implementation</li>\n<li>Regularly tracking and resolving outstanding master data management issues. 
Based on agreed trigger points, escalating to higher levels of authority for solutions, direction, and feedback.</li>\n<li>Identifying and contributing to the improvement of defect trends or areas of process performance weakness in the end-to-end process.</li>\n<li>Contributing towards the data enrichment process for the data sub-tower on a continuous improvement cycle.</li>\n<li>Identifying operational gaps, reviewing processes, and creating standard, compliant procedures and protocols where crucial.</li>\n<li>Identifying and carrying out Continuous Improvement initiatives and providing support to Analysts in CI methodology, running projects where required.</li>\n</ul>\n<p>What you will need to be successful:</p>\n<ul>\n<li>Bachelor&#39;s degree in a related field, or equivalent</li>\n<li>Strong PC skills, including Microsoft Office applications with the ability to navigate and use various software applications</li>\n<li>Proficiency in data management</li>\n<li>Excellent relationship building and communication skills</li>\n<li>Passion for data accuracy with a good understanding of end-to-end impacts of data elements</li>\n<li>Proficiency in English (at least B2 - written and spoken)</li>\n<li>Experience and expertise in Master Data activities</li>\n<li>Ability to work under pressure</li>\n<li>Understanding of CI principles and ability to apply and drive solutions</li>\n<li>Self-motivated and able to see activities through to completion</li>\n<li>Excellent organisational and time management skills</li>\n</ul>\n<p>At bp, we provide the following environment &amp; benefits to you:</p>\n<ul>\n<li>Different bonus opportunities based on performance, wide range of cafeteria elements</li>\n<li>Life &amp; health insurance, medical care package</li>\n<li>Hybrid working arrangement aligned with team arrangements and business needs</li>\n<li>Opportunity to build a long-term career path and develop your skills with a wide range of learning 
options</li>\n<li>Celebrate in bp&#39;s success. You may be eligible to join bp&#39;s Global ShareMatch plan. This non-contractual benefit lets employees buy bp shares and receive matching shares, in line with plan rules</li>\n<li>Family friendly workplace e.g.: Extended parental leave, Mother-baby room</li>\n<li>Employees&#39; wellbeing programs e.g.: Employee Assistance Program, Company Recognition Program</li>\n<li>Possibility to join our social communities and networks</li>\n<li>Chill-out and collaboration spaces in our beautiful Budapest Agora and Szeged offices e.g.: Play Zones, Office massage, Sport and music equipment</li>\n</ul>\n<p>bp Hungary won the Most Attractive Employer 2024 Award (SSC / BSC sector) for the fourth time in a row in PwC&#39;s annual employer research. Come and join us!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e17a5477-2cf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Bp","sameAs":"https://careers.bp.com","logo":"https://logos.yubhub.co/careers.bp.com.png"},"x-apply-url":"https://careers.bp.com/job-description/RQ109643?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data management","Microsoft Office","SAP","Salesforce","VBA","Excel","operational reporting","automated workflows","ERP systems","SharePoint","email","enterprise platforms","operational dashboards","performance KPIs","workload trends","process effectiveness","data quality","regression testing","change request","design forums","system rollouts","root cause analysis","procedure creation","optimisation","KPI development","strategy enabling","planning","risk 
management","scheduling"],"x-skills-preferred":[],"datePosted":"2026-04-25T12:10:07.135Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hungary, Szeged"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Energy","skills":"data management, Microsoft Office, SAP, Salesforce, VBA, Excel, operational reporting, automated workflows, ERP systems, SharePoint, email, enterprise platforms, operational dashboards, performance KPIs, workload trends, process effectiveness, data quality, regression testing, change request, design forums, system rollouts, root cause analysis, procedure creation, optimisation, KPI development, strategy enabling, planning, risk management, scheduling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_050996c8-5b0"},"title":"Associate, LDI Technology (Algo Engineer) - Fixed Income Solutions","description":"<p>About this role</p>\n<p>The Fixed Income Solutions (FIS) business at BlackRock sits within the Portfolio Management Group (PMG) and operates globally across London, New York, Atlanta, San Francisco, Gurgaon and Mumbai. FIS partners closely with clients to design, implement, and manage outcome-oriented investment solutions.</p>\n<p>The Liability Driven Investment (LDI) business is a distinct part of FIS which leverages the breadth and depth of the entire platform when delivering investment solutions for our clients. BlackRock has been managing LDI mandates for over 20 years.</p>\n<p>We are seeking an Associate level technologist to join the LDI Technology team. 
This role is focused on contributing to strategic LDI Technology Initiatives whilst also operating and building on BlackRock&#39;s technology platforms with a particular emphasis on CI/CD, pipeline maintenance and production delivery.</p>\n<p>Key responsibilities</p>\n<ul>\n<li>Contribute to the delivery of strategic LDI Technology projects, from requirements gathering through to implementation and release.</li>\n<li>Work closely with investments and analytics stakeholders to ensure solutions align with business needs.</li>\n<li>Design and implement Python-based tools that are robust, maintainable, and suitable for long-term use.</li>\n<li>Contribute to the development and maintenance of a shared library and common codebase, helping improve reuse, consistency, and overall engineering quality.</li>\n<li>Apply strong software engineering discipline, including version control, testing, documentation, and code review.</li>\n<li>Support the deployment and operation of tools in production environments, including contributing to build and release processes.</li>\n<li>Operate our CI/CD pipelines applying our platform standards, contributing to improvements where appropriate.</li>\n<li>Promote a strong tech culture to encourage innovation</li>\n</ul>\n<p>Business partnership</p>\n<ul>\n<li>Develop a strong understanding of LDI investment processes and the data that underpins them.</li>\n<li>Work closely with investors and LDI colleagues to identify high impact technology opportunities.</li>\n<li>Support users in adopting new technology and integrating it into established workflows.</li>\n<li>Guide less experienced colleagues through knowledge sharing, shaping emerging best practices, and building out a common code library</li>\n<li>Partner with firmwide technology platforms, including Aladdin, to shape requirements, support testing, and integrate new capabilities into the LDI business.</li>\n</ul>\n<p>Skills and experience 
required</p>\n<p>Essential</p>\n<ul>\n<li>Quantitative or technical background (BA/BS in Computer Science, Engineering, Mathematics, or a related field).</li>\n<li>Strong Python experience (or similar programming languages), with the ability to build production-grade tools.</li>\n<li>Experience working across the full software development lifecycle.</li>\n<li>Strong problem-solving skills with good attention to detail.</li>\n<li>Excellent written and verbal communication skills.</li>\n<li>Ability to work effectively in a collaborative, business-embedded technology team.</li>\n</ul>\n<p>Beneficial</p>\n<ul>\n<li>Knowledge of Fixed Income markets or Liability Driven Investment strategies</li>\n<li>Familiarity with BlackRock platforms such as Aladdin.</li>\n<li>Experience with CI/CD and agile development practices</li>\n<li>Experience with analytics, optimisation, or portfolio-related tooling.</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock&#39;s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. 
As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p>About BlackRock</p>\n<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children&#39;s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_050996c8-5b0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/4BgdBjZXihahkejCN7cVxV/associate%2C-ldi-technology-(algo-engineer)---fixed-income-solutions-in-london-at-blackrock?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Software engineering","Version control","Testing","Documentation","Code review","CI/CD","Agile development","Fixed Income markets","Liability Driven Investment strategies"],"x-skills-preferred":["Knowledge of Aladdin","Experience with analytics","Optimisation","Portfolio-related tooling"],"datePosted":"2026-04-24T14:19:05.076Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Software engineering, Version control, Testing, Documentation, Code review, CI/CD, Agile development, Fixed Income markets, 
Liability Driven Investment strategies, Knowledge of Aladdin, Experience with analytics, Optimisation, Portfolio-related tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2b4a4f1f-f36"},"title":"Data Scientist - GenAI - Consultant","description":"<p>Do you want to boost your career and collaborate with expert, talented colleagues to solve and deliver against our clients&#39; most important challenges? We are growing and are looking for people to join our team. You&#39;ll be part of an entrepreneurial, high-growth environment of over 320,000 employees. Our dynamic organization allows you to work across functional business pillars, contributing your ideas, experiences, diverse thinking, and a strong mindset. Are you ready?</p>\n<p>The Role</p>\n<p>We are looking for highly skilled Data Scientists to join our team. As a Data Scientist, you’ll design and deliver GenAI solutions (LLM/RAG) and applied ML components, taking prototypes through to production with strong evaluation, observability and governance. 
You will work closely with cross-functional teams, including data engineers, analysts, and business stakeholders, to turn data into actionable strategies that drive business outcomes.</p>\n<p>Key Responsibilities</p>\n<ul>\n<li>Design and deliver GenAI solutions including LLM/RAG (retrieval strategy, embeddings, vector stores, prompt flows, grounding) for enterprise use cases.</li>\n<li>Evaluate and improve solution quality using offline/online metrics (quality, latency, cost) and iterate based on feedback.</li>\n<li>Harden solutions for production with observability/monitoring, tracing, guardrails, safety controls, and reliability practices</li>\n<li>Build and integrate model endpoints into products and workflows (APIs/services), partnering with engineering through to deployment.</li>\n<li>Work across cloud platforms (Azure/AWS/GCP) integrating storage, compute, orchestration, and model/runtime components.</li>\n<li>Assess data readiness for modelling/RAG (fitness, quality, access) and define remediation requirements</li>\n<li>Collaborate in cross-functional squads (DS/DE/engineering/product) and contribute to reusable assets and ways of working.</li>\n<li>Communicate clearly with stakeholders on trade-offs, evaluation results, risks, and adoption actions.</li>\n<li>Own end-to-end workstream delivery, lead stakeholder conversations, mentor others. 
(more senior levels)</li>\n<li>Shape solution direction and quality bar, coach teams, contribute to sales pursuits/bids and accelerators (most senior levels)</li>\n</ul>\n<p>Requirements</p>\n<p><strong>Essential Skills:</strong></p>\n<ul>\n<li>Strong Python/R (pandas/NumPy; ML libs such as scikit-learn; DL frameworks TensorFlow/PyTorch).</li>\n<li>Experience with LLM/RAG toolchains (e.g., LangChain, LlamaIndex, Semantic Kernel) and vector search (e.g., Pinecone, Weaviate, FAISS, Azure AI Search).</li>\n<li>Experience with GenAI platforms (e.g., OpenAI API, Anthropic, Gemini, Llama or equivalents).</li>\n<li>Exposure to big data/distributed computing and pipeline/feature engineering.</li>\n<li>LLM safety &amp; governance (hallucination mitigation, grounded responses, audit trails)</li>\n<li>Degree in a quantitative field</li>\n<li>Right to work in the UK without sponsorship</li>\n</ul>\n<p><strong>Preferred Skills:</strong></p>\n<ul>\n<li>Cloud ML experience (AWS/GCP/Azure).</li>\n<li>Strong SQL; experience with visualisation tools (Tableau/Power BI or Python viz).</li>\n<li>Specialisms: NLP / computer vision / time series.</li>\n<li>NoSQL familiarity.</li>\n<li>Quant / trading analytics engineering practices</li>\n<li>Time-series forecasting (prices, demand, blend outcomes, scheduling effects)</li>\n<li>Optimisation / simulation (planning, blending, logistics constraints)</li>\n<li>Model risk controls (bias/leakage checks, backtesting discipline, monitoring/drift)</li>\n<li>CI/CD, deployment, monitoring; Docker/Kubernetes.</li>\n<li>Experiment design and randomised trials.</li>\n<li>MSc with PhD a plus</li>\n</ul>\n<p>Personal attributes</p>\n<ul>\n<li>Analytical, pragmatic problem-solver; outcome-oriented.</li>\n<li>Self-directed, able to prioritise and juggle multiple workstreams.</li>\n<li>Clear communicator who can simplify complexity.</li>\n<li>Collaborative, curious, continuous learner.</li>\n</ul>","url":"https://yubhub.co/jobs/job_2b4a4f1f-f36","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Infosys Consulting - Europe","sameAs":"https://www.infosys.com/","logo":"https://logos.yubhub.co/infosys.com.png"},"x-apply-url":"https://jobs.workable.com/view/3Q492AhHyLQVx6RQtvfQXV/hybrid-data-scientist---genai---consultant-in-london-at-infosys-consulting---europe?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","R","pandas","NumPy","scikit-learn","TensorFlow","PyTorch","LangChain","LlamaIndex","Semantic Kernel","Pinecone","Weaviate","FAISS","Azure AI Search","OpenAI API","Anthropic","Gemini","Llama","big data","distributed computing","pipeline","feature engineering","LLM safety","governance","hallucination mitigation","grounded responses","audit trails","degree in a quantitative field","right to work in the UK without sponsorship"],"x-skills-preferred":["cloud ML experience","strong SQL","visualisation tools","NLP","computer vision","time series","NoSQL","quant","trading analytics engineering","time-series forecasting","optimisation","simulation","model risk controls","CI/CD","deployment","monitoring","Docker","Kubernetes","experiment design","randomised trials","MSc with PhD"],"datePosted":"2026-04-24T14:13:18.122Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, R, pandas, NumPy, scikit-learn, TensorFlow, PyTorch, LangChain, LlamaIndex, Semantic Kernel, Pinecone, Weaviate, FAISS, Azure AI Search, OpenAI API, Anthropic, Gemini, Llama, big data, distributed computing, pipeline, feature engineering, LLM safety, 
governance, hallucination mitigation, grounded responses, audit trails, degree in a quantitative field, right to work in the UK without sponsorship, cloud ML experience, strong SQL, visualisation tools, NLP, computer vision, time series, NoSQL, quant, trading analytics engineering, time-series forecasting, optimisation, simulation, model risk controls, CI/CD, deployment, monitoring, Docker, Kubernetes, experiment design, randomised trials, MSc with PhD"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_25792d96-823"},"title":"Lead Features Designer","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Playdemic is a part of this community, where creativity thrives, and new perspectives are invited. As a Lead Features Designer, you will be responsible for leading the feature design team and driving the design of new features and systems for Golf Clash, a globally successful game enjoyed by millions.</p>\n<p>Your main responsibilities will include: Collaborating with the product management team to drive ideation of new features and systems. Leading the design and development of features from concept through launch and iteration, ensuring they align with product goals and design direction. Defining clear design objectives, player value, and success criteria for features, using these to guide development and evaluate outcomes post-launch. Creating and maintaining clear, efficient, and legible feature documentation. Guiding feature development through implementation, collaborating closely with cross-disciplinary teams to ensure quality execution. Reviewing and providing feedback on design output within the feature design team, helping to maintain high standards of design quality, communication, and delivery. Supporting and mentoring designers within the team, helping them to develop their skills and improve their work. 
Helping to prioritise feature development in collaboration with product management and production. Using player behaviour, feedback, and data insights to evaluate feature performance and inform iteration, optimisation, and design decisions. Helping to establish and maintain strong design practices across the feature team.</p>\n<p>To succeed in this role, you will need: A minimum of 5 years&#39; experience working in a game development studio. Experience working on live games with a proven track record of owning and delivering multiple features or systems within the constraints of a release schedule. Strong experience designing features and systems for mobile free-to-play live service games. Experience leading, mentoring, or guiding other designers and helping to improve team output. Strong experience designing and balancing game systems and economies, supported by excellent Excel skills. Strong understanding of analytics, KPIs, and success metrics, and how they can be used to support design decisions, evaluate feature performance, and drive optimisation. Technical skills and knowledge of game development tools, including Confluence and JIRA. Strong communication skills, with the ability to clearly articulate design goals, trade-offs, and requirements across disciplines. Ability to review work, provide clear feedback, and maintain a high bar for design quality. 
Experience working collaboratively across product, production, art, and development disciplines to drive feature outcomes.</p>\n<p>In return, Playdemic offers a competitive salary and benefits package, including healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more.</p>","url":"https://yubhub.co/jobs/job_25792d96-823","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Playdemic","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Lead-Features-Designer/213598?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Game development","Feature design","System design","Mobile free-to-play live service games","Analytics","KPIs","Success metrics","Excel","Confluence","JIRA"],"x-skills-preferred":["Sports or competitive multiplayer titles","Experimentation","Player behaviour data","Optimisation"],"datePosted":"2026-04-24T13:16:30.604Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Manchester"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Game development, Feature design, System design, Mobile free-to-play live service games, Analytics, KPIs, Success metrics, Excel, Confluence, JIRA, Sports or competitive multiplayer titles, Experimentation, Player behaviour data, Optimisation"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0987988a-011"},"title":"Feature Framework Engineer","description":"<p>The Systematic Platform Execution &amp; Exchange Data (SPEED) Team is at 
the core of Millennium&#39;s Equities, Quant Strategies, and Shared Services Technology organisation.</p>\n<p>We are looking for a C++ engineer to design and build high-performance, low-latency applications that process large volumes of real-time data.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain high-performance C++ services handling high message rates and low-latency workloads.</li>\n<li>Optimise existing components for latency, throughput, and CPU/memory efficiency.</li>\n<li>Develop and tune networking, messaging, and I/O layers to handle large data volumes reliably.</li>\n<li>Profile and debug performance issues at application, OS, and network levels.</li>\n<li>Collaborate with quantitative, trading, and infrastructure teams to translate requirements into robust technical solutions.</li>\n<li>Write clean, production-quality code with appropriate tests and documentation.</li>\n<li>Participate in code reviews, design discussions, and continuous improvement of engineering practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>Strong proficiency in modern C++ (C++17/20 or later).</li>\n<li>5+ years of experience.</li>\n<li>Analytics focus: KDB / Q experience for large market data; modern data analysis with pytorch, pandas, and modern tooling including Apache arrow.</li>\n<li>Familiar with basic statistics as applied to financial research.</li>\n<li>Proven experience building performance-critical, real-time, or low-latency systems.</li>\n<li>Strong knowledge of computer science fundamentals: data structures, algorithms, memory management, and optimisation.</li>\n<li>Experience using profiling, benchmarking, and performance analysis tools.</li>\n<li>Proficiency with version control (Git) and standard build systems.</li>\n<li>Excellent 
problem-solving skills and attention to detail.</li>\n<li>Strong interpersonal skills with a proven ability to navigate large organisations.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with kernel bypass or user-space networking technologies.</li>\n<li>Familiarity with AI productivity enhancing coding tools.</li>\n<li>Experience in financial markets, market data distribution, order routing, or exchange connectivity.</li>\n<li>Experience with monitoring/telemetry for high-performance systems.</li>\n<li>Familiarity with scripting languages for tooling and automation.</li>\n</ul>\n<p>Personal Attributes:</p>\n<ul>\n<li>Obsessed with performance, measurement, and data-driven optimisation.</li>\n<li>Comfortable owning features end-to-end and operating in a production environment.</li>\n<li>Clear communicator who can work closely with both technical and non-technical stakeholders.</li>\n<li>Proactive, self-directed, and able to thrive in a highly iterative environment.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>","url":"https://yubhub.co/jobs/job_0987988a-011","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955682418?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["modern C++","KDB / 
Q","pytorch","pandas","Apache arrow","data structures","algorithms","memory management","optimisation","profiling","benchmarking","performance analysis tools","version control","standard build systems"],"x-skills-preferred":["kernel bypass","user-space networking technologies","AI productivity enhancing coding tools","financial markets","market data distribution","order routing","exchange connectivity","monitoring/telemetry","scripting languages"],"datePosted":"2026-04-18T22:14:03.382Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"modern C++, KDB / Q, pytorch, pandas, Apache arrow, data structures, algorithms, memory management, optimisation, profiling, benchmarking, performance analysis tools, version control, standard build systems, kernel bypass, user-space networking technologies, AI productivity enhancing coding tools, financial markets, market data distribution, order routing, exchange connectivity, monitoring/telemetry, scripting languages","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_65bcd8f4-7e6"},"title":"Staff Software Engineer - Database Engine Internals","description":"<p>Our mission at Databricks is to radically simplify the whole data lifecycle from ingestion to ETL, BI, and all the way up to ML/AI with a unified platform.</p>\n<p>To achieve this goal, we believe the data warehouse architecture as we know it today will be replaced by a new architectural pattern, Lakehouse (CIDR 2021 paper), open platforms that unify data warehousing and advanced analytics.</p>\n<p>A critical part of realizing this vision is the next generation (decoupled) query 
engine and structured storage system that can outperform specialised data warehouses in relational query performance, yet retain the expressiveness of general-purpose systems such as Apache Spark™ to support diverse workloads ranging from ETL to data science.</p>\n<p>As part of this team, you will be working in one or more of the following areas to design and implement these next gen systems that leapfrog the state of the art:</p>\n<ul>\n<li>Query compilation and optimisation</li>\n<li>Distributed query execution and scheduling</li>\n<li>Vectorised execution engine</li>\n<li>Data security</li>\n<li>Resource management</li>\n<li>Transaction coordination</li>\n<li>Efficient storage structures (encodings, indexes)</li>\n<li>Automatic physical data optimisation</li>\n</ul>\n<p>We look for:</p>\n<ul>\n<li>A passion for database systems, storage systems, distributed systems, language design, or performance optimisation</li>\n<li>Experience working towards a multi-year vision with incremental deliverables</li>\n<li>Motivated by delivering customer value and impact</li>\n<li>8+ years of experience working in a related system (preferred)</li>\n</ul>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilising the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in visit our page here.</p>\n<p>Local Pay Range $192,000-$260,000 USD</p>","url":"https://yubhub.co/jobs/job_65bcd8f4-7e6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/5646866002?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$192,000-$260,000 USD","x-skills-required":["database systems","storage systems","distributed systems","language design","performance optimisation","query compilation","optimisation","distributed query execution","scheduling","vectorised execution engine","data security","resource management","transaction coordination","efficient storage structures","encodings","indexes"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:44:40.153Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, 
California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"database systems, storage systems, distributed systems, language design, performance optimisation, query compilation, optimisation, distributed query execution, scheduling, vectorised execution engine, data security, resource management, transaction coordination, efficient storage structures, encodings, indexes","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192000,"maxValue":260000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_20d39f2a-da8"},"title":"TPU Kernel Engineer","description":"<p><strong>About the Role</strong></p>\n<p>As a TPU Kernel Engineer, you&#39;ll be responsible for identifying and addressing performance issues across many different ML systems, including research, training, and inference. A significant portion of this work will involve designing and optimising kernels for the TPU. You will also provide feedback to researchers about how model changes impact performance.</p>\n<p><strong>You may be a good fit if you:</strong></p>\n<ul>\n<li>Have significant experience optimising ML systems for TPUs, GPUs, or other accelerators</li>\n<li>Are results-oriented, with a bias towards flexibility and impact</li>\n<li>Pick up slack, even if it goes outside your job description</li>\n<li>Enjoy pair programming (we love to pair!)</li>\n<li>Want to learn more about machine learning research</li>\n<li>Care about the societal impacts of your work</li>\n</ul>\n<p><strong>Strong candidates may also have experience with:</strong></p>\n<ul>\n<li>High performance, large-scale ML systems</li>\n<li>Designing and implementing kernels for TPUs or other ML accelerators</li>\n<li>Understanding accelerators at a deep level, e.g. 
a background in computer architecture</li>\n<li>ML framework internals</li>\n<li>Language modeling with transformers</li>\n</ul>\n<p><strong>Representative projects:</strong></p>\n<ul>\n<li>Implement low-latency, high-throughput sampling for large language models</li>\n<li>Adapt existing models for low-precision inference</li>\n<li>Build quantitative models of system performance</li>\n<li>Design and implement custom collective communication algorithms</li>\n<li>Debug kernel performance at the assembly level</li>\n</ul>\n<p><strong>Logistics</strong></p>\n<ul>\n<li>Education requirements: We require at least a Bachelor&#39;s degree in a related field or equivalent experience.</li>\n<li>Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.</li>\n<li>Visa sponsorship: We do sponsor visas! However, we aren&#39;t able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.</li>\n</ul>\n<p><strong>We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you&#39;re interested in this work.</strong></p>\n<p><strong>Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. 
Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you&#39;re ever unsure about a communication, don&#39;t click any links—visit anthropic.com/careers directly for confirmed position openings.</strong></p>\n<p><strong>How we&#39;re different</strong></p>\n<p>We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We&#39;re an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.</p>\n<p>The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI &amp; Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.</p>\n<p><strong>Come work with us!</strong></p>\n<p>Anthropic is a public benefit corporation headquartered in San Francisco. 
We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.</p>\n<p><strong>Guidance on Candidates&#39; AI Usage:</strong></p>\n<p>Learn about our policy for using AI in our application process</p>","url":"https://yubhub.co/jobs/job_20d39f2a-da8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://job-boards.greenhouse.io","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/4720576008?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$280,000 - $850,000 USD","x-skills-required":["TPU","GPU","ML systems","kernel design","optimisation","pair programming","machine learning research","societal impacts"],"x-skills-preferred":["high performance","large-scale ML systems","computer architecture","ML framework internals","language modeling with transformers"],"datePosted":"2026-03-08T13:51:07.394Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA | New York City, NY | Seattle, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"TPU, GPU, ML systems, kernel design, optimisation, pair programming, machine learning research, societal impacts, high performance, large-scale ML systems, computer architecture, ML framework internals, language modeling with 
transformers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":280000,"maxValue":850000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f0a0b3ef-b9f"},"title":"PERFORMANCE ENGINEER (F/M)","description":"<p><strong>The role</strong></p>\n<p>Reporting to the Technical Manager, the Performance Engineer is responsible for performance and data analysis across all championships (WEC, ELMS, ALMS). Their mission is to predict, analyse, and understand the performance of existing cars at every event. They will work on projects involving in-depth simulation studies and help frame the technical regulations, and will be heavily involved in hydrogen-related projects and technical regulations.</p>\n<p><strong>Your responsibilities:</strong></p>\n<ul>\n<li>Model any type of car and powertrain strategy to evaluate avenues for the technical regulations,</li>\n<li>Prepare pre-event reports using lap-time/data analysis and simulation,</li>\n<li>Attend events to carry out real-time lap-time/data analysis and vehicle performance assessments,</li>\n<li>Produce post-event reports using lap-time/data analysis,</li>\n<li>Develop and validate advanced, bespoke analysis tools to improve our understanding of on-track vehicle performance (driver performance, vehicle dynamics, aerodynamics, powertrain, refuelling), as well as of competitors&#39; strategies,</li>\n<li>Improve performance algorithms and statistical analysis tools to produce standardised reports for competitors and manufacturers,</li>\n<li>Collaborate with the Electronics Engineer to establish specifications and liaise with control-system suppliers (for example, torque sensors).</li>\n</ul>\n<p><strong>Your profile</strong></p>\n<ul>\n<li>You hold an engineering degree and have at least 10 years&#39; experience in motorsport electronics,</li>\n<li>You have 5 years&#39; experience with simulation software (covering several powertrain types such as internal combustion engines, hybrids, and electric motors…) and with Matlab, as well as with data analysis software, preferably Magneti Marelli (or equivalent),</li>\n<li>You enjoy understanding and implementing methods from the cutting-edge literature in optimisation, statistics, and machine learning,</li>\n<li>You have strong analytical skills and a good sense of synthesis,</li>\n<li>You have excellent organisational and communication skills,</li>\n<li>You have a proactive attitude and can work independently and with full confidentiality,</li>\n<li>Fluency in French (spoken and written) is strongly desired,</li>\n<li>Fluency in English (spoken and written) is essential for this position,</li>\n<li>You are willing to travel frequently: trips are to be expected as needed (10 to 15 races per year),</li>\n<li>You are keen to work in collaboration with multiple stakeholders: the FIA, IMSA, and the ACO&#39;s subsidiaries.</li>\n</ul>\n<p><strong>Useful information</strong></p>\n<p>13th-month salary</p>\n<p>Health insurance</p>\n<p>Company restaurant</p>\n<p><strong>The Automobile Club de l&#39;Ouest in brief</strong></p>\n<p>The ACO is the creator and organiser of the 24 Heures du Mans, held since 1923 on the legendary Le Mans circuit. This iconic motorsport event is now enjoying rapid international growth thanks to the FIA World Endurance Championship (FIA WEC), which lets enthusiasts around the world attend Le Mans-style races at eight events held across four continents.</p>\n<p>The Le Mans circuit also hosts many other international events, such as the 24 Heures Motos, the Grand Prix France Moto, Le Mans Classic, the 24 Heures Camions, and the 24 Heures Karting.</p>\n<p>In addition, the ACO develops other activities related to motorsport and road safety that contribute to its national and international reach:</p>\n<ul>\n<li>Organising corporate seminars and events in a legendary venue,</li>\n<li>An international karting complex,</li>\n<li>The Musée des 24 Heures,</li>\n<li>Driving schools,</li>\n<li>Track rental.</li>\n</ul>\n<p>All of these activities are run by teams of employees driven by the ACO&#39;s values (Ethics, Team Spirit, Excellence, Independence, Sustainability, and Passion), working every day at the very heart of the 24 Heures circuit.</p>\n<p>Our employees&#39; expertise covers fields including motorsport, sales and marketing, communications, and cross-functional roles.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f0a0b3ef-b9f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Automobile Club de l'Ouest","sameAs":"https://recrutement.lemans.org","logo":"https://logos.yubhub.co/recrutement.lemans.org.png"},"x-apply-url":"https://recrutement.lemans.org/offer/11284-NDQ0NTMtRFdFaWlp?utm_source=yubhub.co&utm_medium=jobs_feed&utm_campaign=apply","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"13th-month salary","x-skills-required":["Matlab","Magneti Marelli","simulation software","motorsport electronics","optimisation","statistics","machine learning"],"x-skills-preferred":["french","english"],"datePosted":"2026-03-06T14:18:24.142Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"Matlab, Magneti Marelli, simulation software, motorsport electronics, optimisation, statistics, machine learning, french, english"}]}