{"version":"0.1","company":{"name":"YubHub","url":"https://yubhub.co","jobsUrl":"https://yubhub.co/jobs/skill/apac"},"x-facet":{"type":"skill","slug":"apac","display":"APAC","count":100},"x-feed-size-limit":100,"x-feed-sort":"enriched_at desc","x-feed-notice":"This feed contains at most 100 jobs (the most recently enriched). For the full corpus, use the paginated /stats/by-facet endpoint or /search.","x-generator":"yubhub-xml-generator","x-rights":"Free to redistribute with attribution: \"Data by YubHub (https://yubhub.co)\"","x-schema":"Each entry in `jobs` follows https://schema.org/JobPosting. YubHub-native raw fields carry `x-` prefix.","jobs":[{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d384afb9-a9d"},"title":"Technical Engagement Manager","description":"<p>We are seeking a highly skilled and experienced Technical Engagement Manager to join our dynamic team. You will be responsible for working closely with a few of our largest customers to understand their business challenges and requirements, architecting solutions using Starburst products and driving business outcomes across the customer journey, from initial engagement to successful adoption.</p>\n<p>As a Technical Engagement Manager, you will establish trust and credibility by demonstrating Data &amp; AI industry knowledge, understanding of the buyer&#39;s organization, and a track record of successful engagements. 
You will build and nurture strong relationships with the champion, who serves as the internal advocate for the engagement within the customer organization.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Establishing trust and credibility with customers</li>\n<li>Building and nurturing strong relationships with champions</li>\n<li>Offering support and guidance to champions throughout the engagement process</li>\n<li>Proactively addressing concerns and objections raised by other stakeholders</li>\n<li>Soliciting feedback from other stakeholders throughout the engagement process</li>\n<li>Collaborating with sales teams to understand customer needs and objectives</li>\n<li>Driving adoption of Starburst culminating in the Customer reaching its success criteria</li>\n</ul>\n<p>Some of the things we look for include:</p>\n<ul>\n<li>A Bachelor&#39;s degree in business, technology, or a related field</li>\n<li>A deep understanding of data architecture principles, including data modeling, data integration, and data warehousing</li>\n<li>Proficiency in SQL and experience with distributed query engines (e.g., Presto, Trino, Apache Spark)</li>\n<li>Strong problem-solving skills and the ability to think strategically about business challenges and technical solutions</li>\n<li>A proven track record of successfully managing customer engagements and delivering business outcomes</li>\n<li>Excellent communication and interpersonal skills, with the ability to build strong relationships with customers and internal teams</li>\n</ul>\n<p>We offer a competitive salary range of $155,000-$190,000 USD, depending on relevant skills, experience, education, and training, and specific work location. 
All employees receive equity packages (ISOs) and have access to a comprehensive benefits offering.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d384afb9-a9d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Starburst","sameAs":"https://www.starburst.io/","logo":"https://logos.yubhub.co/starburst.io.png"},"x-apply-url":"https://job-boards.greenhouse.io/starburst/jobs/5196535008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$155,000-$190,000 USD","x-skills-required":["SQL","Presto","Trino","Apache Spark","Data architecture principles","Data modeling","Data integration","Data warehousing"],"x-skills-preferred":[],"datePosted":"2026-04-24T16:11:50.314Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Charlotte, NC"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"SQL, Presto, Trino, Apache Spark, Data architecture principles, Data modeling, Data integration, Data warehousing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":155000,"maxValue":190000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2b83ad5-09e"},"title":"Manufacturing Engineering Manager (PCBA)","description":"<p>As the Manufacturing Engineering Manager (PCBA), you will lead the production strategy to establish and scale the internal PCBA production capabilities to meet our aggressive rate targets. 
You will work with the most cutting-edge MicroGEO satellite technology and lead all PCBA assembly and test operations to achieve the highest levels of quality to ensure our hardware meets mission-critical requirements.</p>\n<p>Your responsibilities will span the full production lifecycle, including equipment and process selection, factory layout, capacity planning, staffing, new product introduction, and collaboration with design engineering to champion design-for-manufacturability improvements.</p>\n<p>The ideal candidate is self-driven, extremely detail-oriented, and thrives in a highly cross-functional, dynamic environment while effectively mentoring and guiding others.</p>\n<p>If you are a proven PCBA production leader who thrives in a fast-paced total-ownership environment, and are ready for your next challenge - this role might be for you.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Lead a team of engineers, specialists, and technicians performing component and PCBA processes for dev and flight hardware to drawing, process specifications, and work order requirements through all phases of the product life cycle</li>\n<li>Interface with engineering, production, and supply chain to communicate and resolve nonconformances while ensuring timely escalation and pursuing corrective action as required</li>\n<li>Champion Design-for-Manufacturing (DFM) with design engineering to evaluate and influence hardware designs for manufacturability</li>\n<li>Develop and implement an effective, robust, and efficient production strategy to establish internal PCBA production capabilities that enable our factory rate targets</li>\n<li>Take direct ownership of the physical production layout, optimizing for material flow from receiving through all processes leading to the finished product</li>\n<li>Develop new processes, GSE, tooling, etc. to support the production of new hardware designs, as well as continuous improvement of existing processes</li>\n<li>Create, maintain, and communicate detailed schedules to ensure your team and other stakeholders are aware of milestones and deadlines</li>\n<li>Create metrics and tools to monitor production success such as product yield and on-time delivery</li>\n<li>Create and enforce a culture of safety and quality throughout the workplace</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor of Science in Mechanical Engineering, Electrical Engineering, or equivalent technical degree</li>\n<li>5+ years of PCBA experience in an aerospace, military, or manufacturing environment</li>\n<li>3+ years in a leadership or team lead capacity transitioning products from development to production</li>\n</ul>\n<p>Bonus:</p>\n<ul>\n<li>Experience in a fast-paced, iterative design or manufacturing environment within the aerospace, automotive, or consumer electronics industries</li>\n<li>Experience with the design and manufacturing of high-reliability electronics</li>\n<li>Ability to develop and maintain high-performing suppliers and contract manufacturers</li>\n<li>Experience transitioning products from design to manufacturing and scaling to high volume</li>\n<li>Experience implementing automation into production processes</li>\n<li>Experience with design of experiments, gauge repeatability and reproducibility, and process qualifications</li>\n<li>Experience working with and defining requirements for ERP, MRP, and MES systems</li>\n<li>Knowledgeable in ISO 9001/AS9100 quality management systems</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b2b83ad5-09e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Astranis","sameAs":"https://astranis.com/","logo":"https://logos.yubhub.co/astranis.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/astranis/jobs/4629658006","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$150,000-$230,000 USD","x-skills-required":["PCBA","MicroGEO satellite technology","Design-for-Manufacturing","Process selection","Factory layout","Capacity planning","Staffing","New product introduction","Collaboration with design engineering","Team leadership","Communication","Problem-solving","Quality management","Safety management","ERP","MRP","MES","ISO 9001/AS9100"],"x-skills-preferred":[],"datePosted":"2026-04-24T15:19:48.228Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PCBA, MicroGEO satellite technology, Design-for-Manufacturing, Process selection, Factory layout, Capacity planning, Staffing, New product introduction, Collaboration with design engineering, Team leadership, Communication, Problem-solving, Quality management, Safety management, ERP, MRP, MES, ISO 9001/AS9100","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":150000,"maxValue":230000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f8312d06-d7b"},"title":"Customer Operations Analyst","description":"<p>At Greenlight, we&#39;re looking for a Customer Operations Analyst to play a key role in helping us plan, analyze, and optimize customer support operations. 
This role sits at the intersection of operations and analytics, and is responsible for translating customer demand into actionable insights, improving forecasting and capacity planning processes, and driving operational efficiency across both in-house and BPO teams.</p>\n<p>As a Customer Operations Analyst, you will:</p>\n<ul>\n<li>Build and maintain contact forecasts across channels, incorporating seasonality, growth trends, product launches, and marketing initiatives</li>\n<li>Translate forecasts into staffing and capacity plans across in-house and BPO teams</li>\n<li>Monitor forecast accuracy and adjust assumptions based on performance and new data</li>\n<li>Analyze customer support data to identify trends, drivers of volume, and opportunities for efficiency</li>\n<li>Build and maintain models (Excel/Google Sheets, SQL, etc.) to support forecasting and operational decision-making</li>\n<li>Deliver clear, actionable insights to leadership, highlighting risks, tradeoffs, and recommendations</li>\n<li>Support initiatives to improve operational efficiency, including channel strategy, automation (AI), and process improvements</li>\n<li>Partner with Product, Marketing, Data, and Operations teams to understand demand drivers and align planning</li>\n<li>Establish and support operating cadences (weekly/monthly reviews, forecasting updates, capacity planning discussions)</li>\n<li>Identify opportunities to improve agent utilization, reduce cost per contact, and maintain service levels</li>\n<li>Support scheduling and real-time operations as needed, with a focus on improving systems rather than owning execution long-term</li>\n<li>Collaborate with BPO partners to align on staffing plans and performance expectations</li>\n<li>Simplify and improve existing processes to increase transparency and scalability</li>\n<li>Identify opportunities for automation as tools and systems evolve</li>\n<li>Help shape how customer operations planning evolves over time</li>\n<li>Assess 
current tools and processes, identify opportunities for improvement, and partner cross-functionally to implement new solutions</li>\n<li>Support and lead the rollout of new tools (e.g., WFM, reporting, or analytics platforms) to improve visibility and scalability</li>\n</ul>\n<p>We&#39;re looking for someone with 3-6 years of experience in operations, analytics, customer support, or workforce management, with strong analytical skills and experience working with data to build forecasts, models, or operational insights. You should be able to translate data into clear, actionable recommendations and have strong communication and stakeholder management skills.</p>","url":"https://yubhub.co/jobs/job_f8312d06-d7b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Greenlight","sameAs":"https://www.greenlight.com/","logo":"https://logos.yubhub.co/greenlight.com.png"},"x-apply-url":"https://jobs.lever.co/greenlight/098e53ae-1cf6-4435-9f79-e0f8f479dd95","x-work-arrangement":"remote","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$100,000 - $130,000","x-skills-required":["forecasting","capacity planning","data analysis","model building","SQL","Excel","Google Sheets","WFM","reporting","analytics"],"x-skills-preferred":["AI","process improvement","channel strategy","BPO management"],"datePosted":"2026-04-24T15:19:39.006Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Atlanta"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Finance","skills":"forecasting, capacity planning, data analysis, model building, SQL, Excel, Google Sheets, WFM, reporting, analytics, AI, process improvement, channel strategy, BPO 
management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":130000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0abdd674-18a"},"title":"Environmental Test Supervisor","description":"<p>As the Environmental Test Supervisor, you will be responsible for the day-to-day operations, readiness, and execution of the Environmental Test Lab to support testing of spacecraft, sub-assemblies, and components in simulated space environments. You will lead and develop a high-performing test technician team and ensure the lab operates safely, efficiently, and at high quality to meet program and production goals.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Manage day-to-day activities of the test lab and maintain operational readiness to support 24/7 operations and critical test campaigns</li>\n<li>Lead and develop the test technician team, setting clear expectations, driving accountability, and supporting ongoing skill growth</li>\n<li>Own detailed test scheduling and execution, aligning with program priorities and engineering needs to meet milestones and deadlines</li>\n<li>Drive test execution quality and consistency through robust procedures, training, and technician oversight</li>\n<li>Oversee operation, maintenance, and calibration of environmental test equipment including vibration shakers, thermal chambers, and thermal vacuum chambers</li>\n<li>Partner closely with the cross-functional team to improve test processes, increase throughput, and support development of new lab capabilities</li>\n<li>Identify and eliminate bottlenecks across equipment, staffing, and processes to improve lab efficiency and utilization</li>\n<li>Promote and enforce a culture of safety, quality, and accountability across the lab</li>\n<li>Implement and uphold high standards of 5S practices to maximize efficiency and productivity</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of experience in testing of electronics, electro-mechanical assemblies, or integrated systems</li>\n<li>3+ years of experience leading technicians in a fast-paced and complex test or production environment</li>\n<li>Demonstrated ownership of test lab or production operations, including scheduling, equipment utilization, and team performance</li>\n<li>Strong understanding of thermal, thermal vacuum, and vibration testing</li>\n<li>Must be willing to work all shifts, overtime and/or weekends as needed, including support of 24/7 operations for critical test campaigns</li>\n</ul>","url":"https://yubhub.co/jobs/job_0abdd674-18a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Astranis","sameAs":"https://astranis.com/","logo":"https://logos.yubhub.co/astranis.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/astranis/jobs/4669797006","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$140,000-$170,000 USD","x-skills-required":["Thermal testing","Vibration testing","Test equipment operation and maintenance","Test scheduling and execution","Team leadership and development"],"x-skills-preferred":["Manufacturing execution systems (MES)","Test automation software","Data/telemetry platforms","Scaling operations (team size, equipment capacity, or throughput)"],"datePosted":"2026-04-24T15:19:04.833Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Thermal testing, Vibration testing, Test equipment operation and maintenance, Test scheduling and execution, Team leadership and development, Manufacturing execution systems (MES), Test automation software, Data/telemetry platforms, Scaling operations (team size, equipment capacity, or 
throughput)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":140000,"maxValue":170000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8fea43b1-2f0"},"title":"Flight Test Engineer","description":"<p>As a Flight Test Engineer at Anduril Industries, you will plan, execute, and report on developmental and operational flight test and evaluation activities. You will work alongside a team of engineers and test specialists to assist in the wider management and coordination of test and evaluation within the UK, US, and Europe under both civil and military approval schemes.</p>\n<p>Your primary responsibilities will include determining and developing various approaches to achieve test and evaluation goals, guiding and performing the completion of test activities, and supporting or leading test planning, establishing test measures of success, conducting test (ground and flight), analyzing test performance, and producing high-quality reports to inform future development.</p>\n<p>You will have the opportunity to work on new aircraft development, with an emphasis on electronics, avionics, propulsion, flight characteristics, rugged design, and quality at scale. 
You will also have the chance to work with a variety of technologies, including COTS and bespoke UAS technologies, and to develop your skills in data analysis and interpretation.</p>\n<p>If you are interested in working in a fast-paced and dynamic environment where your work directly impacts the products that are fielded, this could be an excellent opportunity for you.</p>\n<p><strong>Required Qualifications:</strong></p>\n<ul>\n<li>Master&#39;s degree in an applicable field of engineering</li>\n<li>3+ years&#39; flight test experience</li>\n<li>Expertise and working knowledge of UAS operating rules and regulations in the UK</li>\n<li>Experience in designing and executing rigorous test protocols for new and developmental systems and skillful in data analysis</li>\n<li>Thorough understanding of COTS and bespoke UAS technologies</li>\n<li>Expertise of internal combustion, electric, and/or hybrid powertrains</li>\n<li>Knowledge of flight control systems and flight dynamics of small, medium, and/or large rotorcraft</li>\n<li>Knowledge of EO/IR/Lidar sensor payloads and the intricacies required to plan and execute tests when deployed on maneuverable rotorcraft</li>\n<li>An understanding of and passion for engineering quality concepts, principles, codes, and experience demonstrating a broad application of those concepts</li>\n<li>Quick learner with the capacity to accurately implement new concepts and effectively organize, schedule, and manage multiple project phases</li>\n<li>Strong communication skills, both written and verbal, with strong interpersonal abilities</li>\n<li>Comfortable in a dynamic, team-oriented environment performing complex tasks in one or more engineering areas</li>\n<li>Flexibility to work additional hours and travel for testing activities as required by the business</li>\n<li>Valid driver&#39;s license</li>\n<li>Ability to immediately obtain and maintain a UK security clearance</li>\n</ul>\n<p><strong>Preferred 
Qualifications:</strong></p>\n<ul>\n<li>Formal T&amp;E training at a recognized Test Pilot School</li>\n<li>Experience within developmental and operational test &amp; evaluation organizations</li>\n<li>Direct experience with UAS design or operational deployment</li>\n<li>Advanced knowledge of UAV testing methodologies and data analysis</li>\n<li>Hold UK General VLOS Certificate</li>\n<li>Remote pilot experience on VTOL UAS</li>\n<li>Ability to travel up to 50% of the time</li>\n<li>Familiarity with programming languages such as Go, Java, C++, Python, JavaScript, etc</li>\n</ul>\n<p><strong>Salary:</strong> £60,000-£80,000 per year</p>\n<p><strong>Benefits:</strong></p>\n<ul>\n<li>Comprehensive medical, dental, and vision plans at little to no cost to you (US roles)</li>\n<li>We cover full cost of medical insurance premiums for you and your dependents (UK &amp; AUS roles)</li>\n<li>We offer an annual contribution toward your private health insurance for you and your dependents (IE roles)</li>\n<li>Income Protection: Anduril covers life and disability insurance for all employees</li>\n<li>Generous time off: Highly competitive PTO plans with a holiday hiatus in December</li>\n<li>Caregiver &amp; Wellness Leave is available to care for family members, bond with a new baby, or address your own medical needs</li>\n<li>Family Planning &amp; Parenting Support: Coverage for fertility treatments (e.g., IVF, preservation), adoption, and gestational carriers, along with resources to support you and your partner from planning to parenting</li>\n<li>Mental Health Resources: Access free mental health resources 24/7, including therapy and life coaching</li>\n<li>Additional work-life services, such as legal and financial support, are also available</li>\n<li>Professional Development: Annual reimbursement for professional development</li>\n<li>Commuter Benefits: Company-funded commuter benefits based on your region</li>\n<li>Relocation Assistance: Available depending on role 
eligibility</li>\n<li>Retirement Savings Plan: Traditional 401(k), Roth, and after-tax (mega backdoor Roth) options (US roles)</li>\n<li>Pension plan with employer match (UK &amp; IE roles)</li>\n<li>Superannuation plan (AUS roles)</li>\n</ul>","url":"https://yubhub.co/jobs/job_8fea43b1-2f0","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril Industries","sameAs":"https://www.andurilindustries.com/","logo":"https://logos.yubhub.co/andurilindustries.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/5111854007","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"£60,000-£80,000 per year","x-skills-required":["Master's degree in an applicable field of engineering","3+ years' flight test experience","Expertise and working knowledge of UAS operating rules and regulations in the UK","Experience in designing and executing rigorous test protocols for new and developmental systems and skillful in data analysis","Thorough understanding of COTS and bespoke UAS technologies","Expertise of internal combustion, electric, and/or hybrid powertrains","Knowledge of flight control systems and flight dynamics of small, medium, and/or large rotorcraft","Knowledge of EO/IR/Lidar sensor payloads and the intricacies required to plan and execute tests when deployed on maneuverable rotorcraft","An understanding of and passion for engineering quality concepts, principles, codes, and experience demonstrating a broad application of those concepts","Quick learner with the capacity to accurately implement new concepts and effectively organize, schedule, and manage multiple project phases","Strong communication skills, both written and verbal, with strong interpersonal abilities","Comfortable in a dynamic, team-oriented environment performing complex tasks in 
one or more engineering areas","Flexibility to work additional hours and travel for testing activities as required by the business","Valid driver's license","Ability to immediately obtain and maintain a UK security clearance"],"x-skills-preferred":["Formal T&E training at a recognized Test Pilot School","Experience within developmental and operational test & evaluation organizations","Direct experience with UAS design or operational deployment","Advanced knowledge of UAV testing methodologies and data analysis","Hold UK General VLOS Certificate","Remote pilot experience on VTOL UAS","Ability to travel up to 50% of the time","Familiarity with programming languages such as Go, Java, C++, Python, JavaScript, etc"],"datePosted":"2026-04-24T15:17:09.490Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Llanbedr, Wales, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Master's degree in an applicable field of engineering, 3+ years' flight test experience, Expertise and working knowledge of UAS operating rules and regulations in the UK, Experience in designing and executing rigorous test protocols for new and developmental systems and skillful in data analysis, Thorough understanding of COTS and bespoke UAS technologies, Expertise of internal combustion, electric, and/or hybrid powertrains, Knowledge of flight control systems and flight dynamics of small, medium, and/or large rotorcraft, Knowledge of EO/IR/Lidar sensor payloads and the intricacies required to plan and execute tests when deployed on maneuverable rotorcraft, An understanding of and passion for engineering quality concepts, principles, codes, and experience demonstrating a broad application of those concepts, Quick learner with the capacity to accurately implement new concepts and effectively organize, schedule, and manage multiple project phases, Strong communication skills, both written and verbal, 
with strong interpersonal abilities, Comfortable in a dynamic, team-oriented environment performing complex tasks in one or more engineering areas, Flexibility to work additional hours and travel for testing activities as required by the business, Valid driver's license, Ability to immediately obtain and maintain a UK security clearance, Formal T&E training at a recognized Test Pilot School, Experience within developmental and operational test & evaluation organizations, Direct experience with UAS design or operational deployment, Advanced knowledge of UAV testing methodologies and data analysis, Hold UK General VLOS Certificate, Remote pilot experience on VTOL UAS, Ability to travel up to 50% of the time, Familiarity with programming languages such as Go, Java, C++, Python, JavaScript, etc","baseSalary":{"@type":"MonetaryAmount","currency":"GBP","value":{"@type":"QuantitativeValue","minValue":60000,"maxValue":80000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_49ef318f-90a"},"title":"Director, Site Reliability Engineer | Senior Engineering Team Director","description":"<p>We&#39;re seeking a Site Reliability Engineering (SRE) Lead to design, build, and maintain resilient, high-scale systems supporting BlackRock&#39;s Private Markets platform. In this hands-on leadership role, you&#39;ll apply deep engineering expertise to solve complex challenges, guide a global team, shape technical direction, and communicate effectively with senior stakeholders, ensuring the reliability of mission-critical systems that power private market investment workflows and decision-making. 
You will drive the adoption of AI-driven solutions to accelerate incident detection and triage, reduce toil, improve forecasting and capacity planning, and strengthen end-to-end observability and resilience.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Take ownership of project priorities, deadlines and deliverables using Agile methodologies, with clear outcomes around reliability automation and AI-enabled operations</li>\n<li>Understand and refine business and functional requirements, translating them into SLOs/SLIs and AI-assisted observability and support capabilities</li>\n<li>Hands-on approach to getting work done; this role requires a “roll your sleeves up” mentality, including building and operationalizing reliability tooling and automation that measurably reduces toil and improves stability</li>\n<li>Be a leader with vision and a partner in brainstorming solutions for team productivity and efficiency to improve engineering effectiveness</li>\n<li>Drive priority setting of the engineering teams, balancing foundational reliability work with delivery of new product features</li>\n<li>Improve engineering culture by encouraging continuous focus on reliability across the entire application lifecycle, and by adopting AI-enabled SRE practices (e.g., intelligent alerting, automated diagnosis, and self-healing where appropriate)</li>\n<li>Proactive participant in architectural and design decisions, including AI-ready telemetry, data quality, and model integration patterns for operational analytics</li>\n<li>Design and implement end-to-end monitoring solutions for application and infrastructure components, leveraging modern observability platforms plus AI/ML techniques for anomaly detection, correlation, and alert noise reduction</li>\n<li>Drive the engineering of capacity management and demand forecasting solutions, including predictive analytics/ML approaches where they add measurable value</li>\n<li>Act as a culture carrier and leader, passing on SRE knowledge and 
best practices to the engineering team</li>\n<li>Drive detailed root cause investigations for production incidents with rigorous focus on issue avoidance, using AI-assisted correlation/analysis to accelerate time-to-insight</li>\n<li>Create/coordinate retros for significant incidents, ensuring learnings are captured in automated/AI-assisted runbooks and embedded into prevention mechanisms</li>\n<li>Additional core engineering functions, such as adding custom telemetry metrics/logs/traces to the code base of in-scope applications to enable AI/ML-driven operational insights</li>\n<li>Anticipate new opportunities to continuously evolve the resiliency profile of scoped applications and infrastructure</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>B.S. / M.S. degree in Computer Science, Engineering or a related discipline with 10+ years of experience</li>\n<li>Experience leading high-performing engineering/SRE teams, with a track record of driving continuous improvement through automation and AI-enabled operations</li>\n<li>Demonstrated ability to represent engineering/SRE priorities, status, and risk to senior leadership stakeholders with clear, executive-ready communication</li>\n<li>Hands-on experience building or operating AI-assisted capabilities (AIOps, ML-based anomaly detection, or GenAI workflows) in an engineering/production environment</li>\n<li>A passion for providing engineering support for highly available, performant full stack applications with a “Student of Technology” attitude</li>\n<li>Experience with relational and NoSQL databases (e.g. 
Redis, Apache Cassandra)</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Retirement investment and tools designed to help you in building a sound financial future</li>\n<li>Access to education reimbursement</li>\n<li>Comprehensive resources to support your physical health and emotional well-being</li>\n<li>Family support programs</li>\n<li>Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about</li>\n</ul>\n<p>Hybrid Work Model:</p>\n<ul>\n<li>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all</li>\n<li>Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week</li>\n<li>Some business groups may require more time in the office due to their roles and responsibilities</li>\n<li>We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation</li>\n</ul>\n<p>About BlackRock:</p>\n<ul>\n<li>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being</li>\n<li>Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses</li>\n<li>Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_49ef318f-90a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/cLBuSgz7avHiG3cKzS91ZB/director%2C-site-reliability-engineer-%7C-senior-engineering-team-director-in-england-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Site Reliability Engineering","Agile Methodologies","Reliability Automation","AI-Enabled Operations","Business Requirements","Functional Requirements","SLOs/SLIs","Observability","Support Capabilities","Reliability Tooling","Automation","Stability","Leadership","Vision","Team Productivity","Efficiency","Engineering Effectiveness","Priority Setting","Foundational Reliability","New Product Features","Engineering Culture","Reliability Across Application Lifecycle","AI-Enabled SRE Practices","Intelligent Alerting","Automated Diagnosis","Self-Healing","Architectural Decisions","AI-Ready Telemetry","Data Quality","Model Integration Patterns","Operational Analytics","Monitoring Solutions","Application Components","Infrastructure Components","Anomaly Detection","Correlation","Alert Noise Reduction","Capacity Management","Demand Forecasting","Predictive Analytics","ML Approaches","Root Cause Investigations","Production Incidents","Issue Avoidance","AI-Assisted Correlation","Time-To-Insight","Retros","Significant Incidents","Learnings","Runbooks","Prevention Mechanisms","Custom Telemetry Metrics","Logs","Traces","AI/ML-Driven Operational Insights","Resiliency Profile","Scoped Applications","Infrastructure","Relational Database","NoSQL Database","Redis","Apache 
Cassandra"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:19:53.538Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"England"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Site Reliability Engineering, Agile Methodologies, Reliability Automation, AI-Enabled Operations, Business Requirements, Functional Requirements, SLOs/SLIs, Observability, Support Capabilities, Reliability Tooling, Automation, Stability, Leadership, Vision, Team Productivity, Efficiency, Engineering Effectiveness, Priority Setting, Foundational Reliability, New Product Features, Engineering Culture, Reliability Across Application Lifecycle, AI-Enabled SRE Practices, Intelligent Alerting, Automated Diagnosis, Self-Healing, Architectural Decisions, AI-Ready Telemetry, Data Quality, Model Integration Patterns, Operational Analytics, Monitoring Solutions, Application Components, Infrastructure Components, Anomaly Detection, Correlation, Alert Noise Reduction, Capacity Management, Demand Forecasting, Predictive Analytics, ML Approaches, Root Cause Investigations, Production Incidents, Issue Avoidance, AI-Assisted Correlation, Time-To-Insight, Retros, Significant Incidents, Learnings, Runbooks, Prevention Mechanisms, Custom Telemetry Metrics, Logs, Traces, AI/ML-Driven Operational Insights, Resiliency Profile, Scoped Applications, Infrastructure, Relational Database, NoSQL Database, Redis, Apache Cassandra"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_73381153-35f"},"title":"Security Developer - Associate","description":"<p>About this role</p>\n<p>Data Engineer – AWS Native Data Platforms</p>\n<p>Technology &amp; Operations | BlackRock</p>\n<p>We are looking for a Data Engineer to join BlackRock’s Technology &amp; Operations organization, supporting the design, build, and operation of our Cloud-native data platform that powers 
critical technology, security, and operational use cases across the firm.</p>\n<p>This role sits within a team responsible for building reliable, secure, and observable data pipelines in a highly regulated environment. You will work closely with technology, operations, and information security partners to deliver data products that enable transparency, automation, and risk-informed decision making at scale.</p>\n<p>The ideal candidate is an engineer at heart, comfortable working end-to-end across ingestion, transformation, orchestration, and governance, who values clean design, strong documentation, and operational excellence.</p>\n<p>What You’ll Do</p>\n<ul>\n<li>Design, build, and maintain AWS-native data pipelines for batch and event-driven workloads, with a focus on reliability, scalability, and security.</li>\n<li>Develop and operate data workflows using Apache Airflow for orchestration and Python and SQL for transformation and data quality logic.</li>\n<li>Implement data transformations and models using modern analytics engineering practices (e.g., dbt-style patterns, tested transformations, incremental processing).</li>\n<li>Integrate data from a variety of enterprise sources, including cloud services, internal platforms, APIs, and security/operational telemetry.</li>\n<li>Partner with Information Security, Risk, and Operations teams to translate business and control requirements into durable data solutions.</li>\n<li>Embed data quality, lineage, and observability into pipelines using testing frameworks and monitoring standards.</li>\n<li>Operate within BlackRock’s cloud security and governance standards, including IAM, encryption, logging, and secrets management.</li>\n<li>Contribute to CI/CD pipelines, infrastructure-as-code patterns, and standardized platform tooling.</li>\n<li>Document data products, pipelines, and operating procedures to support 
transparency and long-term maintainability.</li>\n<li>Participate in design reviews, code reviews, and incident/post-incident analysis to continuously improve platform resilience.</li>\n</ul>\n<p>Core Technologies You’ll Work With</p>\n<ul>\n<li>AWS: S3, IAM, Glue, Lambda, Step Functions, CloudWatch, Secrets Manager, OpenSearch, and related native services</li>\n<li>Orchestration: Apache Airflow</li>\n<li>Languages: Python, SQL</li>\n<li>Data Modeling &amp; Transformation: Analytics-engineering patterns (e.g., dbt-like workflows)</li>\n<li>Data Quality &amp; Testing: Schema and data validation frameworks (e.g., Great Expectations-style approaches)</li>\n<li>Infrastructure &amp; Delivery: CI/CD, Git-based workflows, infrastructure-as-code (Terraform or equivalent)</li>\n<li>Security &amp; Governance: Encryption, access controls, audit logging, platform security baselines</li>\n</ul>\n<p>What We’re Looking For</p>\n<ul>\n<li>3-6 years of experience as a Data Engineer, Analytics Engineer, or similar role building production data pipelines.</li>\n<li>Strong hands-on experience with AWS-native data services in a regulated or enterprise environment.</li>\n<li>Proficiency in Python and SQL, with an emphasis on readable, testable, and maintainable code.</li>\n<li>Experience with workflow orchestration (Airflow or equivalent).</li>\n<li>Solid understanding of data modeling, incremental processing, and performance optimization.</li>\n<li>Familiarity with data quality, monitoring, and operational support for production data systems.</li>\n<li>Experience collaborating with cross-functional partners (e.g., security, operations, product, or risk teams).</li>\n<li>A disciplined approach to documentation, change management, and incident response.</li>\n</ul>\n<p>Nice to Have</p>\n<ul>\n<li>Experience 
supporting security, risk, or compliance data domains.</li>\n<li>Exposure to OpenSearch / Elasticsearch, metrics pipelines, or log-analytics platforms.</li>\n<li>Familiarity with cloud security controls, IAM design, and secrets management.</li>\n<li>Experience building data platforms that support executive-level reporting or regulatory oversight.</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits including: retirement investment and tools designed to help you in building a sound financial future; access to education reimbursement; comprehensive resources to support your physical health and emotional well-being; family support programs; and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</p>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. 
As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_73381153-35f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/noeyyV7CbztGYxPetLe2Cu/security-developer---associate-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","Apache Airflow","Python","SQL","Data Modeling & Transformation","Data Quality & Testing","Infrastructure & Delivery","Security & Governance"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:18:02.924Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"AWS, Apache Airflow, Python, SQL, Data Modeling & Transformation, Data Quality & Testing, Infrastructure & Delivery, Security & Governance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_086c2470-e2b"},"title":"Lead Developer - Vice President","description":"<p>About this role</p>\n<p>Data Engineer – AWS Native Data Platforms</p>\n<p>We are looking for a Data Engineer to join BlackRock’s Technology &amp; Operations organization, supporting the design, build, and operation of our Cloud-native data platform that powers critical technology, security, and operational use cases across the firm.</p>\n<p>This role sits within a team responsible for building reliable, secure, and observable data pipelines in a highly regulated environment. 
You will work closely with technology, operations, and information security partners to deliver data products that enable transparency, automation, and risk-informed decision making at scale.</p>\n<p>The ideal candidate is an engineer at heart, comfortable working end-to-end across ingestion, transformation, orchestration, and governance, who values clean design, strong documentation, and operational excellence.</p>\n<p>What You’ll Do</p>\n<ul>\n<li>Design, build, and maintain AWS-native data pipelines for batch and event-driven workloads, with a focus on reliability, scalability, and security.</li>\n<li>Develop and operate data workflows using Apache Airflow for orchestration and Python and SQL for transformation and data quality logic.</li>\n<li>Implement data transformations and models using modern analytics engineering practices (e.g., dbt-style patterns, tested transformations, incremental processing).</li>\n<li>Integrate data from a variety of enterprise sources, including cloud services, internal platforms, APIs, and security/operational telemetry.</li>\n<li>Partner with Information Security, Risk, and Operations teams to translate business and control requirements into durable data solutions.</li>\n<li>Embed data quality, lineage, and observability into pipelines using testing frameworks and monitoring standards.</li>\n<li>Operate within BlackRock’s cloud security and governance standards, including IAM, encryption, logging, and secrets management.</li>\n<li>Contribute to CI/CD pipelines, infrastructure-as-code patterns, and standardized platform tooling.</li>\n<li>Document data products, pipelines, and operating procedures to support transparency and long-term maintainability.</li>\n<li>Participate in design reviews, code reviews, and incident/post-incident analysis to continuously improve platform resilience.</li>\n</ul>\n<p>Core Technologies You’ll Work With</p>\n<ul>\n<li>AWS: S3, IAM, Glue, Lambda, Step Functions, CloudWatch, Secrets Manager, OpenSearch, 
and related native services</li>\n<li>Orchestration: Apache Airflow</li>\n<li>Languages: Python, SQL</li>\n<li>Data Modeling &amp; Transformation: Analytics-engineering patterns (e.g., dbt-like workflows)</li>\n<li>Data Quality &amp; Testing: Schema and data validation frameworks (e.g., Great Expectations-style approaches)</li>\n<li>Infrastructure &amp; Delivery: CI/CD, Git-based workflows, infrastructure-as-code (Terraform or equivalent)</li>\n<li>Security &amp; Governance: Encryption, access controls, audit logging, platform security baselines</li>\n</ul>\n<p>What We’re Looking For</p>\n<ul>\n<li>7+ years of relevant experience as a Data Engineer, Analytics Engineer, or similar role building production data pipelines.</li>\n<li>Strong hands-on experience with AWS-native data services in a regulated or enterprise environment.</li>\n<li>Proficiency in Python and SQL, with an emphasis on readable, testable, and maintainable code.</li>\n<li>Experience with workflow orchestration (Airflow or equivalent).</li>\n<li>Solid understanding of data modeling, incremental processing, and performance optimization.</li>\n<li>Familiarity with data quality, monitoring, and operational support for production data systems.</li>\n<li>Experience collaborating with cross-functional partners (e.g., security, operations, product, or risk teams).</li>\n<li>A disciplined approach to documentation, change management, and incident response.</li>\n</ul>\n<p>Nice to Have</p>\n<ul>\n<li>Experience supporting security, risk, or compliance data domains.</li>\n<li>Exposure to OpenSearch / Elasticsearch, metrics pipelines, or log-analytics platforms.</li>\n<li>Familiarity with cloud security controls, IAM design, and secrets management.</li>\n<li>Experience building data platforms that support executive-level reporting or regulatory oversight.</li>\n</ul>\n<p>Our benefits</p>\n<p>To help you stay energized, engaged, and inspired, we offer a wide range of employee benefits 
including:</p>\n<ul>\n<li>Retirement investment and tools designed to help you in building a sound financial future.</li>\n<li>Access to education reimbursement.</li>\n<li>Comprehensive resources to support your physical health and emotional well-being.</li>\n<li>Family support programs.</li>\n<li>Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.</li>\n</ul>\n<p>Our hybrid work model</p>\n<p>BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.</p>\n<p>About BlackRock</p>\n<p>At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.</p>\n<p>This mission would not be possible without our smartest investment – the one we make in our employees. 
It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_086c2470-e2b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com/","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/81uPaQe8ESRj635WgGaq2b/lead-developer---vice-president-in-edinburgh-at-blackrock","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","Apache Airflow","Python","SQL","Data Modeling & Transformation","Data Quality & Testing","Infrastructure & Delivery","Security & Governance"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:17:57.133Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Edinburgh"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"AWS, Apache Airflow, Python, SQL, Data Modeling & Transformation, Data Quality & Testing, Infrastructure & Delivery, Security & Governance"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8656c8b2-ca5"},"title":"Associate Director, R&D Resource & Capacity Management","description":"<p>This role is a key position at the intersection of several functions, leading the resource capacity planning process, an essential component for the Long-Range Plan, R&amp;D prioritization, and Annual Operating Process.</p>\n<p>The successful candidate will partner with R&amp;D functional representatives to develop and improve resourcing algorithms, providing expertise to ensure these accurately represent resource demand for the 
R&amp;D portfolio. They will also partner with the R&amp;D functions to execute the resource planning process for R&amp;D, and work with PMO and Portfolio Management Leads to drive an Integrated R&amp;D Portfolio Operations Model.</p>\n<p>Responsibilities include creating, documenting, and maintaining algorithms to translate project schedules into resource demand over time, supporting the resource planning process and related systems, offering guidance, templates, and expert support for all stakeholders using resource planning tools, and assisting in integrated business planning by providing R&amp;D resource needs aligned with key programs and departmental plans.</p>\n<p>The ideal candidate will have a minimum of a Bachelor&#39;s degree with 8+ years of experience in the pharmaceutical industry, with hands-on, comprehensive knowledge of resource capacity management. They will bring proficiency in Excel, experience using Planisware, strong analytical skills with a proven ability to perform complex data analyses to support resourcing decisions, past exposure to and an understanding of modeling and forecasting of portfolio execution and corresponding resource demand, and a demonstrated ability to develop strong partnerships to successfully operate in a highly matrix-based organisation.</p>\n<p>The annual base pay for this position ranges from $136,514 to $204,772, with eligibility for various incentives, including short-term incentive bonuses, equity-based awards, and commissions. 
Benefits offered include qualified retirement programs, paid time off, and health, dental, and vision coverage.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8656c8b2-ca5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Portfolio Management and Operations","sameAs":"https://astrazeneca.eightfold.ai","logo":"https://logos.yubhub.co/astrazeneca.eightfold.ai.png"},"x-apply-url":"https://astrazeneca.eightfold.ai/careers/job/563877689948062","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$136,514 - $204,772","x-skills-required":["Resource capacity management","Planisware","Excel","Data analysis","Project scheduling"],"x-skills-preferred":["MBA","MS","MA","PowerBI","Advanced analytical tools"],"datePosted":"2026-04-24T14:17:31.363Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Boston, Massachusetts, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Healthcare","skills":"Resource capacity management, Planisware, Excel, Data analysis, Project scheduling, MBA, MS, MA, PowerBI, Advanced analytical tools","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":136514,"maxValue":204772,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e964c530-195"},"title":"Associate Director, Data and Analytics","description":"<p>Some careers have more impact than others. 
If you’re looking for a career where you can make a real impression, join HSBC and discover how valued you’ll be.</p>\n<p>We are currently seeking an experienced professional to join our team in the role of Associate Director, Data and Analytics.</p>\n<p>Principal responsibilities:</p>\n<ul>\n<li>Own the end-to-end technical design and delivery of Decision Hub capabilities (eligibility, arbitration, offering services, monitoring, APIs, etc.).</li>\n<li>Champion and lead the adoption of AI-assisted development practices (e.g., GitHub Copilot, generative AI for code/test generation) to accelerate delivery, improve code quality, and foster a culture of innovation.</li>\n<li>Spearhead the integration of Generative AI, adaptive models, and other emerging AI technologies into the platform&#39;s core capabilities, moving beyond traditional ML models to create a truly intelligent system.</li>\n<li>Collaborate with Product, Solution Architecture, Data, and Security to ensure compliance with regulatory controls, data lineage, and auditability.</li>\n<li>Drive performance, scalability, and resilience design; validate via capacity/performance testing and production dry runs.</li>\n<li>Translate business requirements into extensible, maintainable technical solutions and reference designs for multi-market reuse.</li>\n<li>Lead technical design and code reviews, setting the standard for high-quality, efficient, and maintainable code.</li>\n<li>Hands-on implementation and troubleshooting; unblock teams during critical incidents and deployment windows.</li>\n<li>Coach and grow engineering capability; set engineering practices, quality metrics, and a high-performance culture.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>8+ years in software engineering with 3+ years in technical lead or architect roles on distributed, real-time systems.</li>\n<li>Proven experience designing and delivering decisioning, orchestration, or real-time personalization platforms (or equivalent large-scale 
event/streaming systems).</li>\n<li>Strong cloud design and development experience (GCP preferred; AWS/Azure acceptable) and familiarity with multi-cloud/on-prem tradeoffs.</li>\n<li>Hands-on experience with the end-to-end machine learning lifecycle (MLOps), from model integration and deployment to performance monitoring and feedback loops.</li>\n<li>A strong passion for and practical experience with leveraging AI development tools (e.g., GitHub Copilot, CodeWhisperer) and embedding them into team workflows.</li>\n<li>Deep knowledge of performance engineering, capacity planning, fault tolerance, and observability (APM, metrics, tracing, alerting).</li>\n<li>Hands-on with modern engineering practices: microservices, APIs, CI/CD, infra as code, automated testing, security controls.</li>\n<li>Excellent stakeholder skills: ability to translate complex technical concepts for business partners and influence product and delivery decisions.</li>\n<li>Strong mentoring and team leadership track record.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e964c530-195","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC Software Development (GuangDong) Limited","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610760348","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["software engineering","technical lead","architect","distributed systems","real-time systems","cloud design","GCP","AWS","Azure","machine learning","MLOps","AI development tools","GitHub Copilot","CodeWhisperer","performance engineering","capacity planning","fault tolerance","observability","microservices","APIs","CI/CD","infra as code","automated testing","security 
controls"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:17:15.315Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Guangzhou"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"software engineering, technical lead, architect, distributed systems, real-time systems, cloud design, GCP, AWS, Azure, machine learning, MLOps, AI development tools, GitHub Copilot, CodeWhisperer, performance engineering, capacity planning, fault tolerance, observability, microservices, APIs, CI/CD, infra as code, automated testing, security controls"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d34ee930-f5c"},"title":"Cloud Platform Engineer","description":"<p>We are seeking a Cloud Platform Engineer to join our team. As a Cloud Platform Engineer, you will be responsible for designing and implementing cloud-native database infrastructure using Terraform /Ansible to provision managed DB instances in multi-clouds (RDS/Azure DB /Cloud SQL) and self-managed clusters.</p>\n<p>You will also be responsible for automating Configuration Management, security hardening, and patching of database instances across all environments. Automate workflows to reduce manual effort and improve reliability.</p>\n<p>In addition, you will develop internal tools and scripts (Python/Bash) to enable production support teams to manage their own database instances and environments safely. 
Develop scripts for routine operational tasks like backups, health checks, etc.</p>\n<p>You will integrate advanced observability platforms (Dynatrace, CloudWatch) with AIOps tools to establish SLOs and train models for anomaly detection and proactive forecasting of database degradation (like predicting slow queries or imminent connection pool exhaustion).</p>\n<p>You will design, deploy, and govern AI-powered agents (using Azure Copilot/AWS Bedrock) to achieve autonomous self-healing capabilities and automated resource management.</p>\n<p>You will implement advanced monitoring (CloudWatch, Dynatrace) for key database metrics (SLIs/SLOs) like latency, throughput, error rates, and connection pools. Develop and train predictive ML models to analyze historical telemetry and forecast potential system outages or performance bottlenecks, and configure proactive monitoring and alerting for critical services.</p>\n<p>You will respond to alerts and create self-healing actions based on alerts.</p>\n<p>You will design and implement cross-region/multi-AZ replication, automated failover strategies, and point-in-time recovery (PITR) procedures for mission-critical databases. Disaster recovery planning and DR drills.</p>\n<p>You will execute backup strategies and validate recovery procedures using Rubrik, and perform restores as needed.</p>\n<p>You will work closely with application operations/production support teams to troubleshoot issues at the database layer (performance, locks, schema) and the platform layer (multi-cloud/middleware/network, resource limits) to find the root causes.</p>\n<p>You will lead incident response and root cause analysis (RCA) for database outages, performance degradations, and data integrity issues. 
Collaborate with DBAs and application teams for root cause analysis.</p>\n<p>You will implement AI tools to perform real-time Root Cause Analysis (RCA), correlate complex event data (logs, metrics) and auto-generate runbooks.</p>\n<p>You will define and automate scaling strategies (read replicas, sharding, auto-scaling) based on predicted load and business growth. Provide input for capacity planning and resource optimization.</p>\n<p>You will implement cost management policies, including rightsizing instances, managing storage tiers, and defining lifecycle rules for backups and snapshots.</p>\n<p>You will proactively analyze query performance, index usage, and database configuration, making and automating changes to optimize throughput and reduce latency. Support DBA teams in performance tuning initiatives.</p>\n<p>You will implement robust secrets management solutions (AWS Secrets Manager, HashiCorp Vault) for database credentials, ensuring applications retrieve secrets securely at runtime.</p>\n<p>You will define and enforce least-privilege access policies (IAM roles, service accounts) for databases.</p>\n<p>You will implement encryption and data masking policies as directed.</p>\n<p>You will manage security and compliance by utilizing AI agents to detect configuration drift and auto-generate compliant updates for IAM, network, and security policies.</p>\n<p>You will apply patches and perform upgrades in coordination with DBA teams. 
Validate post-upgrade functionality and compliance.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d34ee930-f5c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Capgemini","sameAs":"https://www.capgemini.com/us-en/about-us/who-we-are/","logo":"https://logos.yubhub.co/capgemini.com.png"},"x-apply-url":"https://jobs.workable.com/view/aNTGp9AN6h4GPQ6Vrak2GZ/hybrid-cloud-platform-engineer-in-pune-at-capgemini","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Oracle","DB2","MSSQL","Snowflake","PostgreSQL","MySQL","Terraform","Ansible","Python","Bash","Dynatrace","CloudWatch","Azure Copilot","AWS Bedrock","Rubrik","AI/ML","Cloud Native","Database Administration","Configuration Management","Security Hardening","Patching","Observability Platforms","AIOps Tools","Autonomous Self-Healing","Resource Management","Advanced Monitoring","Predictive ML Models","Proactive Monitoring","Alerting","Cross-Region/Multi-AZ Replication","Automated Failover Strategies","Point-in-Time Recovery","Disaster Recovery Planning","DR Drills","Backup Strategies","Recovery Procedures","Application Operations","Production Support Teams","Root Cause Analysis","Incident Response","AI Tools","Runbooks","Scaling Strategies","Capacity Planning","Resource Optimization","Cost Management Policies","Rightsizing Instances","Storage Tiers","Lifecycle Rules","Query Performance","Index Usage","Database Configuration","Secrets Management Solutions","Least-Privilege Access Policies","Encryption","Data Masking Policies","Security Compliance","Configuration Drift","Compliant 
Updates"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:17:12.465Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Oracle, DB2, MSSQL, Snowflake, PostgreSQL, MySQL, Terraform, Ansible, Python, Bash, Dynatrace, CloudWatch, Azure Copilot, AWS Bedrock, Rubrik, AI/ML, Cloud Native, Database Administration, Configuration Management, Security Hardening, Patching, Observability Platforms, AIOps Tools, Autonomous Self-Healing, Resource Management, Advanced Monitoring, Predictive ML Models, Proactive Monitoring, Alerting, Cross-Region/Multi-AZ Replication, Automated Failover Strategies, Point-in-Time Recovery, Disaster Recovery Planning, DR Drills, Backup Strategies, Recovery Procedures, Application Operations, Production Support Teams, Root Cause Analysis, Incident Response, AI Tools, Runbooks, Scaling Strategies, Capacity Planning, Resource Optimization, Cost Management Policies, Rightsizing Instances, Storage Tiers, Lifecycle Rules, Query Performance, Index Usage, Database Configuration, Secrets Management Solutions, Least-Privilege Access Policies, Encryption, Data Masking Policies, Security Compliance, Configuration Drift, Compliant Updates"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68020f5c-a34"},"title":"Insider Risk & Security Associate","description":"<p>We&#39;re looking for an Associate to join our Insider Risk &amp; Security team, reporting to the Director of Financial Crime Risk. 
This role will support the delivery of the Bank&#39;s insider threat and protective security programmes, primarily focusing on insider investigations.</p>\n<p>Responsibilities will include: Managing a caseload of sensitive internal investigations, investigating potential wrongdoing; Supporting with evidence gathering, including interviews as appropriate; Liaising with senior stakeholders across the business on sensitive matters; Contributing to the development of controls, training, policies and risk assessments; Actively monitoring for anomalies through data-driven alerts; Liaising with Law Enforcement and Intelligence partners regarding criminality or threats to the security of the Bank and staff; Attending court as a signatory or witness, on behalf of the business, for both economic crime and employment matters; Supporting the Protective Security team with the management of the Bank&#39;s internal crisis management and communications platform.</p>\n<p>Requirements include: demonstrable awareness of insider risk and security best practice (NPSA); experience working in an investigative capacity within financial services or law enforcement, highly beneficial; experience in evidence-gathering and interviewing within law enforcement or financial services, highly beneficial; previous experience in drafting witness statements, desirable but not essential; previous experience in employment tribunals or court attendance, desirable but not essential; previous experience in drafting policies, procedures or training content within financial services, beneficial but not essential.</p>\n<p>Benefits include: 25 days holiday (plus take your public holiday allowance whenever it works best for you); An extra day’s holiday for your birthday; Annual leave is increased with length of service, and you can choose to buy or sell up to five extra days off; 16 hours paid volunteering time a year; Salary sacrifice, company enhanced pension scheme; Life insurance at 4x your salary &amp; group income 
protection; Private Medical Insurance with VitalityHealth including mental health support and cancer care.</p>\n<p>About us: We&#39;re an equal opportunity employer, and we&#39;re proud of our ongoing efforts to foster diversity &amp; inclusion in the workplace. Individuals seeking employment at Starling are considered without regard to race, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, physical or mental disability, military or veteran status, or any other characteristic protected by applicable law.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_68020f5c-a34","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Starling","sameAs":"https://www.starlingbank.com/","logo":"https://logos.yubhub.co/starlingbank.com.png"},"x-apply-url":"https://apply.workable.com/j/40004EF7C9","x-work-arrangement":"onsite","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["insider risk and security best-practice","investigative capacity","evidence-gathering","interviewing","drafting witness statements","employment tribunals","court attendance","drafting policies","procedures","training content"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:15:55.653Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Southampton"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"insider risk and security best-practice, investigative capacity, evidence-gathering, interviewing, drafting witness statements, employment tribunals, court attendance, drafting policies, procedures, training 
content"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dc829fe4-248"},"title":"Vice President, Corporate Risk Solutions","description":"<p>In compliance with applicable laws, HSBC is committed to employing only those who are authorised to work in the US. As Vice President, Corporate Risk Solutions, you will originate and execute strategic risk management solutions for corporate and private capital clients, primarily linked to capital markets, acquisition and financing-related events, and interest rate risk. You will service priority clients at the senior Treasurer and C-Suite level, developing trusted relationships and delivering high-quality client outcomes.</p>\n<p>Your key responsibilities will include:</p>\n<ul>\n<li>Originating and executing cross-asset risk management solutions predominantly related to strategic/financing events</li>\n<li>Performing and/or overseeing financial analysis to support hedging recommendations, including capital structure/hedge capacity assessment and modelling, stress testing and scenario analysis of market risk factors impacting corporate finance and credit metrics</li>\n<li>Assessing client credit metrics, rating agency considerations and hedge accounting standards as they relate to risk management transactions</li>\n<li>Supporting execution by coordinating internal stakeholders and preparing clear, client-ready materials (PowerPoint) that explain transactions in a concise, non-technical way</li>\n</ul>\n<p>You will likely have the following qualifications to succeed in this role:</p>\n<ul>\n<li>Experience originating and executing cross-asset risk management solutions predominantly related to strategic/financing events</li>\n<li>Strong understanding of hedging solutions and market dynamics; experience with financial sponsors is preferred</li>\n<li>Strong understanding of common financing structures/products (e.g., acquisition financing) and associated 
hedging solutions</li>\n<li>Advanced proficiency in Microsoft Excel and PowerPoint; VBA (Visual Basic for Applications) is a plus</li>\n<li>FINRA Series 63, 79, and 7 licenses required. A contingency period to obtain licensing can be provided at manager&#39;s discretion</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dc829fe4-248","directApply":true,"hiringOrganization":{"@type":"Organization","name":"HSBC","sameAs":"https://portal.careers.hsbc.com","logo":"https://logos.yubhub.co/portal.careers.hsbc.com.png"},"x-apply-url":"https://portal.careers.hsbc.com/careers/job/563774610684041","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["risk management","financial analysis","hedging recommendations","capital structure","hedge capacity assessment","modelling","stress testing","scenario analysis","market risk factors","corporate finance","credit metrics","rating agency considerations","hedge accounting standards"],"x-skills-preferred":["Microsoft Excel","PowerPoint","VBA","FINRA Series 63","FINRA Series 79","FINRA Series 7"],"datePosted":"2026-04-24T14:14:32.337Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"risk management, financial analysis, hedging recommendations, capital structure, hedge capacity assessment, modelling, stress testing, scenario analysis, market risk factors, corporate finance, credit metrics, rating agency considerations, hedge accounting standards, Microsoft Excel, PowerPoint, VBA, FINRA Series 63, FINRA Series 79, FINRA Series 7"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c452765-84f"},"title":"Site Reliability Data 
Engineer","description":"<p>For over 31,000 growing businesses and HR teams seeking a comprehensive, all-in-one HR suite, Workable emerges as the premier solution. We uniquely combine the world&#39;s most widely adopted Applicant Tracking System (Workable Recruiting) with a full-spectrum employee management system (Workable HR).</p>\n<p>At Workable, we empower companies to focus on what truly matters: hiring the right people and fostering their growth. While we take HR seriously, we maintain a lighthearted and collaborative culture. At Workable, you&#39;ll find smart people who have fun, learn, innovate, and help others do the same.</p>\n<p>We respect everyone, we hire the best, and make sure every experience is special.</p>\n<p>As a Site Reliability Data Engineer based in Athens, you will play a critical role in ensuring the reliability, scalability, and performance of our data infrastructure and pipelines. You will collaborate closely with engineering teams to build and operate robust cloud-based systems, driving automation and observability across our platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Build, operate, and improve ETL/ELT pipelines, Spark workloads, and data warehouse components.</li>\n<li>Develop tools and automations to simplify and harden data pipeline workflows and general operations.</li>\n<li>Design, implement, and maintain scalable, highly available cloud infrastructure and services with a focus on automation and reliability.</li>\n<li>Develop and operate observability tooling for monitoring, logging, tracing, and data-pipeline metrics (freshness, completeness, latency, error rates).</li>\n<li>Collaborate with development teams to instrument, deploy, and troubleshoot production systems across microservices on Kubernetes.</li>\n<li>Operate, deploy, and monitor data infrastructure and cloud services from development to production.</li>\n<li>Own availability, scalability, and performance of systems, focusing on data pipelines and 
warehousing components.</li>\n<li>Partner with peer SREs to roll out production changes and mitigate data-related and infrastructure incidents.</li>\n<li>Troubleshoot issues across data pipelines and production systems; support capacity planning and analyze system and data workflow performance.</li>\n<li>Provide data engineering expertise to engineering teams and work cross-functionally with developers and analysts on designing, releasing, and troubleshooting production systems.</li>\n<li>Own team projects and ensure timely delivery.</li>\n</ul>\n<p>Requirements</p>\n<ul>\n<li>BS/MS degree in Computer Science, Engineering, or equivalent practical experience</li>\n<li>2+ years of experience in site reliability engineering, data engineering, or a closely related role, including programming</li>\n<li>Experience with a major cloud provider (AWS or GCP)</li>\n<li>Hands-on experience with infrastructure-as-code or configuration management tools (Terraform or Ansible)</li>\n<li>Experience with ETL/ELT concepts and tools (Airflow or dbt)</li>\n<li>Experience with Apache Spark or similar distributed data processing frameworks</li>\n<li>Experience with cloud data warehouses (BigQuery, Redshift, or Snowflake)</li>\n<li>Proficiency in at least one programming language (Python, Go, or Scala)</li>\n<li>Excellent written English proficiency</li>\n<li>Legally authorized to work in Greece</li>\n</ul>\n<p>Preferred Qualifications</p>\n<ul>\n<li>Production experience with Kubernetes</li>\n<li>Experience with centralized monitoring and logging systems</li>\n<li>Experience with streaming systems (Kafka or Spark Streaming)</li>\n</ul>\n<p>Benefits</p>\n<p>Our employees enjoy benefits that make them more productive and contribute directly to the development of their professional skills. We want to be able to attract the best of the best and make sure they keep getting better. 
On top of an exciting, vibrant and intellectually challenging environment, we offer:</p>\n<ul>\n<li>Comprehensive Health Coverage: A robust health insurance plan that includes coverage for your dependents.</li>\n<li>Competitive Compensation: An attractive salary paired with a performance-based bonus plan.</li>\n<li>Flexible Work Model: Enjoy the best of both worlds with a hybrid setup: two days working from home and three in the office.</li>\n<li>Top-Tier Tools: Apple gear and access to the latest productivity tools to help you excel.</li>\n<li>Stay Connected: A mobile data plan to keep you online wherever you are.</li>\n<li>Delicious Perks: Fresh, tasty food at the office to fuel your productivity.</li>\n<li>Relocation Bonus: To help you settle in smoothly in Athens.</li>\n</ul>\n<p>Workable is most decidedly an equal opportunity employer. We want applicants of diverse backgrounds and hire without regard to colour, gender, religion, national origin, citizenship, disability, age, sexual orientation, or any other characteristic protected by law.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2c452765-84f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Workable"},"x-apply-url":"https://apply.workable.com/j/273C8E852D","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Cloud computing","Data engineering","ETL/ELT","Apache Spark","Cloud data warehouses","Kubernetes","Infrastructure-as-code","Configuration management","Observability tooling","Monitoring","Logging","Tracing","Data-pipeline metrics"],"x-skills-preferred":["Production experience with Kubernetes","Centralized monitoring and logging systems","Streaming systems (Kafka or Spark 
Streaming)"],"datePosted":"2026-04-24T14:14:22.101Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Athens"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Cloud computing, Data engineering, ETL/ELT, Apache Spark, Cloud data warehouses, Kubernetes, Infrastructure-as-code, Configuration management, Observability tooling, Monitoring, Logging, Tracing, Data-pipeline metrics, Production experience with Kubernetes, Centralized monitoring and logging systems, Streaming systems (Kafka or Spark Streaming)"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8bd53be2-6cf"},"title":"Senior Site Reliability Data Engineer","description":"<p>For over 31,000 growing businesses and HR teams seeking a comprehensive, all-in-one HR suite, Workable emerges as the premier solution. We uniquely combine the world’s most widely adopted Applicant Tracking System (Workable Recruiting) with a full-spectrum employee management system (Workable HR).</p>\n<p>At Workable, we empower companies to focus on what truly matters: hiring the right people and fostering their growth. While we take HR seriously, we maintain a lighthearted and collaborative culture. At Workable, you’ll find smart people who have fun, learn, innovate, and help others do the same.</p>\n<p>We respect everyone, we hire the best, and make sure every experience is special.</p>\n<p>As a Senior Site Reliability Data Engineer based in Athens, Greece, you will play a critical role in ensuring the reliability, scalability, and performance of Workable&#39;s data and cloud infrastructure. 
This is a high-impact position where your expertise will directly influence the operational excellence and growth of our data platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Design, build, and maintain core data engineering infrastructure including ETL/ELT pipelines, Apache Spark workloads, and data warehouse systems.</li>\n<li>Ensure availability, scalability, and performance of data infrastructure and pipelines with deep operational ownership.</li>\n<li>Design, implement, and maintain scalable reliability tooling and automation to streamline deployment, monitoring, and incident response across distributed services.</li>\n<li>Operate and optimize Kubernetes-based cloud infrastructure to ensure high availability, performance, and cost-efficiency.</li>\n<li>Partner cross-functionally with developers and analysts to design, release, and troubleshoot production systems; provide data engineering expertise.</li>\n<li>Lead cross-functional projects with development teams to improve system reliability, automate capacity planning, and enforce SRE best practices.</li>\n<li>Develop and maintain centralized observability, including logging, metrics, tracing, and alerting pipelines; continuously improve incident detection and response workflows.</li>\n<li>Own observability for data pipelines (freshness, completeness, latency, error rates) and ensure SLOs are met.</li>\n<li>Plan platform growth and manage capacity for the data platform and related infrastructure.</li>\n<li>Operate, deploy, and monitor data platform components and broader cloud services from development through production.</li>\n<li>Develop tools and automation to simplify data operations and make deployments more robust and self-service.</li>\n<li>Collaborate with peer SREs to roll out production changes and mitigate data/infrastructure 
incidents.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8bd53be2-6cf","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Workable"},"x-apply-url":"https://apply.workable.com/j/22CEAF6027","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","ETL/ELT pipelines","cloud data warehouses","major cloud provider","infrastructure automation tools","centralized logging","monitoring","observability frameworks"],"x-skills-preferred":["production experience with Kubernetes","streaming systems","data quality","data observability tooling","relational and NoSQL databases","proficiency in programming languages","deep knowledge of Linux systems"],"datePosted":"2026-04-24T14:13:36.167Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Athens"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, ETL/ELT pipelines, cloud data warehouses, major cloud provider, infrastructure automation tools, centralized logging, monitoring, observability frameworks, production experience with Kubernetes, streaming systems, data quality, data observability tooling, relational and NoSQL databases, proficiency in programming languages, deep knowledge of Linux systems"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_af746432-e09"},"title":"VP, Senior Full-Stack Engineer (Java & Angular)","description":"<p>Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and tackling some of the world&#39;s most interesting challenges? 
At BlackRock, we are looking for Software Engineers who like to innovate and solve complex problems.</p>\n<p>We recognize that strength comes from diversity, and will embrace your unique skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual.</p>\n<p>Aladdin by BlackRock manages over $30 trillion (USD) in assets, and its engineers have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to achieve their investment objectives, save for retirement, pay for college, buy a home, and improve their financial well-being.</p>\n<p>This role will be responsible for all aspects of software development, testing and ensuring compatibility with enterprise and solutions architecture by harnessing modern development technologies.</p>\n<p>The position is for a Vice President on the Investment and Trading engineering team within Aladdin Engineering and is responsible for delivering software solutions leveraged by Portfolio Managers, Traders, Researchers, Risk Managers, Compliance Officers and Investment Operations.</p>\n<p>We are passionate about building quality software and scalable technology to meet the needs of tomorrow. We have strong Java expertise and work with a range of technologies such as Azure cloud, Kafka, Cassandra, Docker, Kubernetes, Angular and many others. We are committed to open source, and contributing back to the community. We write testable software every day, with a focus on agile innovation.</p>\n<p>The team is looking for an ambitious hands-on senior software engineer to work on an exciting strategic product to expand our Aladdin Portfolio Management capabilities. You will work with a global team as part of an outstanding group of engineers setting and evolving the technology direction of our upcoming suite of applications for Portfolio Management. 
You will be passionate about multiple aspects of enterprise software development: performance, scale, resilience, usability, and maintainability. As a key member of our engineering team, you will be encouraged and empowered to bring your ideas forward to help shape the technical solutions, becoming a strong team player in our distributed and diverse global team. You will also have opportunities to present your innovative ideas to leaders across the firm.</p>\n<p>Responsibilities include:</p>\n<ul>\n<li>Develop and maintain institutional-grade investment functionalities used by portfolio managers</li>\n<li>Help design and build the next generation of our world-class investment platform</li>\n<li>Contribute to an agile development team working with designers, product managers, and users</li>\n<li>Quality-first mindset: apply quality software engineering practices through all phases of development and into production</li>\n<li>Collaborate with team members in a multi-office, multi-country, global team environment.</li>\n<li>Ensure resilience, stability, and high performance of software delivery through quality code reviews; unit, regression and user acceptance testing; DevOps; and level-two production support.</li>\n<li>Nurture the talent around you and lead by example.</li>\n<li>In this senior position, others will look to you to drive an inclusive and competitive culture in the team.</li>\n</ul>\n<p>Competencies include:</p>\n<ul>\n<li>Passionate about technology and user experience, with personal ownership for the work you do</li>\n<li>Curious and eager to learn new business domains and tech skills, and willing to challenge the status quo</li>\n<li>Know how to leverage AI tools to increase your productivity</li>\n<li>Willing to embrace work outside of your comfort zone, and open to guidance from 
others</li>\n<li>Data and quality focused, with an eye for the details that make great solutions</li>\n<li>Always willing to learn from issues and incidents and to continuously improve</li>\n<li>Experienced working in either Portfolio Management or Trading segments</li>\n<li>Knowledgeable in Trading, Equity, FI, OTC, Exchange Traded Derivatives, Prime Brokerage, Compliance, and Portfolio Management processes.</li>\n</ul>\n<p>Experience and Qualifications:</p>\n<ul>\n<li>Designed and engineered enterprise financial solutions in production with a strong foundation in Java and related technologies</li>\n<li>Experience with distributed caching &amp; computing, real-time, and highly scalable technologies (such as Apache Ignite, Kafka, Redis) and modern front-end web development (such as Micro-frontends, Web-streaming, Angular/React, TypeScript).</li>\n<li>Passionate about creating the best user experience</li>\n<li>B.E. or M.S. 
degree in Computer Science, Engineering or a related discipline</li>\n<li>Excellent analytical, problem-solving and communication skills</li>\n<li>An ability to apply modern tech solutions to solve investment and trading problems</li>\n<li>A track record of forging strong relationships and building trusted partnerships through open dialogue and continuous delivery</li>\n<li>Experience working with UX designers, product managers, technical/enterprise leads, and architects across the SDLC; understanding of systems requirements, design, development, testing, deployment and documentation</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Certification (e.g., CFA) or a passion for investment/portfolio management/trading processes</li>\n<li>Experience with MSSQL or Apache Cassandra Database</li>\n<li>Experience with Cloud platforms such as Microsoft Azure</li>\n<li>Experience with AI models and tools</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_af746432-e09","directApply":true,"hiringOrganization":{"@type":"Organization","name":"BlackRock","sameAs":"https://www.blackrock.com","logo":"https://logos.yubhub.co/blackrock.com.png"},"x-apply-url":"https://jobs.workable.com/view/65fGJ5np3dAFaJEGL4T3Py/vp%2C-senior-full-stack-engineer-(java-%26amp%3B-angular)-in-london-at-blackrock","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Angular","Azure cloud","Kafka","Cassandra","Docker","Kubernetes","Micro-frontends","Web-streaming","Type Script","Apache 
Ignite","Redis","UI/UX","APIs","gRPC","Proto-buffs","Spring","Node.JS"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:12:44.479Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Java, Angular, Azure cloud, Kafka, Cassandra, Docker, Kubernetes, Micro-frontends, Web-streaming, Type Script, Apache Ignite, Redis, UI/UX, APIs, gRPC, Proto-buffs, Spring, Node.JS"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_45fb1c5c-dbe"},"title":"Supply Chain Manager","description":"<p>We&#39;re looking for a dynamic Supply Chain Manager to join Biffa Polymers in Redcar, leading the end-to-end operation across procurement, warehousing, and logistics.</p>\n<p>You will provide operational leadership for the Supply Chain function, coordinating materials, warehousing, and fulfilment activities at Biffa Polymers Redcar and ensuring production schedules, inventory levels, and customer requirements are consistently met. 
This role combines strategic oversight and operational accountability, aligning procurement, warehousing, inventory, and transport to deliver business objectives.</p>\n<p>Your core responsibilities will include:</p>\n<ul>\n<li>Leading end-to-end supply chain operations, owning feedstock procurement from internal Biffa sites (MRFs/PRFs) and external suppliers, ensuring quality, cost-effectiveness, and alignment with production and customer demand</li>\n<li>Partnering with Commercial teams to coordinate customer deliveries, ensuring accuracy across documentation, compliance, and scheduling</li>\n<li>Taking ownership of New Product Introduction (NPI) activities and customer trials, acting as the central point of contact to ensure operational readiness across materials, production, and fulfilment</li>\n<li>Collaborating cross-functionally to deliver operational plans, proactively managing service performance, risks, and issues to meet and exceed contractual commitments</li>\n<li>Monitoring and driving Supply Chain KPIs, using data and performance metrics to improve service levels and operational efficiency</li>\n<li>Championing continuous improvement initiatives, identifying opportunities for cost reduction, process standardisation, and enhanced service delivery</li>\n<li>Owning the non-conformance (NCR) process, leading root cause analysis and implementing corrective and preventative actions with internal teams and external partners</li>\n<li>Supporting strategic supply chain planning, including ERP system integrity, capacity forecasting, team development, and active participation in health and safety initiatives</li>\n</ul>\n<p>Our essential requirements include:</p>\n<ul>\n<li>A degree-level qualification in Supply Chain or a related field</li>\n<li>Advanced IT skills across Microsoft Office (Word, Excel, Outlook, PowerPoint, Access)</li>\n<li>A full, current UK driving licence</li>\n<li>Minimum 5 years&#39; experience in a materials planning, fulfilment, supply 
chain, or operations management role</li>\n<li>Professional supply chain qualification (desirable)</li>\n<li>NVQ Level 3 in Management (desirable)</li>\n<li>Previous experience within the recycling industry (desirable)</li>\n</ul>\n<p>And here&#39;s why you&#39;ll love it at Biffa:</p>\n<ul>\n<li>Competitive salary</li>\n<li>Ongoing career development, training and coaching – because if you don’t grow, we don’t grow</li>\n<li>Generous pension scheme</li>\n<li>Retail and leisure discounts</li>\n<li>Holiday and travel discounts</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_45fb1c5c-dbe","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Biffa Polymers","sameAs":"https://www.biffa.co.uk/","logo":"https://logos.yubhub.co/biffa.co.uk.png"},"x-apply-url":"https://apply.workable.com/j/B74C8E1DC6","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Supply Chain Management","Procurement","Warehousing","Logistics","Microsoft Office","ERP System Integrity","Capacity Forecasting","Team Development","Health and Safety"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:11:36.261Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redcar"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Manufacturing","skills":"Supply Chain Management, Procurement, Warehousing, Logistics, Microsoft Office, ERP System Integrity, Capacity Forecasting, Team Development, Health and Safety"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7caba9c0-9a1"},"title":"Data & Cloud Engineer (H/F)","description":"<p>We are looking for a Data &amp; Cloud Engineer to join our team. 
As a Data &amp; Cloud Engineer, you will be responsible for developing and implementing technical solutions to transform data into actionable insights. You will work closely with our clients to understand their data needs and develop tailored solutions to meet those needs.</p>\n<p>Our team uses a range of technologies including Python, SQL, Docker, Kubernetes, and Apache Airflow. We are looking for someone with strong technical skills and experience working with cloud-based technologies.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Develop and implement technical solutions to transform data into actionable insights</li>\n<li>Work closely with clients to understand their data needs and develop tailored solutions to meet those needs</li>\n<li>Collaborate with cross-functional teams to ensure seamless delivery of projects</li>\n<li>Stay up-to-date with industry trends and emerging technologies</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>2 years of experience in data engineering</li>\n<li>Strong technical skills in Python, SQL, Docker, Kubernetes, and Apache Airflow</li>\n<li>Experience working with cloud-based technologies</li>\n<li>Strong communication and collaboration skills</li>\n<li>Ability to work in a fast-paced environment</li>\n</ul>\n<p>Preferred qualifications:</p>\n<ul>\n<li>Experience working with big data technologies such as Hadoop and Spark</li>\n<li>Knowledge of data warehousing and business intelligence tools</li>\n<li>Experience with data visualization tools such as Tableau and Power BI</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and benefits package</li>\n<li>Opportunity to work with a leading data company</li>\n<li>Collaborative and dynamic work environment</li>\n<li>Professional development opportunities</li>\n<li>Flexible working hours and remote work options</li>\n</ul>\n<p>If you are a motivated and experienced data engineer looking for a new challenge, please submit your application. 
We look forward to hearing from you!</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7caba9c0-9a1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"fifty-five","sameAs":"https://www.fifty-five.com/","logo":"https://logos.yubhub.co/fifty-five.com.png"},"x-apply-url":"https://jobs.workable.com/view/c6JDDgc6oq5eBJSqCegVw5/hybrid-data-%26-cloud-engineer-(h%2Ff)-in-paris-at-fifty-five","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","SQL","Docker","Kubernetes","Apache Airflow","Cloud-based technologies"],"x-skills-preferred":["Big data technologies","Data warehousing and business intelligence tools","Data visualization tools"],"datePosted":"2026-04-24T14:10:35.818Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, Docker, Kubernetes, Apache Airflow, Cloud-based technologies, Big data technologies, Data warehousing and business intelligence tools, Data visualization tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5558189c-8cd"},"title":"Software Engineer","description":"<p><strong>About the Role</strong></p>\n<p>As a Software Engineer on the Storage team at Cursor, you&#39;ll own the data layer that underpins every product surface: the databases, caches, and the strategy for how teams provision, query, and scale their data stores.</p>\n<p>Millions of developers depend on Cursor every day, and the future of our storage architecture is one of the highest-leverage problems at the company: get it right, and every team ships faster, every product surface gets more reliable, and Cursor can scale to meet explosive 
demand. You&#39;ll design and execute the path to a robust, multi-database topology built for that growth.</p>\n<p><strong>Example projects include...</strong></p>\n<ul>\n<li>Designing the next-generation data architecture: evolving our storage layer into a partitioned, resilient topology that keeps pace with Cursor&#39;s rapid growth.</li>\n</ul>\n<ul>\n<li>Building query attribution and guardrails: instrumenting every database query by service, catching bad patterns before they hit production, and making it impossible to ship problematic queries without review.</li>\n</ul>\n<ul>\n<li>Defining the &#39;when to use what&#39; strategy for data stores: creating clear guidance and golden pathways so every team picks the right engine for their workload without second-guessing.</li>\n</ul>\n<ul>\n<li>Owning cache infrastructure end-to-end: reliability, capacity planning, and patterns that let product teams move fast without worrying about cache correctness.</li>\n</ul>\n<p><strong>You may be a fit if</strong></p>\n<ul>\n<li>You have deep experience with relational databases at scale, especially Postgres, MySQL, or similar OLTP systems.</li>\n</ul>\n<ul>\n<li>You&#39;ve tackled database sharding, migration, or decomposition problems in production environments.</li>\n</ul>\n<ul>\n<li>You understand the tradeoffs between different storage engines and can help teams make the right choices for their workloads.</li>\n</ul>\n<ul>\n<li>You care about operational excellence: backups, monitoring, query performance, and capacity planning are things you think about proactively.</li>\n</ul>\n<ul>\n<li>You have strong software engineering fundamentals and enjoy building systems that other engineers depend on.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5558189c-8cd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cursor","sameAs":"https://cursor.com","logo":"https://logos.yubhub.co/cursor.com.png"},"x-apply-url":"https://cursor.com/careers/software-engineer-storage","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Postgres","MySQL","relational databases","database sharding","migration","decomposition","storage engines","operational excellence","backups","monitoring","query performance","capacity planning"],"x-skills-preferred":[],"datePosted":"2026-04-24T14:09:43.224Z","jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Postgres, MySQL, relational databases, database sharding, migration, decomposition, storage engines, operational excellence, backups, monitoring, query performance, capacity planning"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88175146-7cc"},"title":"Development Director - Advanced Technology","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>\n<p>EA SPORTS is one of the most iconic brands in sports &amp; entertainment with nearly 30 years of innovation, passion, and connection of millions of players across the globe to their favourite sports, teams, and athletes. 
This is your opportunity to join this new team to shape the future of interactive entertainment and create the next great EA SPORTS game.</p>\n<p>The Advanced Technology Group is part of the EA SPORTS Tech organization, focused on developing the latest game features and engine enhancements in close collaboration with game and engine dev teams. You will be part of a team tackling a variety of technical challenges beginning from proof of concept to implementation for titles across EA. An essential priority for this group is to partner with the Frostbite team, our Game teams, and our central art team to create meaningful user-facing experiences and content workflow improvements. You will work on strategic, multi-year projects focused on our character technology for all EA SPORTS titles.</p>\n<p>This team works 3 days/week onsite in our Burnaby studio.</p>\n<p>Reporting to the Lead Development Director, a Development Director manages a variety of disciplines including artists, designers, and software engineers. DD&#39;s are the keepers of the project schedule and play an important role in successfully moving the development team from one project phase to the next while ensuring a focus on quality, collaboration and communication. You will partner with producers to ensure projects are managed on time, to quality, and within budget.</p>\n<p><strong>Your Responsibilities</strong></p>\n<ul>\n<li>Lead project planning and execution by defining scope, establishing realistic schedules, and maintaining clear priorities across multiple disciplines. Ensure project plans reflect quality targets, dependencies, risks, and resource requirements.</li>\n</ul>\n<ul>\n<li>Manage and develop high-performing teams by providing clear goals, coaching, feedback, and growth opportunities. 
Apply situational leadership to support career development and ensure team engagement and health.</li>\n</ul>\n<ul>\n<li>Oversee day-to-day project execution, monitoring progress, removing roadblocks, and driving alignment across internal teams and external development partners.</li>\n</ul>\n<ul>\n<li>Manage partner relationships (outsourcing, co-dev, centralized teams) by defining expectations, maintaining communication plans, ensuring clarity of roles and responsibilities, and monitoring delivery against agreed-upon milestones.</li>\n</ul>\n<ul>\n<li>Identify and mitigate risks across schedule, quality, scope, and resourcing in close partnership with discipline leads (e.g., Technical Lead, Art Lead, Producer). Proactively escalate critical risks and collaborate with these partners to develop and execute mitigation and contingency plans.</li>\n</ul>\n<ul>\n<li>Drive process adoption and continuous improvement by applying standard industry and EA methodologies, identifying gaps, and contributing to improvements in workflow, collaboration, and quality</li>\n</ul>\n<ul>\n<li>Contribute to hiring and talent planning by identifying staffing needs, participating in interviews, and supporting onboarding to ensure team readiness and long-term capability.</li>\n</ul>\n<p><strong>Your Qualifications</strong></p>\n<ul>\n<li>Minimum of 6 years of project management or production leadership experience, including 4+ years managing people and/or teams in a collaborative development environment.</li>\n</ul>\n<ul>\n<li>Proven ability to manage complex, multi-disciplinary projects using established methodologies such as Agile, Scrum, and Waterfall.</li>\n</ul>\n<ul>\n<li>Demonstrated experience leading teams, resolving conflicts, developing talent, and fostering a healthy, motivated work environment.</li>\n</ul>\n<ul>\n<li>Strong communication and relationship-building skills, capable of partnering effectively across teams, disciplines, and external development 
groups.</li>\n</ul>\n<ul>\n<li>Experience with project scheduling, capacity planning, and risk management.</li>\n</ul>\n<ul>\n<li>Bachelor’s degree or equivalent professional experience.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88175146-7cc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Development-Director-Advanced-Technology/213653","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$122,300 - $170,700 CAD","x-skills-required":["project management","team leadership","communication","relationship-building","scheduling","capacity planning","risk management"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:16:23.721Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"project management, team leadership, communication, relationship-building, scheduling, capacity planning, risk management","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":122300,"maxValue":170700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b8870690-5d6"},"title":"Sr. AI Engineer - Player Intelligence and Growth, Data & Insights (D&I)","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. The Data &amp; Insights (D&amp;I) team transforms data into actionable insights that power EA. 
We are hiring an AI Engineer to join the Player Intelligence &amp; Growth team within Data and Insights (D&amp;I), reporting to a Sr Manager. This team partners with all of EA&#39;s game studios to offer data science &amp; AI products and solutions. For this AI Engineer role we are looking for applied and practical AI/ML expertise with a focus on Gen AI Solutions.</p>\n<p>As a Sr. AI Engineer, you will help scale our internal AI-powered insights tool by partnering with analysts, product teams, marketing, and titles like EA SPORTS FC™, Apex Legends™, The Sims™, and Madden NFL. You will work directly with game teams/partners (internal clients) to understand their offerings/domain and create AI products and solutions to solve for their use cases. You will develop plans to generalize AI products across titles and review AI tools used within the team, providing guidance and being accountable for the success and the adoption of the project/product.</p>\n<p>You will implement feature enhancements for our AI-powered analytics tool using GCP services, LLMs, and our internal tech stack. You will engage with other Data Scientists, Data Analysts sharing best practices and help consult on cross-projects. You will design, improve and work with our data pipeline that transfers and processes petabytes of data using tools, such as: AWS, S3, Kubernetes, GCP, Python, Apache Kafka, Ruby &amp; Hive.</p>\n<p>We are looking for a hands-on engineer with practical experience building AI/ML-driven systems, evaluating emerging tools, and delivering impactful, reusable solutions across multiple domains. 
You will have a graduate degree in Computer Science, Engineering, AI/ML, or a related quantitative field and 4+ years of experience building AI, ML, or data-driven systems in production environments.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b8870690-5d6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Sr-AI-Engineer-Player-Intelligence-and-Growth-Data-Insights-D-I/211264","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$122,300 - $170,700 CAD","x-skills-required":["Python","SQL","GCP","LLMs","embeddings","retrieval systems","AI agents","CI/CD","microservices","cloud-native deployment patterns"],"x-skills-preferred":["AWS","S3","Kubernetes","Apache Kafka","Ruby & Hive"],"datePosted":"2026-04-24T13:16:11.540Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, SQL, GCP, LLMs, embeddings, retrieval systems, AI agents, CI/CD, microservices, cloud-native deployment patterns, AWS, S3, Kubernetes, Apache Kafka, Ruby & Hive","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":122300,"maxValue":170700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0999bb3f-a45"},"title":"Sr. Manager, Game Production and Operations","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. 
Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.</p>\n<p>We WANT YOUR BRAINS…</p>\n<p>PopCap is looking for brainy, skillful people with a passion for making the world’s best games. What’s in it for you? A super-fun environment, rewarding work, and great perks.</p>\n<p>We are seeking a Sr. Manager, Game Production and Operations to join the Plants vs. Zombies franchise, contributing to the continued development of Plants vs. Zombies 3. In this role, you will work as an operational leader, partnering across disciplines to deliver high-quality features and releases for a globally loved mobile game.</p>\n<p>Role Overview</p>\n<p>The Sr. Manager, Game Production and Operations plays a critical role in successfully moving the development team forward while ensuring a strong focus on efficiency, predictability, collaboration and communication. This is a key leadership role, directly managing the Development Director discipline as well as team-wide processes and solutions for how and when we deliver value to our players and the business. This role will report to the Head of Studio Operations.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Lead and manage a team of Development Directors. The Sr. 
Manager, Game Production and Operations sets the standard for operational excellence, motivates and drives the discipline, and sets clear expectations and goals for their team.</li>\n</ul>\n<ul>\n<li>Own and drive all operational aspects of the project, including establishing and optimizing development frameworks and process, reporting, risk management, capacity planning, resource management, organizational design, budget, and gate reviews.</li>\n</ul>\n<ul>\n<li>Partner directly with the Game Director and other discipline leaders to ensure health and excellence across all day to day operations and communications.</li>\n</ul>\n<ul>\n<li>Proactively remove roadblocks, resolve conflicts, push decision making, and drive continuous improvement and efficiency in how the team collaborates and executes across all disciplines.</li>\n</ul>\n<ul>\n<li>Be the point of contact for escalation issues, listen to all sides of a challenge and work with teams for quick and effective resolutions.</li>\n</ul>\n<ul>\n<li>Own and optimize the relationships and coordination with partners, vendors, and central EA teams to ensure the project team is supported in their goals.</li>\n</ul>\n<ul>\n<li>Identify risks and dependencies. 
Proactively execute mitigation and contingency plans to minimize impact.</li>\n</ul>\n<ul>\n<li>Provide clear and concise reporting inside and outside of the organization to keep everyone informed of progress, risks, and mitigations.</li>\n</ul>\n<ul>\n<li>Drive team and resource planning activities, anticipating skills, gaps and organizational needs.</li>\n</ul>\n<ul>\n<li>Drive standard processes to be used across the project, while challenging the status quo.</li>\n</ul>\n<ul>\n<li>Evaluate and lead change management initiatives where applicable.</li>\n</ul>\n<p>Qualifications</p>\n<ul>\n<li>7+ years of operations, development, or production leadership experience on an internal game development team.</li>\n</ul>\n<ul>\n<li>Proven experience leading 100+ sized distributed organizations.</li>\n</ul>\n<ul>\n<li>4+ years of experience successfully managing multiple direct reports.</li>\n</ul>\n<ul>\n<li>Experience leading a game’s development from early production through launch, and managing a successful global live service.</li>\n</ul>\n<ul>\n<li>Expert knowledge of project management methodologies, processes, and tools (Jira, Flow/Shotgrid), and an understanding of how and when to use or adapt each.</li>\n</ul>\n<ul>\n<li>Strong general communications, proactivity, and visible leadership capabilities.</li>\n</ul>\n<ul>\n<li>Able to shift effectively between strategic and tactical work.</li>\n</ul>\n<ul>\n<li>Strong emotional intelligence, resilience, adaptability, and effectiveness in ambiguous, complex, and challenging situations.</li>\n</ul>\n<p>Preferred</p>\n<ul>\n<li>Passion for mobile free-to-play games and a strong understanding of the market.</li>\n</ul>\n<ul>\n<li>Experience working with development partners and vendors internationally.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0999bb3f-a45","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Lead-Development-Director/213551","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$141,400 - $204,400 CAD","x-skills-required":["project management","operations","development","production","leadership","communication","risk management","capacity planning","resource management","organizational design","budgeting","gate reviews","team management","change management"],"x-skills-preferred":["mobile free-to-play games","game development","project management methodologies","tools (Jira, Flow/Shotgrid)","proactivity","visible leadership"],"datePosted":"2026-04-24T13:16:02.178Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Los Angeles"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"project management, operations, development, production, leadership, communication, risk management, capacity planning, resource management, organizational design, budgeting, gate reviews, team management, change management, mobile free-to-play games, game development, project management methodologies, tools (Jira, Flow/Shotgrid), proactivity, visible leadership","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":141400,"maxValue":204400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3b361f7a-791"},"title":"Manager, Support Workforce Management & Insights","description":"<p>As the Manager of Workforce Management &amp; Support Insights, you&#39;ll build and own the workforce planning 
function that ensures the Support organization can meet customer demand efficiently while maintaining exceptional service quality.</p>\n<p>You&#39;ll be responsible for forecasting demand, planning headcount across both FTE and vendor teams, and ensuring the Support organization is staffed appropriately across channels, regions, and time zones. Your work will directly influence how the company scales support while maintaining strong service levels and operational efficiency.</p>\n<p>You&#39;ll operate as both a strategic planner and operational owner. Early on, you&#39;ll build forecasting models, staffing strategies, and WFM systems that provide visibility into capacity and performance. Over time, you&#39;ll help shape how workforce planning evolves as automation and AI increasingly influence support demand and agent productivity.</p>\n<p>You&#39;ll work cross-functionally with Support leadership, Operations, Finance, and vendor partners to ensure the Support organization is staffed effectively and positioned to scale alongside the platform.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building and owning the Workforce Management function for Support, establishing forecasting frameworks, staffing models, and workforce planning processes.</li>\n</ul>\n<ul>\n<li>Developing demand forecasts across support channels using historical trends, product signals, and growth projections.</li>\n</ul>\n<ul>\n<li>Building and maintaining operational dashboards and reporting frameworks that provide visibility into support demand, workforce performance, and key operational trends.</li>\n</ul>\n<ul>\n<li>Owning headcount planning across both internal support teams and BPO partners, ensuring staffing aligns with service level targets and operational goals.</li>\n</ul>\n<ul>\n<li>Designing capacity models that account for productivity, shrinkage, onboarding ramp time, and operational complexity.</li>\n</ul>\n<ul>\n<li>Owning and optimizing WFM tooling and systems that power 
forecasting, staffing, and workforce reporting.</li>\n</ul>\n<ul>\n<li>Building dashboards and reporting frameworks to track workforce metrics such as service levels, utilization, shrinkage, and forecast accuracy.</li>\n</ul>\n<ul>\n<li>Partnering with Support leadership and Finance to translate business growth into hiring plans and staffing strategies.</li>\n</ul>\n<ul>\n<li>Collaborating with vendor management teams to align BPO staffing plans with forecasted demand and service level requirements.</li>\n</ul>\n<ul>\n<li>Establishing processes for real-time workforce monitoring and intraday staffing adjustments.</li>\n</ul>\n<ul>\n<li>Modeling and evaluating the impact of automation and AI tooling on support demand, agent productivity, and long-term workforce planning.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3b361f7a-791","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Replit","sameAs":"https://replit.com/","logo":"https://logos.yubhub.co/replit.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/replit/7f183ea2-97a5-47e6-9963-523ceca68e70","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$150K - $200K","x-skills-required":["demand forecasting","workforce modeling","capacity planning","SQL","spreadsheets","BI platforms","workforce management platforms","NICE","Verint","Teleopti","Tymeshift","AI tools","Replit","Claude","ChatGPT"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:15:58.926Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Foster City, CA (Hybrid) In office M,W,F"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"demand forecasting, workforce modeling, capacity planning, SQL, spreadsheets, BI platforms, 
workforce management platforms, NICE, Verint, Teleopti, Tymeshift, AI tools, Replit, Claude, ChatGPT","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":150000,"maxValue":200000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88313c8a-9fa"},"title":"Software Engineer Full Stack","description":"<p>Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. As a Software Engineer II - Full Stack for Gameplay Services, you will work on providing systems and tooling enabling game teams to leverage our matchmaking system, integrated in EA&#39;s biggest titles and enjoyed by millions of players worldwide.</p>\n<p>Our platform powers online features for EA&#39;s games, serving millions of users each day. We live, breathe, and dream about how we can make every player&#39;s multiplayer experience memorable. We develop services and SDKs in collaboration with EA&#39;s game studios for matchmaking, stats and leaderboards, achievements, game replays, VOIP, and game networking.</p>\n<p>Your focus will be on providing systems and tooling enabling game teams to leverage our matchmaking system. 
You will collaborate closely with your team and partner studios to maintain, enhance, and extend our core services.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design brand new services covering all aspects from storage to application logic to management console</li>\n<li>Enhance and add features to existing systems</li>\n<li>Communicate with engineers from across the company to deliver the next generation of online features for both established and not-yet-released games</li>\n<li>Be a part of the full product cycle for our products, from design and testing to deployment and supporting our LIVE environments and our game team customers</li>\n<li>Maintain a suite of automated tests that validate the correctness of backend services</li>\n<li>Advocate for best practices within the engineering team</li>\n<li>Work with product managers to improve new features to support EA&#39;s business</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor/Master&#39;s degree in Computer Science, Computer Engineering or related field</li>\n<li>2+ years professional programming experience</li>\n<li>Experience with various programming languages and frameworks (React, Typescript, NodeJS, Golang)</li>\n<li>Deep understanding of HTML, CSS and DOM</li>\n<li>Experience with cloud computing products such as AWS EC2, ElastiCache, and ELB</li>\n<li>Experience with technologies such as Docker, Kubernetes, and Terraform</li>\n<li>Experience with relational or NoSQL database</li>\n<li>Experience with all phases of product development lifecycle, including requirement definition, development, test, and product release</li>\n<li>Adept at solving complex technical problems</li>\n<li>Strong sense of collaboration</li>\n<li>Excellent written and verbal communication skills</li>\n<li>Motivated self-starter and able to operate with autonomy</li>\n</ul>\n<p>Bonus Qualifications:</p>\n<ul>\n<li>Experience with Jenkins and Groovy</li>\n<li>Experience with Ansible</li>\n<li>Knowledge of Google gRPC and 
protobuf</li>\n<li>Experience with high traffic services and highly scalable, distributed systems</li>\n<li>Knowledge of scalable data storage and processing technologies such as Cassandra, Apache Spark, and AWS S3</li>\n<li>Experience with stress testing, performance tuning, and optimization</li>\n<li>Experience working within the games industry</li>\n</ul>\n<p>We thought you might also want to know</p>\n<p>The benefits and perks of working for EA</p>\n<p>We&#39;re proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth.</p>\n<p>We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. 
We nurture environments where our teams can always bring their best to what they do.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88313c8a-9fa","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Software-Engineer-II-Full-Stack/211085","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["React","Typescript","NodeJS","Golang","HTML","CSS","DOM","AWS EC2","ElastiCache","ELB","Docker","Kubernetes","Terraform","relational database","NoSQL database","product development lifecycle"],"x-skills-preferred":["Jenkins","Groovy","Ansible","Google gRPC","protobuf","high traffic services","distributed systems","scalable data storage","Apache Spark","AWS S3","stress testing","performance tuning","games industries"],"datePosted":"2026-04-24T13:15:39.091Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hyderabad"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"React, Typescript, NodeJS, Golang, HTML, CSS, DOM, AWS EC2, ElastiCache, ELB, Docker, Kubernetes, Terraform, relational database, NoSQL database, product development lifecycle, Jenkins, Groovy, Ansible, Google gRPC, protobuf, high traffic services, distributed systems, scalable data storage, Apache Spark, AWS S3, stress testing, performance tuning, games industries"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ddc30b7e-af9"},"title":"IT Infrastructure Operations Manager","description":"<p>Job Openings</p>\n<p><strong>IT Infrastructure Operations Manager</strong></p>\n<p>950 - IT - Banbury, 
Oxfordshire</p>\n<p>TGR Haas F1 Team has been a stalwart of the FIA Formula 1 World Championship over the past decade. With more than 200 grand prix starts to our name, we pride ourselves on being an ambitious challenger within Formula 1 - and we want you to be part of that journey.</p>\n<p>The first American Formula 1 team to compete in the sport since 1986, TGR Haas F1 Team made an immediate impression with a memorable points-scoring debut at the 2016 Australian Grand Prix. Ten years later, the team is still building momentum, guided by clear objectives and technical partnerships, and fresh from securing its second-biggest points haul in a Formula 1 season.</p>\n<p>Yes, you’ll learn from us, but we expect to learn from you too!</p>\n<p><strong>General Summary:</strong></p>\n<p>This position reports to the Head of IT or their designee and is located in Banbury, UK. The Infrastructure Operations Manager is responsible for the strategy, reliability, performance and continuous improvement of Haas F1 Team’s IT infrastructure, platforms and core systems.</p>\n<p>The role ensures infrastructure initiatives and operational needs are delivered safely, on time and within budget, with a clear focus on service quality, resilience, security and innovation. It ensures infrastructure requirements are defined, prioritised and delivered in line with business needs and operational standards.</p>\n<p><strong>General Responsibilities:</strong></p>\n<ul>\n<li>Promote teamwork and effective communications to develop trustworthy relationships between all personnel and departments.</li>\n<li>Responsible for the overall performance of the Infrastructure, Cloud and Apps team, ensuring targets and milestones are achieved.</li>\n<li>Manage department personnel from recruitment, onboarding, employee engagement through to separation.</li>\n<li>Mentor, train and guide personnel as required. 
Develop staff using effective performance reviews and development plans.</li>\n<li>Responsible for working within departmental budgets.</li>\n<li>Produce monthly reports to the Head of IT detailing the status of the team, infrastructure and projects.</li>\n<li>Other duties as assigned by the Head of IT or their designee.</li>\n</ul>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Lead, mentor and develop a collaborative and high-performing team through recruitment, onboarding, training and coaching, setting clear expectations and fostering accountability.</li>\n<li>Define and execute the strategy for IT infrastructure, platforms and core systems, owning the full lifecycle and providing governance and technical leadership.</li>\n<li>Drive innovation and continuous improvement to deliver modern, secure, scalable, resilient and highly available services, underpinned by robust backup and tested disaster recovery capabilities.</li>\n<li>Oversee day-to-day infrastructure operations, ensuring service quality, resilience and operational excellence across all supported environments.</li>\n<li>Maintain and optimise on-premises and cloud infrastructure, platforms and core systems, ensuring performance, capacity, patching and lifecycle management.</li>\n<li>Lead and ensure end-to-end delivery of infrastructure projects and improvements, ensuring scope, timelines, budgets, risks and dependencies are effectively managed.</li>\n<li>Establish, monitor and report on KPIs and SLAs to drive prioritisation and measurable service improvements.</li>\n<li>Manage Incident &amp; Corrective Action and Change Control processes.</li>\n<li>Ensure infrastructure and platform continuity during major incidents, remaining contactable outside normal hours and leading response and recovery, either directly or by coordinating the required resources.</li>\n<li>Manage relationships with technology vendors and service providers, including contract/term negotiation and performance 
oversight.</li>\n<li>Collaborate with cross-functional teams to understand business requirements, translate demand into technical roadmaps and maximise value delivered.</li>\n<li>Perform hands-on administration duties as required.</li>\n<li>Provide 3rd line technical support escalation services, ensuring adherence to SLAs.</li>\n<li>Oversee ad hoc projects as assigned.</li>\n<li>Follow all safety regulations in all venues.</li>\n</ul>\n<p><strong>Education and Work Experience:</strong></p>\n<ul>\n<li>Further education such as BSc or MIS in an IT-related qualification or equivalent experience required.</li>\n<li>Over 10 years of progressive IT experience, including at least 2 years in a team leader or managerial role.</li>\n<li>IT and cyber/information security certifications are advantageous.</li>\n<li>Exposure to international environments is advantageous.</li>\n</ul>\n<p><strong>Specialised Knowledge and Skills:</strong></p>\n<p>Technical expertise</p>\n<ul>\n<li>Extensive technical knowledge and hands-on experience across on-premises and cloud enterprise infrastructure and platforms.</li>\n<li>Proven experience in infrastructure operations, including service reliability, performance, availability, capacity and lifecycle management.</li>\n<li>Advanced knowledge of Microsoft server, cloud and client environments.</li>\n<li>Strong knowledge of virtualisation, containerisation, enterprise networking, Linux-based platforms and system integration across complex environments.</li>\n<li>Demonstrable ability to embed secure-by-design practices and security controls into infrastructure and platforms as standard.</li>\n<li>HPC experience is advantageous.</li>\n</ul>\n<p>Leadership, strategy and delivery</p>\n<ul>\n<li>Proven success in infrastructure operations management and strategy development, translating business needs into practical roadmaps.</li>\n<li>Demonstrable ability to lead, mentor and develop high-performing teams, setting clear expectations and driving 
accountability.</li>\n<li>Strong project and delivery management capability, with a proven track record of delivering key IT projects and change initiatives, balancing scope, risk, budget and timelines.</li>\n<li>Experience managing end-to-end infrastructure delivery, from requirements and design through build, transition to run and continual improvement.</li>\n<li>Ability to align technology initiatives with business objectives, ensuring outcomes are measurable and value-led.</li>\n</ul>\n<p>Working style and behaviours</p>\n<ul>\n<li>Work to a consistently high standard in stressful and time sensitive situations, making clear decisions and driving issues through to resolution at pace.</li>\n<li>Quick decision-making skills whilst working through problems in a scientific and analytical way.</li>\n<li>Comfortable operating at pace with frequent context switching, resolving complex issues quickly, and sustaining focus on longer-term tasks when required to deliver high-quality outcomes</li>\n<li>Ability to manage and develop others, communicating clearly and collaborating effectively to achieve results.</li>\n<li>Builds strong working relationships and works effectively with stakeholders at all levels across teams and functions.</li>\n<li>Maintains a positive, supportive and professional approach, contributing to a high-performing team culture and actively helping others succeed.</li>\n<li>Demonstrates an innovative, forward-thinking mindset, driving continuous improvement and encouraging adoption of better ways of working.</li>\n<li>Excellent communication skills with a commitment to continuous learning, self-improvement and sharing knowledge with others.</li>\n<li>A sympathetic approach to your work colleagues and an ability to integrate within a group environment.</li>\n<li>A can-do positive approach and a willingness to help others is essential.</li>\n</ul>\n<p><strong>Work Environment Physical Demands:</strong></p>\n<p>An ability to work and prioritise within a 
high-pressure, time-sensitive environment while retaining a methodical approach is essential. The role may require occasional domestic or international travel and a willingness to work long and flexible hours including weekends.</p>\n<p>This position may require lifting up to 50 pounds, repeated bending, squatting and manual dexterity. Fast-paced work environment requiring heavy mental demands. Work environment includes machinery, race cars, 7-post and other rigs, grinding debris, and hazardous fluids. All employees must ensure compliance with the Company Health and Safety Policy, and all other relevant statutory Health and Safety legislation.</p>\n<p><strong>Core Company Values:</strong></p>\n<p>Integrity: Uphold the highest standard</p>","url":"https://yubhub.co/jobs/job_ddc30b7e-af9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"TGR Haas F1 Team","sameAs":"https://haasf1team.bamboohr.com","logo":"https://logos.yubhub.co/haasf1team.bamboohr.com.png"},"x-apply-url":"https://haasf1team.bamboohr.com/careers/813","x-work-arrangement":"onsite","x-experience-level":null,"x-job-type":"full-time","x-salary-range":null,"x-skills-required":["on-premises and cloud enterprise infrastructure and platforms","service reliability, performance, availability, capacity and lifecycle management","Microsoft server, cloud and client environments","virtualisation, containerisation, enterprise networking, Linux-based platforms and system integration across complex environments","secure-by-design practices and security controls into infrastructure and platforms as standard","HPC 
experience"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:14:48.742Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Banbury"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Motorsport","skills":"on-premises and cloud enterprise infrastructure and platforms, service reliability, performance, availability, capacity and lifecycle management, Microsoft server, cloud and client environments, virtualisation, containerisation, enterprise networking, Linux-based platforms and system integration across complex environments, secure-by-design practices and security controls into infrastructure and platforms as standard, HPC experience"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_88030e1d-d2f"},"title":"Senior Software Engineer","description":"<p>As a Senior Software Engineer at MHP, you will develop full-stack applications using React and TypeScript on the frontend and Node.js (TypeScript) on the backend. 
You will also define, deploy, and manage infrastructure using AWS CDK (TypeScript) and design and maintain microservices and event-driven systems using Apache Kafka, SNS, SQS, and EventBridge.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Developing full-stack applications using React and TypeScript on the frontend and Node.js (TypeScript) on the backend</li>\n<li>Defining, deploying, and managing infrastructure using AWS CDK (TypeScript)</li>\n<li>Designing and maintaining microservices and event-driven systems using Apache Kafka, SNS, SQS, and EventBridge</li>\n<li>Ensuring system security, scalability, and observability using tools like IAM, CloudWatch, and X-Ray</li>\n<li>Writing clean, maintainable, and well-documented code</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Senior-level experience working with NodeJS; additional Java experience is an advantage</li>\n<li>Senior-level experience working with frontend technologies such as React and Typescript</li>\n<li>Mid-senior level experience working with AWS Services (S3, Lambda, API Gateway, ECS), Authorization with PPN/Entra-ID (OAuth, OIDC), and Infrastructure as Code (AWS CDK with Typescript)</li>\n<li>Experience with REST API development</li>\n<li>Hands-on knowledge of responsive UI development and frontend testing</li>\n<li>Hands-on experience with CI/CD pipelines with GitLab and test automation</li>\n<li>Problem-solving mindset with the ability to optimize performance and cost management</li>\n<li>Strong communication skills and experience working in cross-functional Agile teams</li>\n<li>Ability to write clean, maintainable, and well-documented code</li>\n<li>Experience in enterprise applications, preferably in the Automotive domain, is a plus</li>\n<li>Bachelor&#39;s Degree in Computer Science or a related field is an advantage</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_88030e1d-d2f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18149","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["NodeJS","React","TypeScript","AWS CDK","Apache Kafka","SNS","SQS","EventBridge","IAM","CloudWatch","X-Ray"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:14:26.208Z","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Consulting","skills":"NodeJS, React, TypeScript, AWS CDK, Apache Kafka, SNS, SQS, EventBridge, IAM, CloudWatch, X-Ray"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_aad66c6a-ad1"},"title":"Lead Data Scientist - Battlefield, Data and Insights (D&I)","description":"<p>We&#39;re hiring a Lead Data Scientist to join our Data &amp; Insights (D&amp;I) Data Science team. The Data Science team partners with EA studios to build scalable AI/ML solutions that enhance player experience, game design, and live service performance.</p>\n<p>You will bring expertise in the area of AI, ML, and engineering. 
You will also lead efforts related to life cycle management, progression, in-game economies, and player experience, specifically within the Battlefield franchise.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Working directly with Battlefield game team/partners to understand their offerings/domain and create data science products and solutions to solve for their use cases.</li>\n<li>Applying problem-driven AI/ML approaches to improve player experience, engagement, retention, and monetization systems.</li>\n<li>Developing plans to generalize products across the franchise with our engineering partners.</li>\n<li>Establishing rigorous experimental design standards (A/B testing, causal inference, system experimentation) to produce actionable insights.</li>\n<li>Collaborating with engineering partners to productionize models within live environments and gameplay systems.</li>\n<li>Designing and enhancing data pipelines that process petabyte-scale telemetry data using technologies such as AWS, S3, Kubernetes, GCP, Python, Apache Kafka, and Hive.</li>\n<li>Developing algorithms and statistical models for forecasting, player state prediction, churn analysis, progression balancing, and economic system tuning.</li>\n<li>Communicating complex analytical concepts to technical and non-technical partners, influencing strategic decisions.</li>\n<li>Mentoring other data scientists and contributing to shared best practices across the D&amp;I organization.</li>\n</ul>","url":"https://yubhub.co/jobs/job_aad66c6a-ad1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Electronic 
Arts","sameAs":"https://jobs.ea.com","logo":"https://logos.yubhub.co/jobs.ea.com.png"},"x-apply-url":"https://jobs.ea.com/en_US/careers/JobDetail/Lead-Data-Scientist-Battlefield-Data-and-Insights-D-I/213127","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$141,400 - $204,400 CAD","x-skills-required":["AI","ML","engineering","data science","AWS","S3","Kubernetes","GCP","Python","Apache Kafka","Hive"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:13:26.748Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Vancouver"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AI, ML, engineering, data science, AWS, S3, Kubernetes, GCP, Python, Apache Kafka, Hive","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":141400,"maxValue":204400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fbf6d959-c91"},"title":"Real Estate Occupancy Planner","description":"<p>We are seeking an experienced Real Estate Occupancy Planner to join Anthropic&#39;s Workplace and Real Estate team. 
This role is the DRI for office move management, space restacking, neighborhood planning and key stakeholder engagement across our global portfolio.</p>\n<p>The ideal candidate brings a rigorous, data-driven approach to occupancy planning, excels at stakeholder collaboration, and is passionate about using technology and analytics to inform smarter space decisions.</p>\n<p>Responsibilities:</p>\n<p>Capacity Planning &amp; Data Analytics</p>\n<ul>\n<li>Collaborate with the People Analytics team to integrate workforce data into capacity planning models</li>\n<li>Develop and maintain occupancy dashboards and KPI reporting for leadership visibility</li>\n<li>Analyze utilization data, badge access trends, and reservation patterns to identify optimization opportunities</li>\n<li>Build forward-looking capacity forecasts aligned to hiring plans and portfolio lease events</li>\n<li>Synthesize data from multiple sources to produce clear, defensible recommendations on space allocation and density</li>\n</ul>\n<p>OfficeSpace Platform Ownership</p>\n<ul>\n<li>Own the global rollout of OfficeSpace, Anthropic&#39;s new capacity planning and space management tool</li>\n<li>Lead implementation planning, vendor coordination, and change management across all office locations</li>\n<li>Develop training materials and adoption programs to onboard internal teams and Space Captains onto the platform</li>\n<li>Establish data governance standards, floor plan accuracy protocols, and seat assignment workflows within OfficeSpace</li>\n<li>Serve as the internal subject matter expert and primary point of contact for the OfficeSpace platform</li>\n<li>Drive continuous improvement of the platform&#39;s configuration to meet evolving business needs</li>\n</ul>\n<p>Move &amp; Restack Management</p>\n<ul>\n<li>Serve as DRI for all office relocation moves and building restacks across Anthropic&#39;s global portfolio</li>\n<li>Own end-to-end move project management: planning, sequencing, stakeholder 
communication, and execution</li>\n<li>Partner with Space Captains within each business unit to plan and assign departmental neighborhoods</li>\n<li>Develop and maintain detailed move plans, seating assignments, and restack timelines</li>\n<li>Coordinate cross-functionally with IT, Facilities, Security, and HR to ensure seamless transitions</li>\n<li>Conduct post-move reviews to capture lessons learned and drive continuous improvement</li>\n</ul>\n<p>Stakeholder Collaboration &amp; Communication</p>\n<ul>\n<li>Build strong relationships with Space Captains across business units to ensure neighborhood planning reflects team needs</li>\n<li>Serve as a trusted advisor to team leads on space allocation, growth planning, and workplace norms</li>\n<li>Create executive-level reporting and communications on occupancy performance and capacity outlook</li>\n<li>Proactively surface space constraints, risks, and opportunities to the Head of Workplace and Real Estate</li>\n</ul>\n<p>You May Be a Good Fit If You:</p>\n<ul>\n<li>Have 5–8 years of experience in occupancy planning, space management, corporate real estate, or a related workplace function</li>\n<li>Have hands-on experience managing office moves, restacks, or large-scale seat assignments across multiple locations</li>\n<li>Are proficient with space management or IWMS platforms, preferably OfficeSpace (Archibus, Serraview, or similar)</li>\n<li>Bring a strong analytical mindset and comfort working with occupancy data, utilization metrics, and headcount models to clearly articulate strategy to stakeholders</li>\n<li>Have experience partnering with Finance or People Analytics teams on workforce and capacity planning</li>\n<li>Are skilled at managing multiple projects simultaneously in a fast-paced, high-growth environment</li>\n<li>Communicate clearly and confidently with both operational teams and senior leadership</li>\n<li>Hold a Bachelor&#39;s degree in Real Estate, Urban Planning, Business, or a related 
field</li>\n</ul>\n<p>Strong Candidates May Also:</p>\n<ul>\n<li>Have experience rolling out a new space management platform across a global portfolio</li>\n<li>Bring familiarity with badge/utilization data tools and how to connect them to planning decisions</li>\n<li>Have worked in a high-growth technology company scaling from hundreds to thousands of employees</li>\n<li>Demonstrate experience building scalable processes and documentation for occupancy planning functions</li>\n<li>Bring knowledge of neighborhood planning models, activity-based working, or hybrid workplace strategies</li>\n</ul>\n<p>Annual compensation range for this role is $145,600-$185,000 USD.</p>","url":"https://yubhub.co/jobs/job_fbf6d959-c91","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5161113008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$145,600-$185,000 USD","x-skills-required":["occupancy planning","space management","corporate real estate","data analysis","project management","stakeholder collaboration","communication","office moves","restacks","seat assignments","floor plan accuracy","seat assignment workflows","data governance","vendor coordination","change management","training materials","adoption programs","people analytics","finance","utilization metrics","headcount models","workforce planning","capacity planning","KPI reporting","leadership visibility","forward-looking capacity forecasts","hiring plans","portfolio lease events","space allocation","density","OfficeSpace","capacity planning and space management tool","implementation planning","internal teams","Space Captains","data governance 
standards","floor plan accuracy protocols","internal subject matter expert","primary point of contact","platform configuration","evolving business needs","end-to-end move project management","planning","sequencing","stakeholder communication","execution","departmental neighborhoods","detailed move plans","seating assignments","restack timelines","cross-functional coordination","IT","Facilities","Security","HR","seamless transitions","post-move reviews","lessons learned","continuous improvement","neighborhood planning","team needs","growth planning","workplace norms","executive-level reporting","communications","occupancy performance","capacity outlook","space constraints","risks","opportunities","Head of Workplace and Real Estate"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:10:40.847Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Operations","industry":"Technology","skills":"occupancy planning, space management, corporate real estate, data analysis, project management, stakeholder collaboration, communication, office moves, restacks, seat assignments, floor plan accuracy, seat assignment workflows, data governance, vendor coordination, change management, training materials, adoption programs, people analytics, finance, utilization metrics, headcount models, workforce planning, capacity planning, KPI reporting, leadership visibility, forward-looking capacity forecasts, hiring plans, portfolio lease events, space allocation, density, OfficeSpace, capacity planning and space management tool, implementation planning, internal teams, Space Captains, data governance standards, floor plan accuracy protocols, internal subject matter expert, primary point of contact, platform configuration, evolving business needs, end-to-end move project management, planning, sequencing, stakeholder communication, execution, departmental neighborhoods, detailed move 
plans, seating assignments, restack timelines, cross-functional coordination, IT, Facilities, Security, HR, seamless transitions, post-move reviews, lessons learned, continuous improvement, neighborhood planning, team needs, growth planning, workplace norms, executive-level reporting, communications, occupancy performance, capacity outlook, space constraints, risks, opportunities, Head of Workplace and Real Estate","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":145600,"maxValue":185000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_46bea292-136"},"title":"Software Engineering II - Developer Productivity","description":"<p>We&#39;re looking for a Software Engineer – Developer Productivity to help improve the productivity of the entire engineering team. You will be responsible for everything from our build and testing automation, to the packaging and release of the final product. You will identify and provide tools to allow engineers to locate bottlenecks across their SDLC and help them remove friction points. You will evaluate our build systems and expand our deployment automation to meet growing needs. 
Working with Jenkins, containerisation, custom tooling in Clojure and Python, Ansible workflows, and observability, you will influence the solution, business strategy, and tooling necessary to transform.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain secure CI/CD pipelines for automating deployment, configuration, and testing processes.</li>\n<li>Integrate security into the release workflow and ensure that all CI/CD tools are compliant from a security perspective</li>\n<li>Understand developer workflows and build systems to improve build times</li>\n<li>Partner with other engineering teams and develop scalable tools and infrastructure to develop, test, debug and release software quickly</li>\n<li>Design, develop and deliver distributed engineering build tools and platforms for a variety of codebase languages</li>\n<li>Help maintain the backend infrastructure that supports our Dev test environments</li>\n<li>Develop and improve instrumentation for monitoring and logging the health and availability of services</li>\n<li>Follow best practices for development</li>\n<li>Participate in code and system design reviews</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>5+ years of software development experience.</li>\n<li>In-depth knowledge of running/managing UNIX-like operating systems (we use Ubuntu).</li>\n<li>Experience with containerisation technologies (e.g., Docker, Kubernetes) and securing containerised environments.</li>\n<li>Knowledge of implementing security in CI/CD pipelines</li>\n<li>Experience with various FOSS tools for monitoring, graphing, capacity planning, and logging.</li>\n<li>Experience with Cloud Computing platforms like Amazon AWS, Google Cloud Platform, Heroku.</li>\n<li>Experience with IaC tools like Ansible, Puppet, Terraform.</li>\n<li>Ability to analyse bottlenecks in architecture and debug quickly to resolve issues</li>\n<li>Have an automation mindset and ability to reason and work with complex 
systems.</li>\n<li>Excellent communication and documentation skills</li>\n</ul>\n<p>Good to have:</p>\n<ul>\n<li>You’re familiar with building and writing in one of the following languages: Python, Shell, Java, Clojure</li>\n<li>You’re familiar with at least one of IntelliJ, VSCode, or Emacs and can help developers with their IDEs</li>\n<li>Familiar with the challenges of testing</li>\n<li>Comfortable using CLI tools for day-to-day tasks.</li>\n<li>Systematic problem-solving approach, coupled with excellent communication skills and a sense of ownership and drive</li>\n<li>Drive tasks to the finish line with high quality, on time</li>\n<li>Experience designing and building solutions that are highly scalable, fault tolerant and cost-effective</li>\n</ul>","url":"https://yubhub.co/jobs/job_46bea292-136","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Helpshift","sameAs":"https://www.helpshift.com/","logo":"https://logos.yubhub.co/helpshift.com.png"},"x-apply-url":"https://apply.workable.com/j/C18365A106","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["UNIX-like operating systems","containerisation technologies","security in CI/CD pipelines","FOSS tools for monitoring, graphing, capacity planning, and logging","Cloud Computing platforms","IaaC tools","complex systems","Python","Shell","Java","Clojure","IntelliJ","VSCode","Emacs","CLI tools"],"x-skills-preferred":[],"datePosted":"2026-04-24T13:06:43.591Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"UNIX-like operating systems, containerisation technologies, security in CI/CD pipelines, FOSS tools for monitoring, graphing, capacity 
planning, and logging, Cloud Computing platforms, IaaC tools, complex systems, Python, Shell, Java, Clojure, IntelliJ, VSCode, Emacs, CLI tools"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7e078ceb-e9a"},"title":"Data Engineer","description":"<p>At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have an exciting opportunity for you to join our expanding area of Prognostics.</p>\n<p>Are you enthusiastic to mine raw data and realize its hidden value by building amazing, connected data solutions that benefit our customers? Would you love to accelerate our efforts in implementing advanced physics and ML Models in production?</p>\n<p>The Data Engineer role resides within Ford’s Electric Vehicle organization. In this role, you will work on building scalable and robust data pipelines to process large volumes of connected vehicle data to support the Ford vehicle prognostic initiatives.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles.</li>\n<li>Build data pipelines to monitor data quality and the performance of analytical models.</li>\n<li>Maintain the infrastructure of the data platform using Terraform and continuously develop, evaluate, and deliver code using CI/CD.</li>\n<li>Collaborate with data analytics stakeholders to streamline the data acquisition, processing, and presentation process.</li>\n<li>Implement an enterprise data governance model and actively promote data protection, sharing, reuse, quality, and standards.</li>\n<li>Enhance and maintain the DevOps capabilities of the data platform.</li>\n<li>Continuously optimize and enhance 
existing data solutions (pipelines, products, infrastructure) for best performance, high security, low vulnerability, low costs, and high reliability.</li>\n<li>Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration and continuous deployment (CI/CD).</li>\n<li>Promptly address code quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle.</li>\n<li>Perform any necessary data mapping, data lineage activities and document information flows.</li>\n<li>Monitor the production pipelines and provide production support by addressing production issues as per SLAs.</li>\n<li>Provide analysis of connected vehicle data to support new product developments and production vehicle improvements.</li>\n<li>Provide visibility to data quality/vehicle/feature issues and work with the business owners to fix the issues.</li>\n<li>Demonstrate technical knowledge and communication skills with the ability to advocate for well-designed solutions.</li>\n<li>Continuously enhance your domain knowledge of connected vehicle data, connected services and algorithms/models developed by data scientists within Ford.</li>\n<li>Stay current on the latest data engineering practices and contribute to the technical direction of the company while keeping a customer-centric approach.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>Master’s degree or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field, and 4 years of experience OR equivalent combination of education and experience (6+ years with Bachelor&#39;s Degree).</li>\n<li>4 years of professional experience in:</li>\n<li>Data engineering, data product development and software product launches</li>\n<li>At least three of the following languages: Java, Python, Spark, Scala, SQL</li>\n<li>3 years of cloud data/software engineering experience building scalable, 
reliable, and cost-effective production batch and streaming data pipelines using:</li>\n<li>Data warehouses like Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery.</li>\n<li>Workflow orchestration tools like Airflow.</li>\n<li>Relational database management systems like MySQL, PostgreSQL, and SQL Server.</li>\n<li>Real-time data streaming platforms like Apache Kafka and GCP Pub/Sub.</li>\n<li>Microservices architecture to deliver large-scale real-time data processing applications.</li>\n<li>REST APIs for compute, storage, operations, and security.</li>\n<li>DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, Docker.</li>\n<li>Project management tools like Atlassian JIRA.</li>\n</ul>\n<p><strong>Even better if you have...</strong></p>\n<ul>\n<li>Ph.D. or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field.</li>\n<li>2 years of experience with ML Model Development and/or MLOps.</li>\n<li>Committed code to improve open-source data/software engineering projects</li>\n<li>Experience architecting cloud infrastructure and handling application migrations/upgrades.</li>\n<li>GCP Professional Certifications.</li>\n<li>Demonstrated passion to mine raw data and realize its hidden value.</li>\n<li>Passion to experiment with and implement state-of-the-art data engineering methods/techniques.</li>\n<li>Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.</li>\n<li>Experience implementing methods for automation of all parts of the pipeline to minimize labor in development and production.</li>\n<li>Analytics skills to profile data and troubleshoot data pipeline/product issues.</li>\n<li>Ability to simplify and clearly communicate complex data/software ideas/problems and work with cross-functional teams and all levels of management independently.</li>\n</ul>\n<p>Experience Level: mid</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7e078ceb-e9a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor Company","sameAs":"https://www.ford.com/","logo":"https://logos.yubhub.co/ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/55567","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":"full-time","x-salary-range":"This position is a range of salary grades 6-8.","x-skills-required":["Java","Python","Spark","Scala","SQL","Amazon Redshift","Microsoft Azure Synapse Analytics","Google BigQuery","Airflow","MySQL","PostgreSQL","SQL Server","Apache Kafka","GCP Pub/Sub","Microservices","REST APIs","Tekton","GitHub Actions","Git","GitHub","Terraform","Docker","Atlassian JIRA"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:24:19.099Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Java, Python, Spark, Scala, SQL, Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery, Airflow, MySQL, PostgreSQL, SQL Server, Apache Kafka, GCP Pub/Sub, Microservices, REST APIs, Tekton, GitHub Actions, Git, GitHub, Terraform, Docker, Atlassian JIRA"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2d4635c3-8a5"},"title":"Director, Compute & Infrastructure FP&A","description":"<p>As a Director, Compute &amp; Infrastructure FP&amp;A, you will own and drive the monthly forecasting process for the Compute &amp; Infrastructure org by partnering with various stakeholders across Finance, Accounting, Tax and Engineering. 
You will play a critical role in planning and forecasting the company&#39;s largest and most complex cost center (Compute &amp; Infrastructure).</p>\n<p>You will collaborate cross-functionally to develop long-range infrastructure investment plans, evaluate build vs. buy decisions, and ensure capital is deployed efficiently to support rapid growth. You will also provide strategic financial guidance through scenario modeling, ROI analysis, and performance tracking, enabling leadership to make high-stakes decisions under uncertainty.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Own compute financial planning &amp; forecasting.</li>\n<li>Build and manage consolidation models for GPU/CPU capacity, storage, networking, and data center investments.</li>\n<li>Translate infrastructure roadmaps into short- and long-term financial forecasts (LRP, annual planning)</li>\n<li>Coordinate closely with Corporate FP&amp;A on timelines and process</li>\n<li>Present insights on a monthly basis to senior management.</li>\n<li>Drive infrastructure investment decisions.</li>\n<li>Evaluate build vs. buy, vendor vs. owned infrastructure, and capacity allocation tradeoffs.</li>\n<li>Develop frameworks for investment trade-offs to guide executive decision making.</li>\n<li>Build scalable tooling &amp; reporting.</li>\n<li>Implement stakeholder-facing dashboards to track compute spend, utilization, and efficiency metrics.</li>\n<li>Improve visibility into unit economics (e.g., cost per training run, cost per inference, cost per customer).</li>\n<li>Drive forecasting accuracy &amp; accountability.</li>\n<li>Lead budget vs. 
actual analysis for compute and infrastructure spend.</li>\n<li>Identify key cost drivers (utilization, pricing, efficiency gains) and reduce forecast variance.</li>\n<li>Support close &amp; financial reporting.</li>\n<li>Partner with Accounting to ensure accurate classification of infrastructure spend (OpEx vs CapEx).</li>\n<li>Translate complex infrastructure costs into clear insights for leadership.</li>\n<li>Enable strategic decision-making.</li>\n<li>Build scenario models to support leadership decisions on capacity scaling, new model launches, and infrastructure investments.</li>\n<li>Lead ad hoc analyses on emerging topics.</li>\n</ul>\n<p>You might thrive in this role if you have:</p>\n<ul>\n<li>10+ years in strategic finance, with experience in infrastructure, cloud, hardware, or compute-intensive environments</li>\n<li>2+ years in investment banking</li>\n<li>Experience running an FP&amp;A team at the corporate or business unit level with significant scale.</li>\n<li>Strong financial modeling skills, particularly in capacity planning, unit economics, and scenario analysis under uncertainty.</li>\n<li>Experience supporting large-scale infrastructure or cloud spend (e.g., AWS/GCP/Azure, GPUs, data centers).</li>\n<li>Ability to translate technical concepts (compute usage, model training/inference, system architecture) into financial insights.</li>\n<li>Proficiency in Excel/Sheets, SQL, and BI tools (e.g., Tableau); experience with planning systems like Anaplan is a plus.</li>\n<li>Strong cross-functional partnership skills, especially with Engineering, Product, and Supply Chain.</li>\n<li>Familiarity with AI/ML infrastructure cost drivers and the economics of training and serving models.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2d4635c3-8a5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/7536171d-0f98-4964-8f22-7968db062105","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$234K – $325K","x-skills-required":["strategic finance","infrastructure","cloud","hardware","compute-intensive environments","investment banking","financial modeling","capacity planning","unit economics","scenario analysis","large-scale infrastructure","cloud spend","AWS","GCP","Azure","GPUs","data centers","Excel","SQL","BI tools","Tableau","planning systems","Anaplan","cross-functional partnership","engineering","product","supply chain","AI/ML infrastructure cost drivers","economics of training and serving models"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:23:57.567Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"strategic finance, infrastructure, cloud, hardware, compute-intensive environments, investment banking, financial modeling, capacity planning, unit economics, scenario analysis, large-scale infrastructure, cloud spend, AWS, GCP, Azure, GPUs, data centers, Excel, SQL, BI tools, Tableau, planning systems, Anaplan, cross-functional partnership, engineering, product, supply chain, AI/ML infrastructure cost drivers, economics of training and serving models","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":234000,"maxValue":325000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9c3667a3-140"},"title":"Token-as-a-Service 
Technical Program Manager","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI&#39;s Stargate and 3P Engineering teams 
are responsible for building and scaling the external infrastructure ecosystem that powers advanced AI systems. We work across hyperscalers, colocation providers, cloud partners, and strategic third-party operators to turn contracted capacity into production-ready compute.</p>\n<p><strong>About the Role</strong></p>\n<p>We are seeking a Technical Program Manager, Token-as-a-Service (TaaS) to lead delivery of external compute capacity that directly serves OpenAI model workloads.</p>\n<p>In this role, you will own complex cross-functional programs that transform third-party infrastructure into usable tokens at scale. You will partner across engineering, capacity planning, networking, hardware, finance, product, and external providers to ensure that deployed capacity translates into real production throughput.</p>\n<p><strong>Key Responsibilities</strong></p>\n<ul>\n<li>Lead end-to-end delivery programs that convert external infrastructure capacity into production-ready token supply.</li>\n</ul>\n<ul>\n<li>Own readiness across compute, storage, networking, security, and operational dependencies for third-party environments.</li>\n</ul>\n<ul>\n<li>Build integrated plans across internal engineering teams and external partners with clear milestones, owners, risks, and critical paths.</li>\n</ul>\n<ul>\n<li>Drive launch execution for new partner regions, clusters, and capacity expansions.</li>\n</ul>\n<ul>\n<li>Create operating mechanisms that measure deployed capacity versus usable token output.</li>\n</ul>\n<ul>\n<li>Identify bottlenecks preventing token generation (network constraints, hardware readiness, software enablement, partner delays, etc.) 
and drive resolution.</li>\n</ul>\n<ul>\n<li>Coordinate with capacity planning and finance teams to prioritize the highest ROI capacity opportunities.</li>\n</ul>\n<ul>\n<li>Establish executive-level reporting on delivery status, risks, and token ramp forecasts.</li>\n</ul>\n<ul>\n<li>Improve repeatability of partner onboarding, technical integration, and scaling motions.</li>\n</ul>\n<ul>\n<li>Manage escalations across internal and external stakeholders during high-severity delivery issues.</li>\n</ul>\n<ul>\n<li>Translate ambiguous infrastructure constraints into clear execution plans.</li>\n</ul>\n<ul>\n<li>Help define the long-term operating model for Token-as-a-Service across Stargate and 3P ecosystems.</li>\n</ul>\n<p><strong>Qualifications</strong></p>\n<ul>\n<li>8+ years of Technical Program Management, Engineering Program Management, or Infrastructure Delivery experience.</li>\n</ul>\n<ul>\n<li>Experience leading large-scale technical programs involving cloud, data center, networking, hardware, or distributed systems.</li>\n</ul>\n<ul>\n<li>Strong understanding of compute infrastructure, clusters, networking, storage, and production systems.</li>\n</ul>\n<ul>\n<li>Proven ability to drive cross-functional execution across engineering, operations, finance, and external vendors.</li>\n</ul>\n<ul>\n<li>Experience managing executive stakeholders and communicating complex tradeoffs clearly.</li>\n</ul>\n<ul>\n<li>Strong analytical skills with ability to reason about utilization, throughput, capacity, and operational metrics.</li>\n</ul>\n<ul>\n<li>Comfortable operating in ambiguous, fast-scaling environments.</li>\n</ul>\n<ul>\n<li>Strong written and verbal communication skills.</li>\n</ul>\n<ul>\n<li>High ownership mentality with bias toward action.</li>\n</ul>\n<ul>\n<li>Experience working with external providers, strategic partners, or hyperscalers is highly preferred.</li>\n</ul>\n<p><strong>Preferred Skills</strong></p>\n<ul>\n<li>Experience with GPU 
clusters, AI infrastructure, or large-scale model serving environments.</li>\n</ul>\n<ul>\n<li>Familiarity with token economics, inference capacity planning, or workload scheduling.</li>\n</ul>\n<ul>\n<li>Experience scaling global infrastructure through third-party providers.</li>\n</ul>\n<ul>\n<li>Background in systems engineering, networking, or hardware deployment programs.</li>\n</ul>\n<ul>\n<li>Experience building new operational models in high-growth environments.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9c3667a3-140","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/e8558280-69dc-438a-b905-623f75ae6d62","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$342K – $555K","x-skills-required":["Technical Program Management","Engineering Program Management","Infrastructure Delivery","Cloud","Data Center","Networking","Hardware","Distributed Systems","Compute Infrastructure","Clusters","Storage","Production Systems"],"x-skills-preferred":["GPU Clusters","AI Infrastructure","Large-Scale Model Serving Environments","Token Economics","Inference Capacity Planning","Workload Scheduling","Global Infrastructure","Systems Engineering","Hardware Deployment Programs"],"datePosted":"2026-04-24T12:23:54.161Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco; Seattle"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical Program Management, Engineering Program Management, Infrastructure Delivery, Cloud, Data Center, Networking, Hardware, Distributed Systems, Compute Infrastructure, Clusters, Storage, Production Systems, GPU 
Clusters, AI Infrastructure, Large-Scale Model Serving Environments, Token Economics, Inference Capacity Planning, Workload Scheduling, Global Infrastructure, Systems Engineering, Hardware Deployment Programs","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":342000,"maxValue":555000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a60d47f5-d9e"},"title":"Partner Manager, Growth (Mandarin Speaking)","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic 
life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>We are a small and fast-moving partnerships team that shapes and executes all aspects of collaboration with partners building consumer applications and devices. Your mission in this role is to launch and scale partnerships spanning products, platforms, and other key initiatives at OpenAI. This role sits at the intersection of product, growth, and partnerships, supporting and accelerating our expansion across Asia-Pacific.</p>\n<p><strong>About the Role</strong></p>\n<p>As an APAC Partner Manager – Growth, you will be based in San Francisco and work closely with APAC-based Product Partnerships leadership to support and scale key partners across Asia. Your mandate is to expand adoption of OpenAI products across the region by helping partners build high-quality, useful experiences for users. 
You’ll work to deepen integrations, support sustainable growth, and bring successful partnerships to life in ways that deliver clear value for both users and partners.</p>\n<p>You will operate as a critical bridge between global product teams and regional partners, ensuring alignment, execution velocity, and strong commercial outcomes.</p>\n<p><strong>Key Responsibilities:</strong></p>\n<ul>\n<li>Support strategy and execution for priority APAC partners, with a focus on product adoption, engagement and long-term user value</li>\n</ul>\n<ul>\n<li>Identify and operationalize opportunities across distribution, product integrations, and partner-led initiatives that improve user experience and accessibility</li>\n</ul>\n<ul>\n<li>Support end-to-end partnership lifecycle: evaluation, deal structuring, launch, optimization, and renewal</li>\n</ul>\n<ul>\n<li>Collaborate with APAC leadership on partner strategy, while driving execution from SF across internal teams</li>\n</ul>\n<ul>\n<li>Act as the central point of coordination between APAC partners and US-based product, engineering, legal, and marketing teams</li>\n</ul>\n<ul>\n<li>Ensure clear communication, alignment on priorities, and rapid issue resolution across time zones</li>\n</ul>\n<ul>\n<li>Establish and manage structured engagement with partners (e.g., business reviews, growth planning, performance tracking)</li>\n</ul>\n<ul>\n<li>Drive accountability on both sides to deliver against agreed KPIs and track success metrics (e.g., WAU growth, new users, retention, revenue contribution)</li>\n</ul>\n<ul>\n<li>Translate performance insights into actionable recommendations for partners and internal stakeholders</li>\n</ul>\n<ul>\n<li>Develop strong, trusted relationships with partner stakeholders across APAC markets</li>\n</ul>\n<ul>\n<li>Surface new opportunities for expansion, deeper integration, and long-term strategic collaboration that benefit users and partners</li>\n</ul>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a60d47f5-d9e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/b5dc0d33-b2d8-4150-9e41-2f7a6dfc8930","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$266K – $295K","x-skills-required":["business development","product partnerships","product","technology industry","Mandarin language proficiency","APAC markets","regional market dynamics","distribution channels","consumer behavior","Asia"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:23:01.085Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"business development, product partnerships, product, technology industry, Mandarin language proficiency, APAC markets, regional market dynamics, distribution channels, consumer behavior, Asia","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":266000,"maxValue":295000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_99208b28-226"},"title":"Technical Program Manager, Robotics Data Acquisition","description":"<p>We&#39;re looking for a Technical Program Manager to own and scale the systems that power robotic data acquisition across our development and evaluation environments. 
This role sits at the intersection of Robotics Engineering, Operations, and Infrastructure, ensuring that DAQ stations and associated workflows reliably produce high-quality data for model training and evaluation.</p>\n<p>You will drive end-to-end execution of complex, cross-functional programs that integrate robotic platforms, sensing systems, operator tooling, and data pipelines into cohesive, production-ready systems. Success in this role requires strong systems thinking, operational rigor, and the ability to translate ambiguous research needs into scalable infrastructure.</p>\n<p>In this role you will:</p>\n<ul>\n<li>Own DAQ Program Delivery: Lead the roadmap, execution, and scaling of robotic DAQ systems, ensuring alignment with research, engineering, and operational priorities.</li>\n</ul>\n<ul>\n<li>Drive Cross-Functional Integration: Coordinate across robotics hardware, software, infrastructure, and operations teams to deliver tightly integrated, deployment-ready data collection systems.</li>\n</ul>\n<ul>\n<li>Operationalize Data Collection Systems: Translate experimental and research workflows into repeatable, scalable DAQ processes with clear SLAs, metrics, and reliability targets.</li>\n</ul>\n<ul>\n<li>System Readiness &amp; Deployment: Ensure DAQ stations (robots, sensors, compute, operator interfaces) are fully integrated, validated, and ready for production use across multiple sites.</li>\n</ul>\n<ul>\n<li>Program Execution &amp; Risk Management: Build and manage detailed program plans, identify risks early, and drive mitigation across technical and operational domains.</li>\n</ul>\n<ul>\n<li>Capacity &amp; Throughput Planning: Model and plan DAQ capacity (stations, operators, uptime) to meet evolving data demands, balancing speed, cost, and quality.</li>\n</ul>\n<ul>\n<li>Quality &amp; Data Integrity Oversight: Partner with engineering and data teams to define and enforce data quality standards, ensuring consistency and usability for downstream 
model training.</li>\n</ul>\n<ul>\n<li>Continuous Improvement: Drive improvements in system reliability, utilization, and efficiency through instrumentation, feedback loops, and process optimization.</li>\n</ul>\n<p>You might thrive in this role if you have:</p>\n<ul>\n<li>5+ years of experience in technical program management, systems engineering, or operations in hardware, robotics, or infrastructure-heavy environments</li>\n</ul>\n<ul>\n<li>Strong systems-level thinking across hardware, software, and operational workflows</li>\n</ul>\n<ul>\n<li>Proven ability to lead complex, cross-functional programs with high ambiguity</li>\n</ul>\n<ul>\n<li>Experience scaling physical systems (labs, factories, robotics fleets, or test infrastructure)</li>\n</ul>\n<ul>\n<li>Comfort working close to hardware and debugging real-world system issues</li>\n</ul>\n<ul>\n<li>Strong analytical skills; able to model capacity, throughput, and system performance</li>\n</ul>\n<ul>\n<li>Excellent communication skills with the ability to align diverse stakeholders</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_99208b28-226","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/7eb1ac08-5168-4eff-a96c-90ce0bfd3fde","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$257K – $300K","x-skills-required":["technical program management","systems engineering","operations","hardware","robotics","infrastructure","cross-functional integration","data collection systems","system readiness","deployment","program execution","risk management","capacity planning","throughput planning","quality oversight","continuous 
improvement"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:21:43.720Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"technical program management, systems engineering, operations, hardware, robotics, infrastructure, cross-functional integration, data collection systems, system readiness, deployment, program execution, risk management, capacity planning, throughput planning, quality oversight, continuous improvement","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":257000,"maxValue":300000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ae02d685-c2c"},"title":"GTM Planning Operations","description":"<p><strong>Compensation</strong></p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. 
In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the Team</strong></p>\n<p>OpenAI&#39;s mission is to build safe artificial general intelligence (AGI) that benefits all of humanity. This long-term undertaking brings together the world&#39;s best scientists, engineers, and business professionals. Our Go-To-Market (GTM) organization helps customers understand, adopt, and scale OpenAI&#39;s products and platform. 
Revenue Operations partners closely with Sales, Finance, and HR/People to drive disciplined planning and performance management. The GTM Planning team defines how company strategy translates into targets, segmentation, and coverage models that guide execution in the field, ensuring our GTM approach remains both analytically rigorous and operationally executable as we scale.</p>\n<p><strong>About the Role</strong></p>\n<p>We are hiring a GTM Planning operator to own target setting, segmentation, and coverage design centrally across our GTM organization. This is a high-impact individual contributor role responsible for defining how we set targets, structure GTM teams, and deploy coverage across segments and geographies. You will translate company growth goals into clear, actionable plans for the field and drive consistency and rigor across GTM planning processes. You will partner closely with Sales, Technical Success, and Finance leadership to shape planning decisions, frame tradeoffs, and continuously evolve how we maximize growth and efficiency.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Define and own the GTM region &amp; segment target-setting methodology</li>\n</ul>\n<ul>\n<li>Translate company-level revenue goals into quotas across segments, roles, and region</li>\n</ul>\n<ul>\n<li>Drive tradeoffs across growth, efficiency, and coverage in partnership with Finance and GTM leadership</li>\n</ul>\n<ul>\n<li>Define customer and market segmentation (e.g., enterprise, midmarket, SMB, industry verticals, product motions)</li>\n</ul>\n<ul>\n<li>Continuously refine segmentation based on core KPIs and market dynamics</li>\n</ul>\n<ul>\n<li>Ensure segmentation is consistently applied across planning and territory execution</li>\n</ul>\n<ul>\n<li>Evolve coverage models in partnership with GTM leadership to optimize growth, efficiency, and customer engagement</li>\n</ul>\n<ul>\n<li>Analyze GTM performance drivers (e.g., attainment, segment performance, book 
size, coverage ratios)</li>\n</ul>\n<ul>\n<li>Identify opportunities to improve growth and efficiency</li>\n</ul>\n<ul>\n<li>Translate analysis into clear, actionable recommendations for leadership</li>\n</ul>\n<ul>\n<li>Partner closely with Sales leadership to ensure plans are credible and executable in the field</li>\n</ul>\n<ul>\n<li>Incorporate field feedback into planning iterations</li>\n</ul>\n<ul>\n<li>Deliver clear, executive-ready narratives to support planning decisions</li>\n</ul>\n<ul>\n<li>Drive how targets and segmentation are implemented in systems (e.g., Salesforce, BI tools)</li>\n</ul>\n<ul>\n<li>Partner with data and systems teams to ensure accuracy, scalability, and consistency</li>\n</ul>\n<ul>\n<li>Improve usability and reliability of planning data across GTM workflows</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>7+ years of experience in Sales Strategy, GTM Strategy &amp; Planning, Revenue Operations, or similar roles in a high-growth B2B environment</li>\n</ul>\n<ul>\n<li>Direct experience owning or heavily contributing to target setting, quota development, segmentation, and/or territory design</li>\n</ul>\n<ul>\n<li>Strong understanding of GTM mechanics (quota setting, territory design, book construction, coverage models)</li>\n</ul>\n<ul>\n<li>Deep modeling experience, including capacity modeling and topline / target setting (e.g., financial plans, quotas) to support GTM planning decisions</li>\n</ul>\n<ul>\n<li>Proven ability to translate business goals into structured plans and field-ready targets</li>\n</ul>\n<ul>\n<li>Advanced analytical skills (SQL, spreadsheets, BI tools such as Tableau) and experience with planning platforms (e.g., Anaplan or similar tools)</li>\n</ul>\n<ul>\n<li>Strong communication and stakeholder management skills with the ability to influence senior leaders</li>\n</ul>\n<p><strong>Nice to Have</strong></p>\n<ul>\n<li>Experience in consumption-based or usage-based models (e.g., API, PLG, 
hybrid motions)</li>\n</ul>\n<ul>\n<li>Experience in multi-product or rapidly evolving GTM environments</li>\n</ul>\n<ul>\n<li>Familiarity with AI products and the AI/ML ecosystem</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</p>\n<p>For additional information, please see <a href=\"https://cdn.openai.com/policies/eeo-policy-statement.pdf\">OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement</a>.</p>\n<p>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. 
For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ae02d685-c2c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/bd3a367f-aa91-4b6f-b74e-451fb7fc3151","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$239K – $265K","x-skills-required":["Sales Strategy","GTM Strategy & Planning","Revenue Operations","Target Setting","Segmentation","Coverage Design","Capacity Modeling","Topline / Target Setting","Financial Plans","Quotas","Advanced Analytical Skills","SQL","Spreadsheets","BI Tools","Planning Platforms","Anaplan","Tableau"],"x-skills-preferred":["Consumption-Based Models","Usage-Based Models","API","PLG","Hybrid Motions","Multi-Product Environments","Rapidly Evolving GTM Environments","AI Products","AI/ML Ecosystem"],"datePosted":"2026-04-24T12:21:10.086Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Sales Strategy, GTM Strategy & Planning, Revenue Operations, Target Setting, Segmentation, Coverage Design, Capacity Modeling, Topline / Target 
Setting, Financial Plans, Quotas, Advanced Analytical Skills, SQL, Spreadsheets, BI Tools, Planning Platforms, Anaplan, Tableau, Consumption-Based Models, Usage-Based Models, API, PLG, Hybrid Motions, Multi-Product Environments, Rapidly Evolving GTM Environments, AI Products, AI/ML Ecosystem","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":239000,"maxValue":265000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_eb99c035-971"},"title":"Manager, Data Engineering","description":"<p>We&#39;re looking for a seasoned Data Engineering Manager to lead our team in designing, developing, and maintaining data pipelines that support our Data Hub strategy. As a key member of our Global Data Insight &amp; Analytics team, you&#39;ll be responsible for building and maintaining data assets and services that empower Artificial Intelligence, Data Science, and Software Engineering.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Lead a high-performing team of Portfolio Data Engineers, fostering a culture of collaboration, innovation, and continuous improvement.</li>\n<li>Strategically prioritize and manage team workloads, ensuring effective task allocation and resource capacity to support team goals.</li>\n<li>Provide expert technical guidance and mentorship, ensuring adherence to best practices, coding standards, and architectural guidelines.</li>\n<li>Act as the Chief Data Technical Anchor for the PLMA domain, resolving critical incidents through Root Cause Analysis (RCA) and implementing permanent, resilient architectural fixes.</li>\n<li>Oversee the design, development, maintenance, scalability, reliability, and performance of data platform pipelines, aligning them with business needs and strategic objectives.</li>\n<li>Contribute to the long-term strategic direction of the Data Platform by proactively identifying opportunities for best 
practice adoption and standardization.</li>\n<li>Champion data quality, governance, and security standards, ensuring compliance and safeguarding sensitive data assets.</li>\n<li>Enhance efficiency and reduce redundancy by consolidating common tasks across teams.</li>\n<li>Effectively communicate decisions to stakeholders, building strong relationships and ensuring alignment on data initiatives.</li>\n<li>Maintain awareness of industry trends and emerging technologies to inform technical decisions.</li>\n<li>Lead the implementation of customer requests into data assets, ensuring optimized design and code development.</li>\n<li>Guide the team in delivering scalable, robust data solutions and contribute hands-on to critical projects, including design and code reviews.</li>\n<li>Lead technical decisions that drive data innovation and resilience.</li>\n<li>Demonstrate full stack cloud data engineering expertise, covering automation, versioning, ingestion, integration, transformation, optimization, and data modeling.</li>\n<li>Engage in agile planning, including scope, work breakdown structure, as well as roadblock resolution.</li>\n<li>Design solutions for cost and consumption optimization, scalability, and performance.</li>\n<li>Collaborate with Data Architecture and stakeholders on solution design, data consolidation, retention, purpose of use, compliance, and audit requirements.</li>\n<li>Drive engineering excellence by establishing and monitoring SWE-centric quality metrics (including DORA metrics and P99 latency targets).</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>Bachelor&#39;s degree in Computer Science, Information Technology, Information Systems, Data Analytics, or a related field.</li>\n<li>8+ years of experience in complex data environments, demonstrating increased responsibilities and achievements with:</li>\n</ul>\n<ul>\n<li>Expertise in programming languages such as Python or Scala, and strong SQL skills.</li>\n<li>Experience with ETL/ELT processes, data warehousing, and data modeling.</li>\n<li>Experience with CI/CD pipelines, Docker, Git/Gerrit, and experience designing resilient deployment strategies and sophisticated release management.</li>\n<li>Familiarity with data governance, privacy, quality, and monitoring.</li>\n</ul>\n<ul>\n<li>Proven experience in implementing sophisticated testing strategies, driving quality tool adoption, establishing comprehensive code review processes, and setting observability standards with advanced monitoring and proactive alerting.</li>\n<li>5+ years of experience within the automotive industry or related product development environments and product lifecycle management.</li>\n<li>5+ years of experience in leading software or data engineering teams, with a focus on team development and project success.</li>\n<li>5+ years of experience in Big Data environments or expertise with Big Data tools, including:</li>\n</ul>\n<ul>\n<li>Data processing frameworks and data modeling.</li>\n<li>In-depth knowledge and practical experience with Google Cloud Platform services.</li>\n<li>Proven experience in monitoring and optimizing costs and compute resources in hyperscaler platforms.</li>\n</ul>\n<ul>\n<li>Significant experience leveraging Generative AI and LLMs to optimize data engineering workflows (e.g., automated code generation, documentation, or metadata management).</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Master&#39;s degree in Computer Science, Engineering, or a related field.</li>\n<li>Expertise in GCP based data engineering services like BQ, Dataflow, Airflow, Dataform, Datastream, Apache Beam, Cloud Run, Cloud Functions</li>\n<li>Familiarity with automotive Product Development processes, including program planning, design validation, and cross-functional collaboration across engineering, manufacturing, and supplier teams to deliver data-driven insights at each lifecycle stage</li>\n<li>Experience in managing and scaling serverless applications and clusters, focusing on resource optimization and robust monitoring and logging strategies.</li>\n<li>Proficiency in unstructured data ingestion, including experience with data modeling and preparation techniques to support AI and machine learning workloads.</li>\n<li>Experience with AI architecture and AI enabling tech (graph database, vector database, etc)</li>\n<li>Familiarity with data visualization tools (e.g., Power BI, Tableau).</li>\n<li>Working knowledge of ontology, semantic modeling, and related technologies</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_eb99c035-971","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Ford Motor 
Company","sameAs":"https://corporate.ford.com/","logo":"https://logos.yubhub.co/corporate.ford.com.png"},"x-apply-url":"https://efds.fa.em5.oraclecloud.com/hcmUI/CandidateExperience/en/sites/CX_1/job/62339","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"Competitive salary and benefits package","x-skills-required":["Python","Scala","SQL","ETL/ELT processes","data warehousing","data modeling","CI/CD pipelines","Docker","Git/Gerrit","data governance","privacy","quality","monitoring"],"x-skills-preferred":["Generative AI","LLMs","GCP based data engineering services","BQ","Dataflow","Airflow","Dataform","Datastream","Apache Beam","Cloud Run","Cloud Functions","automotive Product Development processes","program planning","design validation","cross-functional collaboration","data-driven insights","unstructured data ingestion","preparation techniques","AI architecture","AI enabling tech","graph database","vector database","data visualization tools","ontology","semantic modeling"],"datePosted":"2026-04-24T12:19:58.496Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dearborn"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Python, Scala, SQL, ETL/ELT processes, data warehousing, data modeling, CI/CD pipelines, Docker, Git/Gerrit, data governance, privacy, quality, monitoring, Generative AI, LLMs, GCP based data engineering services, BQ, Dataflow, Airflow, Dataform, Datastream, Apache Beam, Cloud Run, Cloud Functions, automotive Product Development processes, program planning, design validation, cross-functional collaboration, data-driven insights, unstructured data ingestion, preparation techniques, AI architecture, AI enabling tech, graph database, vector database, data visualization tools, ontology, semantic 
modeling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1c920de1-7f9"},"title":"Principal Software Engineer","description":"<p>Join Microsoft AI&#39;s Copilot Discover Engineering Team as a Principal Software Engineer, serving as a senior technical architect to play a central role in the technical direction and long-range architecture of Copilot Discover.</p>\n<p>This is a role emphasizing true end-to-end responsibility: setting the architectural vision, shaping the platform for AI-forward discovery experiences, and steering the evolution of product experiences that sit at the heart of how users engage with the intersection of knowledge, content, and personalization on surfaces on which Copilot shows up.</p>\n<p>You will design and drive the systems that power the Copilot Discover feed at scale. You&#39;ll work on foundational platforms that ingest, enrich, rank, personalize, and serve content across web, mobile, and partner surfaces and lead architectural strategy for how we unify signals, models, and data into coherent, trustworthy experiences; modernize our ranking and personalization stack; and build the AI-forward infrastructure that makes Copilot Discover feel intelligent, anticipatory, and personalized for every user.</p>\n<p>The key is an end-to-end focus on outcomes, across a broad technical space. You&#39;ll be expected to influence platform direction across multiple teams and adjacent organizations. You will ensure that the MSN and Copilot Discover systems are robust, scalable, privacy-respecting, and engineered for long-term adaptation.</p>\n<p>High product sense is a success factor – how you drive product and architectural convergence across today&#39;s fragmented surfaces, reduce complexity, and shape a consistent platform model is key to the success of the product and this role.</p>\n<p>Copilot Discover sits at the intersection of content, signals, and user intent. 
Our ambition is to make it a durable, strategic layer that powers intelligent, personalized, and trusted discovery experiences across a broad array of surfaces where Microsoft engages consumers in their journeys.</p>\n<p>If you are passionate about building high-scale, AI-driven systems that combine solid architectural rigor with meaningful user value, this is the role for you.</p>\n<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Own the technical direction for Copilot Discover platforms, setting end-to-end architectural strategy.</p>\n<p>Partner with product, design, data science, and engineering leaders to translate business and user needs into executable architectural plans, well-documented designs, and multi-year roadmaps.</p>\n<p>Set and govern architectural decisions across multiple services and teams, ensuring systems are scalable, secure, reliable, cost-efficient, and grounded in data, telemetry, and operational excellence.</p>\n<p>Raise the technical bar across the organization by establishing falsifiable principles, reviewing critical designs, and helping to develop technical leaders within the team.</p>\n<p>Establish and evolve quality and reliability standards, including test strategies, CI/CD practices, monitoring, alerting, and live-site health.</p>\n<p>Shape the adoption of AI/ML techniques for content understanding, personalization, summarization, and safety, in close collaboration with MAI and partner teams.</p>\n<p>Serve as a cross-org technical leader, aligning MSN architecture with Bing, Copilot, Ads, Privacy, Trust, and other Microsoft 
platforms.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 15+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Experience in ML/AI systems, especially in content understanding, ranking, or personalization.</p>\n<p>Proven experience designing and operating large-scale distributed systems, including data pipelines, microservices, APIs, and storage systems.</p>\n<p>Experience with content platforms, personalization systems, or consumer-facing services at scale.</p>\n<p>Experience with technologies such as Apache Spark, Kafka, columnar storage, data modeling, and schema evolution.</p>\n<p>Demonstrated success as a technical lead or architect, influencing across teams without direct authority.</p>\n<p>Solid understanding of system architecture, performance tuning, telemetry design, and operational excellence.</p>\n<p>Excellent analytical and communication skills, with the ability to clearly articulate complex technical concepts.</p>\n<p>Solid cross-organizational collaboration skills and the ability to influence senior stakeholders.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1c920de1-7f9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-52/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,000 - $296,400 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Spark","Kafka","columnar storage","data modeling","schema evolution"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:19:50.797Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Spark, Kafka, columnar storage, data modeling, schema evolution","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163000,"maxValue":296400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_68a62835-66b"},"title":"Senior DevOps Engineer","description":"<p>We are seeking a highly skilled and self-motivated Senior Embedded DevOps Engineer to support our engineering teams. 
This role will focus on driving changes and ensuring adherence to company-established standards for data infrastructure and CI/CD pipelines.</p>\n<p>The ideal candidate will have strong experience working with AWS and/or GCP, cloud-based data streaming and processing services, containerized application deployments, infrastructure automation, and Site Reliability Engineering (SRE) best practices for performance and cost optimization.</p>\n<p>Key Responsibilities:</p>\n<ul>\n<li>Drive initiatives to implement and enforce best practices for data streaming, processing, analytics, and monitoring infrastructure.</li>\n<li>Deploy and manage services on Kubernetes-based platforms such as Amazon EKS and Google Kubernetes Engine (GKE).</li>\n<li>Provision and manage cloud infrastructure using Terraform, ensuring best practices in security, scalability, and cost-efficiency.</li>\n<li>Maintain and optimize CI/CD pipelines using Jenkins, ArgoCD, and GitHub Enterprise Actions to support automated deployments and testing.</li>\n<li>Work with cloud-native data services such as AWS Kinesis, AWS Glue, Google Dataflow, Google Pub/Sub, BigQuery, and BigTable.</li>\n<li>Leverage workflow orchestration services such as Apache Airflow and Google Cloud Composer.</li>\n<li>Develop and maintain automation scripts and tooling using Python to support DevOps processes.</li>\n<li>Monitor system performance, troubleshoot issues, and implement proactive solutions to enhance reliability and efficiency.</li>\n<li>Implement SRE practices to improve service reliability, scalability, and cost-effectiveness.</li>\n<li>Analyze and optimize cloud costs, identifying areas for improvement and implementing cost-saving strategies.</li>\n<li>Ensure compliance with security policies and best practices in cloud environments.</li>\n<li>Drive adoption of company standards and influence data teams to align with best DevOps and SRE practices.</li>\n<li>Collaborate with cross-functional teams to improve 
development workflows and infrastructure.</li>\n</ul>\n<p>Requirements:</p>\n<ul>\n<li>7+ years of experience in a DevOps, Site Reliability Engineering, or Cloud Infrastructure role.</li>\n<li>Strong experience with AWS and GCP data services, including Kinesis, Glue, Pub/Sub, and Dataflow.</li>\n<li>Proficiency in deploying and managing workloads on Kubernetes (EKS/GKE) in production environments.</li>\n<li>Hands-on experience with Infrastructure-as-Code (IaC) using Terraform.</li>\n<li>Expertise in CI/CD pipeline management using Jenkins, ArgoCD, and GitHub Enterprise Actions.</li>\n<li>Programming skills in Python for automation and scripting.</li>\n<li>Experience with observability and monitoring tools (e.g., Prometheus, Grafana, Datadog, or CloudWatch).</li>\n<li>Strong understanding of SRE principles, including performance monitoring, incident response, and reliability engineering.</li>\n<li>Experience with cost optimization strategies for cloud infrastructure.</li>\n<li>Self-motivated and driven, with a strong ability to influence and drive changes across multiple teams.</li>\n<li>Ability to work collaboratively in an agile environment and support multiple teams.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with data lake architectures and big data processing frameworks (e.g., Apache Spark, Flink, Snowflake, BigQuery).</li>\n<li>Familiarity with event-driven architectures and message queues (e.g., Kafka, RabbitMQ).</li>\n<li>Experience with workflow orchestration tools such as Apache Airflow and Google Cloud Composer.</li>\n<li>Knowledge of service mesh technologies like Istio.</li>\n<li>Experience with GitOps workflows and Kubernetes-native tooling.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_68a62835-66b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"ZoomInfo","sameAs":"https://www.zoominfo.com/","logo":"https://logos.yubhub.co/zoominfo.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/zoominfo/jobs/8496473002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["AWS","GCP","Kubernetes","Terraform","Jenkins","ArgoCD","GitHub Enterprise Actions","Python","Apache Airflow","Google Cloud Composer","CloudWatch","Prometheus","Grafana","Datadog"],"x-skills-preferred":["Data lake architectures","Big data processing frameworks","Event-driven architectures","Message queues","Workflow orchestration tools","Service mesh technologies","GitOps workflows","Kubernetes-native tooling"],"datePosted":"2026-04-24T12:19:32.227Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"AWS, GCP, Kubernetes, Terraform, Jenkins, ArgoCD, GitHub Enterprise Actions, Python, Apache Airflow, Google Cloud Composer, CloudWatch, Prometheus, Grafana, Datadog, Data lake architectures, Big data processing frameworks, Event-driven architectures, Message queues, Workflow orchestration tools, Service mesh technologies, GitOps workflows, Kubernetes-native tooling"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ccace61-279"},"title":"Strategy and Operations, Forward Deployed Engineering (FDE)","description":"<p><strong>Compensation</strong></p>\n<p>$216K – $240K • Offers Equity</p>\n<p>The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. 
If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.</p>\n<ul>\n<li>Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts</li>\n</ul>\n<ul>\n<li>Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)</li>\n</ul>\n<ul>\n<li>401(k) retirement plan with employer match</li>\n</ul>\n<ul>\n<li>Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)</li>\n</ul>\n<ul>\n<li>Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees</li>\n</ul>\n<ul>\n<li>13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)</li>\n</ul>\n<ul>\n<li>Mental health and wellness support</li>\n</ul>\n<ul>\n<li>Employer-paid basic life and disability coverage</li>\n</ul>\n<ul>\n<li>Annual learning and development stipend to fuel your professional growth</li>\n</ul>\n<ul>\n<li>Daily meals in our offices, and meal delivery credits as eligible</li>\n</ul>\n<ul>\n<li>Relocation support for eligible employees</li>\n</ul>\n<ul>\n<li>Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.</li>\n</ul>\n<p><strong>About the team</strong></p>\n<p>OpenAI’s Forward Deployed Engineering (FDE) team partners with customers to turn research breakthroughs into production-grade AI systems. FDE sits at the intersection of Product, Engineering, Research, and GTM. 
We embed deeply with users to solve high-leverage problems and surface patterns that shape our platform. We take frontier capabilities into the real world and translate customer signal into durable solutions, repeatable patterns, and product direction.</p>\n<p><strong>About the role</strong></p>\n<p>We’re hiring a Strategy and Ops Lead to build, run, and evolve the systems that enable the FDE team to execute at scale. This role sits at the core of the team and directly shapes how effectively we deploy frontier AI in the real world. You’ll turn fast-moving signals from the field and the business into clear operational plans, aligning project demand with FDE capacity, driving staffing decisions, and ensuring the portfolio scales predictably.</p>\n<p>You will partner closely with Business, Product, and GTM stakeholders to improve how we prioritize, plan, and coordinate. Rather than leading a single program, you’ll run core operating rhythms for the team, such as portfolio reviews, execution tracking, and quarterly planning, ensuring leaders have clear visibility into risks and delivery progress as the organization scales. 
This is a senior IC role with broad ownership across the FDE operating model.</p>\n<p><strong>In this role you will</strong></p>\n<ul>\n<li>Own FDE capacity planning, translating pipeline and active project demand into hiring forecasts.</li>\n</ul>\n<ul>\n<li>Run the operating rhythm across portfolio reviews and quarterly planning, ensuring leaders have visibility into priorities, risk, dependencies, and the decisions needed to keep execution moving.</li>\n</ul>\n<ul>\n<li>Determine how customer engagements should be staffed across FDE and partner channels, working with GTM and FDE leadership to make explicit tradeoff calls based on scope, strategic value, and capacity constraints.</li>\n</ul>\n<ul>\n<li>Codify and evolve the FDE operating model so each subsequent deployment becomes easier to scope and deliver.</li>\n</ul>\n<ul>\n<li>Identify and resolve emerging operational bottlenecks as FDE scales, implementing lightweight systems that improve execution without adding unnecessary overhead.</li>\n</ul>\n<p><strong>You might thrive in this role if you</strong></p>\n<ul>\n<li>Bring 6+ years in technical program management, engineering operations, business operations, or similar operator roles supporting technical teams in fast-paced, high-ambiguity environments.</li>\n</ul>\n<ul>\n<li>Have built 0→1 operating mechanisms that scaled a technical team through rapid growth.</li>\n</ul>\n<ul>\n<li>Bring alignment to conflicting priorities and resource tradeoffs, driving teams toward measurable outcomes at pace.</li>\n</ul>\n<ul>\n<li>Break down ambiguous operational challenges into clear workstreams, anticipate risks early, and make sound decisions under pressure while balancing speed with long-term system health.</li>\n</ul>\n<ul>\n<li>Communicate clearly across engineering, product, GTM, and executive audiences, simplifying complexity and translating tradeoffs into actionable decisions.</li>\n</ul>\n<ul>\n<li>Influence senior leaders without formal authority, aligning 
teams with different incentives around clear, shared outcomes.</li>\n</ul>\n<p><strong>About OpenAI</strong></p>\n<p>OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.</p>\n<p><strong>Benefits</strong></p>\n<p><strong>We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.</strong></p>\n<p><strong>Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. 
In addition, job duties require access to secure and protected information technology systems and related data security obligations.</strong></p>\n<p><strong>To notify OpenAI that you believe this job posting is non-compliant, please submit a report through [this form](https://form.asana.com/?d=57018692298241&amp;k=5MqR40fZd7jlxVUh5J-UeA). No response will be provided to inquiries unrelated to job posting compliance.</strong></p>\n<p><strong>We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this [link](https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&amp;d=57018692298241).</strong></p>\n<p><strong>[OpenAI Global Applicant Privacy Policy](https://cdn.openai.com/policies/global-employee-an</strong></p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ccace61-279","directApply":true,"hiringOrganization":{"@type":"Organization","name":"OpenAI","sameAs":"https://openai.com/","logo":"https://logos.yubhub.co/openai.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/openai/976939e9-e072-4a24-abdb-84cf29a564c6","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"Full time","x-salary-range":"$216K – $240K • Offers Equity","x-skills-required":["technical program management","engineering operations","business operations","similar operator roles","fast-paced","high-ambiguity environments","capacity planning","pipeline and active project demand","hiring forecasts","portfolio reviews","quarterly planning","execution tracking","operating rhythm","leadership","GTM","FDE","operating model","staffing decisions","project demand","FDE capacity","customer engagements","partner channels","scope","strategic value","capacity constraints","operational bottlenecks","lightweight systems","execution","communication","complexity","tradeoffs","actionable decisions","influence","senior 
leaders","shared outcomes","AI research","deployment","general-purpose artificial intelligence","human needs","safety","data security","information technology systems","data security obligations"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:19:06.022Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"technical program management, engineering operations, business operations, similar operator roles, fast-paced, high-ambiguity environments, capacity planning, pipeline and active project demand, hiring forecasts, portfolio reviews, quarterly planning, execution tracking, operating rhythm, leadership, GTM, FDE, operating model, staffing decisions, project demand, FDE capacity, customer engagements, partner channels, scope, strategic value, capacity constraints, operational bottlenecks, lightweight systems, execution, communication, complexity, tradeoffs, actionable decisions, influence, senior leaders, shared outcomes, AI research, deployment, general-purpose artificial intelligence, human needs, safety, data security, information technology systems, data security obligations","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":216000,"maxValue":240000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_653bca90-18d"},"title":"Engineering Manager, Organizations (Auth0)","description":"<p>We are looking for an experienced Engineering Manager to lead our Organizations team. As an Engineering Manager, you will be responsible for managing a team of 9 remote engineers, mentoring and coaching them to achieve their goals. You will work closely with the Product Manager to plan and deliver the team&#39;s quarterly and annual roadmap. 
You will also be responsible for owning and being accountable for the quality of the team&#39;s technical estate, effectively managing technical debt, addressing security vulnerabilities, and ensuring wider cross-team technical initiatives are delivered in a timely manner.</p>\n<p>The ideal candidate will have experience growing engineers to the next level, bringing off-track engineers back on track, and working on projects that require close collaboration with external teams. They will also have solid architectural knowledge, backed by experience in designing, implementing, and evolving complex distributed systems.</p>\n<p>In particular, you will be able to spot areas where scalability and performance might be affected. You will know how to track and steer a project to successful and timely delivery. Experience in authentication protocols such as OAuth2, OIDC, SAML, and understanding of event-driven architectures, especially Apache Kafka, is a plus.</p>\n<p>As an Engineering Manager at Okta, you will have the opportunity to work on a wide range of challenging projects, collaborate with a talented team of engineers, and contribute to the growth and success of the company.</p>\n<p>If you are a motivated and experienced engineer looking for a new challenge, we encourage you to apply for this exciting opportunity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_653bca90-18d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Okta","sameAs":"https://www.okta.com","logo":"https://logos.yubhub.co/okta.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/okta/jobs/7843717","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$168,000-$231,000 CAD","x-skills-required":["NodeJS","JavaScript","TypeScript","PostgreSQL","AWS","Azure","Containers","Authentication 
protocols","Event-driven architectures"],"x-skills-preferred":["OAuth2","OIDC","SAML","Apache Kafka"],"datePosted":"2026-04-24T12:18:53.914Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Toronto, Ontario, Canada"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"NodeJS, JavaScript, TypeScript, PostgreSQL, AWS, Azure, Containers, Authentication protocols, Event-driven architectures, OAuth2, OIDC, SAML, Apache Kafka","baseSalary":{"@type":"MonetaryAmount","currency":"CAD","value":{"@type":"QuantitativeValue","minValue":168000,"maxValue":231000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_639596c3-530"},"title":"Staff Technical Program Manager, Infrastructure FinOps","description":"<p>About Pinterest:</p>\n<p>Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime.</p>\n<p>At Pinterest, we&#39;re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.</p>\n<p>Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other&#39;s unique experiences and embrace the flexibility to do your best work.</p>\n<p>Creating a career you love? 
It&#39;s Possible.</p>\n<p>At Pinterest, AI isn&#39;t just a feature, it&#39;s a powerful partner that augments our creativity and amplifies our impact, and we&#39;re looking for candidates who are excited to be a part of that.</p>\n<p>To get a complete picture of your experience and abilities, we&#39;ll explore your foundational skills and how you collaborate with AI.</p>\n<p>Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think.</p>\n<p>You can read more about our AI interview philosophy and how we use AI in our recruiting process here.</p>\n<p>The Team:</p>\n<p>Pinterest is a visual discovery platform where people go from inspiration to action; the Infra Governance space plays a foundational role: it helps ensure Pinterest&#39;s infrastructure strategy can scale with product demand while remaining disciplined on cost, capacity, and long-term platform health.</p>\n<p>What is especially exciting about this team is the mix of technical strategy, planning rigor, and business impact: it sits close to infrastructure and platform engineering, close to Product execution, and close to the investment decisions that determine where Pinterest scales next.</p>\n<p>It is also an unusually strong fit for a senior IC TPM who wants to build governance mechanisms, improve transparency into where infrastructure and AI investment is going, and redesign high-toil operating workflows with AI-first methods.</p>\n<p>The TPM interview framework reinforces that this kind of role is about ambiguity handling, XFN influence, technical judgment, and mechanism-building at scale.</p>\n<p>What you&#39;ll do:</p>\n<p>Partner with Product and Engineering to translate infrastructure strategy into executable multi-quarter programs, with clear scope, sequencing, dependencies, governance forums, and measurable outcomes across the Infra Governance portfolio.</p>\n<p>Lead capacity planning as a core operating 
motion: connect product demand, infrastructure supply, growth assumptions, and technical constraints into a durable planning cadence that enables earlier, better investment decisions.</p>\n<p>Own budgeting, forecasting, and variance analysis for infrastructure governance programs, building clear visibility into where investment is going today, where it is expected to go next, and what is driving movement versus plan.</p>\n<p>Develop financial models, cost models, and scenario-planning frameworks that translate technical choices into business impact and support explicit trade-off discussions across capacity, cost, reliability, performance, and speed.</p>\n<p>Build an investment transparency matrix for infrastructure and AI spend, including ownership, allocation, utilization, forecast, actuals, and decision points, so leadership can quickly understand where resources are being consumed and where intervention is needed.</p>\n<p>Establish governance for AI token utilization and allocation, including forecasting demand, tracking usage, improving transparency, and helping teams understand the financial implications of model and token consumption choices.</p>\n<p>Operationalize durable mechanisms for budgeting reviews, forecast updates, variance investigations, anomaly management, and optimization follow-through in partnership with Product, Engineering, Finance, Security, and Data.</p>\n<p>Lead trade-off discussions that improve decision quality: synthesize technical inputs, business priorities, and financial signals into clear options, recommendations, and escalation-ready decisions.</p>\n<p>Build the governance layer for execution: run portfolio reviews, dependency management, roadmap health reviews, planning checkpoints, and executive updates that keep large cross-functional programs aligned and on track.</p>\n<p>Identify high-leverage opportunities to automate operational toil, especially in recurring workflows like status synthesis, planning reviews, dependency 
tracking, intake triage, forecast reconciliation, and action/decision capture.</p>\n<p>Build AI-assisted frameworks, lightweight models, and workflow automations that improve decision-making, including dashboards, scenario tools, cost-model templates, token-usage views, and planning helpers that increase signal while reducing manual overhead.</p>\n<p>Use GenAI as the default operating model for EP PgM execution: producing AI-assisted first drafts of core program artifacts, modernizing high-toil workflows into AI-first mechanisms (e.g., intake triage, status synthesis, action/decision extraction, risk &amp; dependency tracking), and synthesizing signals to proactively surface risks, decisions/trade-offs, and escalation paths.</p>\n<p>Prototype solutions to augment decisions through data (e.g. dashboards, data analysis) or simplify processes (e.g. process and workflow helpers, or internal tools) using AI coding assistants (“vibe coding”).</p>\n<p>Follow Pinterest AI guidance for risk, governance, and safety-by-design: appropriately handle sensitive data, validate AI-generated outputs, document assumptions/limits, and ensure AI-assisted workflows meet applicable policy/compliance expectations before broad adoption.</p>\n<p>What we’re looking for:</p>\n<p>8+ years of technical leadership experience owning large, ambiguous, cross-functional programs with senior stakeholder visibility and durable business impact.</p>\n<p>Strong experience in infrastructure, platform, cloud economics, or adjacent environments where capacity, cost, and performance must be managed together.</p>\n<p>Proven ability to translate strategy into executable programs: turning broad technical goals into roadmaps, governance mechanisms, planning cadences, and measurable outcomes.</p>\n<p>Demonstrated strength in capacity planning, budgeting, forecasting, and variance analysis in environments with meaningful scale and complexity.</p>\n<p>Strong financial modeling and cost modeling skills: able to build simple, 
credible models, pressure-test assumptions, and use data to guide prioritization and trade-off discussions.</p>\n<p>Experience building transparency mechanisms for investment and utilization, including understanding where infrastructure or AI spend is going, where it is expected to go, and how ownership and allocation should work.</p>\n<p>Strong technical breadth and judgment: able to partner effectively with engineering teams, drive clarity around requirements, and facilitate technical trade-offs without relying on coding depth;</p>\n<p>Excellent cross-functional collaboration and executive communication: able to align teams with conflicting priorities, communicate clearly upward, and influence without direct authority;</p>\n<p>Experience building durable mechanisms rather than just managing tasks: governance processes, planning systems, dashboards, review cadences, and decision frameworks that scale across teams.</p>\n<p>Workflow design, AI fluency, data &amp; insights orientation: experience turning repeatable program work into durable, low-toil mechanisms and improving decision-making by using GenAI (e.g., strong prompting, vibe coding lightweight scripts/tools, dashboards, data analysis and leveraging agents where appropriate)</p>\n<p>Safety-by-design AI fluency: experience operating within AI governance expectations (risk assessment, data handling, and validation).</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_639596c3-530","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Pinterest","sameAs":"https://www.pinterest.com/","logo":"https://logos.yubhub.co/pinterest.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/pinterest/jobs/7770921","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Technical leadership","Infrastructure 
strategy","Capacity planning","Budgeting","Forecasting","Financial modeling","Cost modeling","Scenario planning","Investment transparency","AI token utilization","Governance","Program management","Cross-functional collaboration","Executive communication","Workflow design","AI fluency","Data insights","Safety-by-design"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:18:29.205Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA, US; Remote, US"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical leadership, Infrastructure strategy, Capacity planning, Budgeting, Forecasting, Financial modeling, Cost modeling, Scenario planning, Investment transparency, AI token utilization, Governance, Program management, Cross-functional collaboration, Executive communication, Workflow design, AI fluency, Data insights, Safety-by-design"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7cee676b-646"},"title":"Staff MLE","description":"<p>The Personalization team makes deciding what to play next easier and more enjoyable for every listener. From Blend to Discover Weekly, we’re behind some of Spotify’s most-loved features. We built them by understanding the world of music and podcasts better than anyone else.</p>\n<p>We are looking for a Staff MLE to join Surfaces Podcasts. The Surfaces Podcasts team builds the systems that power podcast recommendations across some of Spotify’s most visible experiences, including Home and the Now Playing view. 
We work across candidate generation, ranking, and embedding models to help listeners discover their favorite new podcast and engage deeply with their favorite shows.</p>\n<p>We’re also shaping the next generation of personalization through transformer-based models that bring more dynamic, context-aware recommendations to millions of listeners. You’ll collaborate closely with teams across Personalization, Experience, and the Podcast Mission to evolve podcast listening across Spotify.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Contribute to designing, scaling/building, evaluating, integrating, shipping, and refining reward signals for recommendations by hands-on ML development</li>\n</ul>\n<ul>\n<li>Promote and role-model best practices of ML systems development, testing, evaluation, etc., both inside the team as well as throughout the organization.</li>\n</ul>\n<ul>\n<li>Lead collaborations and align across PZN to integrate and A/B test mid-term signals in various recommendation systems</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>You have a strong background in machine learning, enjoy applying theory to develop real-world applications, with expertise in statistics and optimization, especially in sequential models, transformers, generative AI and large language models, and relevant fine-tuning processes.</li>\n</ul>\n<ul>\n<li>You have hands-on experience with large cross-collaborative machine learning projects and managing stakeholders.</li>\n</ul>\n<ul>\n<li>You have hands-on experience implementing production machine learning systems at scale in Java, Scala, Python, or similar languages. 
Experience with PyTorch, Ray, Hugging Face and related tools is required.</li>\n</ul>\n<ul>\n<li>You have some experience with large scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS.</li>\n</ul>\n<ul>\n<li>You care about agile software processes, data-driven development, reliability, and disciplined experimentation.</li>\n</ul>\n<p><strong>Where You’ll Be</strong></p>\n<ul>\n<li>We offer you the flexibility to work where you work best! For this role, you can be within North America as long as we have a work location.</li>\n</ul>\n<ul>\n<li>This team operates within the Eastern Standard time zone for collaboration</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<p>The United States base range for this position is $227,495-$324,993 plus equity. The benefits available for this position include health insurance, six month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, paid sick leave.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7cee676b-646","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/3f816a31-2336-4e29-a5bf-6b147c604c2f","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$227,495-$324,993","x-skills-required":["machine learning","statistics","optimization","sequential models","transformers","generative AI","large language models","Java","Scala","Python","PyTorch","Ray","Hugging Face","Apache Beam","Apache 
Spark","Scio","GCP","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:17:41.574Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"North America"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, statistics, optimization, sequential models, transformers, generative AI, large language models, Java, Scala, Python, PyTorch, Ray, Hugging Face, Apache Beam, Apache Spark, Scio, GCP, AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227495,"maxValue":324993,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_c0c30c21-9ae"},"title":"Staff Software Engineer, Data Engineering","description":"<p>You&#39;ll own Gamma&#39;s data infrastructure and architecture as we scale to hundreds of millions of users and petabytes of data. This means defining the technical strategy for our end-to-end event pipeline architecture, designing distributed systems that handle massive scale with reliability, and establishing the foundation for how data flows through Gamma.</p>\n<p>As a Staff Data Engineer, you&#39;ll balance hands-on engineering with technical leadership. You&#39;ll architect solutions for orders of magnitude growth, mentor engineers across the organization, and drive strategic decisions about our data stack. You&#39;ll work closely with analytics, product, and engineering leadership to enable data-driven decision making at scale while building systems that serve millions of users and inform critical business decisions.</p>\n<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. 
We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Own and evolve our end-to-end event pipeline architecture, from Kafka ingestion through Snowflake analytics, setting technical direction for data infrastructure</li>\n<li>Design and architect distributed data systems that scale to orders of magnitude more data volume while maintaining world-class query performance</li>\n<li>Lead initiatives to build and optimize CDC (change data capture) pipelines and streaming data transformations at massive scale</li>\n<li>Establish best practices for data quality, pipeline reliability, and system observability across the organization</li>\n<li>Drive strategic technical decisions about data modeling, infrastructure architecture, and technology choices</li>\n<li>Mentor engineers and elevate data engineering practices across analytics, product, and engineering teams</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>10+ years as a data or software engineer with deep expertise in distributed systems, data infrastructure, and high-growth SaaS products at massive scale</li>\n<li>Expert-level knowledge of Apache Kafka (producers, consumers, Kafka Connect, stream processing) and event streaming platforms</li>\n<li>Extensive hands-on experience with Snowflake, including performance optimization, cost management, and data modeling; strong foundation in Postgres, CDC patterns, and replication strategies</li>\n<li>Proven track record architecting and leading major data infrastructure initiatives through orders-of-magnitude growth</li>\n<li>Experience establishing best practices and driving technical strategy across organizations</li>\n<li>Strong communication skills with a history of influencing technical direction across engineering, analytics, and leadership</li>\n<li>Proficiency with dbt, Terraform, and working knowledge of data governance, privacy 
compliance (GDPR, CCPA), and security best practices</li>\n</ul>\n<p><strong>Compensation Range</strong></p>\n<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, ranges between $230K - $310K plus benefits &amp; equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_c0c30c21-9ae","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gamma","sameAs":"https://gamma.com","logo":"https://logos.yubhub.co/gamma.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/gamma/4b2c97d1-b12b-46b7-9e24-1fcd248e28a3","x-work-arrangement":"onsite","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$230K - $310K","x-skills-required":["Apache Kafka","Snowflake","Postgres","dbt","Terraform","data governance","privacy compliance","security best practices"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:17:12.124Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Kafka, Snowflake, Postgres, dbt, Terraform, data governance, privacy compliance, security best practices","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ccc144a8-284"},"title":"Machine Learning Engineer","description":"<p>The Personalization team makes deciding what to play next easier and more enjoyable for every listener. We&#39;re behind some of Spotify&#39;s most-loved features, such as Blend and Discover Weekly. 
We built them by understanding the world of music and podcasts better than anyone else.</p>\n<p>We are looking for a Machine Learning Engineer to join the Personalization team. As an integral part of the squad, you will collaborate with research scientists, data scientists and other engineers across PZN in prototyping and productizing state-of-the-art ML at the intersection of recommendations and long-term user satisfaction.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Contribute to designing, scaling/building, evaluating, integrating, shipping, and refining reward signals for recommendations by hands-on ML development</li>\n<li>Promote and role-model best practices of ML systems development, testing, evaluation, etc., both inside the team as well as throughout the organization</li>\n<li>Lead collaborations and align across PZN to integrate and A/B test mid-term signals in various recommendation systems</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong background in machine learning, with expertise in statistics and optimization, especially in sequential models, transformers, generative AI and large language models, and relevant fine-tuning processes</li>\n<li>Hands-on experience with large cross-collaborative machine learning projects and managing stakeholders</li>\n<li>Hands-on experience implementing production machine learning systems at scale in Java, Scala, Python, or similar languages. Experience with PyTorch, Ray, Hugging Face and related tools is required</li>\n<li>Some experience with large scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS</li>\n<li>Care about agile software processes, data-driven development, reliability, and disciplined experimentation</li>\n</ul>\n<p><strong>Where You&#39;ll Be</strong></p>\n<ul>\n<li>We offer you the flexibility to work where you work best! 
For this role, you can be within the North America and EMEA regions as long as we have a work location</li>\n<li>This team operates within the Eastern Standard Time zone for collaboration</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>The United States base range for this position is $227,495-$324,993, plus equity. The benefits available for this position include health insurance, six-month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, and paid sick leave.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ccc144a8-284","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/f3616bfc-a2bb-4847-90e1-0437b8a1c054","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$227,495-$324,993","x-skills-required":["machine learning","statistics","optimization","sequential models","transformers","generative AI","large language models","PyTorch","Ray","Hugging Face","Apache Beam","Apache Spark","Scio","GCP","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:16:59.999Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"EMEA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, statistics, optimization, sequential models, transformers, generative AI, large language models, PyTorch, Ray, Hugging Face, Apache Beam, Apache Spark, Scio, GCP, 
AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227495,"maxValue":324993,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cecd01f7-106"},"title":"Machine Learning Engineer","description":"<p>The Personalization team makes deciding what to play next easier and more enjoyable for every listener. We&#39;re behind some of Spotify&#39;s most-loved features, such as Blend and Discover Weekly. We built them by understanding the world of music and podcasts better than anyone else.</p>\n<p>We are looking for a Machine Learning Engineer to join the Personalization team. As an integral part of the squad, you will collaborate with research scientists, data scientists and other engineers across PZN in prototyping and productizing state-of-the-art ML at the intersection of recommendations and long-term user satisfaction.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Contribute to designing, scaling/building, evaluating, integrating, shipping, and refining reward signals for recommendations by hands-on ML development</li>\n<li>Promote and role-model best practices of ML systems development, testing, evaluation, etc., both inside the team as well as throughout the organization</li>\n<li>Lead collaborations and align across PZN to integrate and A/B test mid-term signals in various recommendation systems</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Strong background in machine learning, with expertise in statistics and optimization, especially in sequential models, transformers, generative AI and large language models, and relevant fine-tuning processes</li>\n<li>Hands-on experience with large cross-collaborative machine learning projects and managing stakeholders</li>\n<li>Hands-on experience implementing production machine learning systems at scale in Java, Scala, Python, or similar languages. 
Experience with PyTorch, Ray, Hugging Face and related tools is required</li>\n<li>Some experience with large scale, distributed data processing frameworks/tools like Apache Beam, Apache Spark, or even our open source API for it - Scio, and cloud platforms like GCP or AWS</li>\n<li>Care about agile software processes, data-driven development, reliability, and disciplined experimentation</li>\n</ul>\n<p><strong>Where You&#39;ll Be</strong></p>\n<ul>\n<li>We offer you the flexibility to work where you work best! For this role, you can be within the North America and EMEA regions as long as we have a work location</li>\n<li>This team operates within the Eastern Standard Time zone for collaboration</li>\n</ul>\n<p><strong>Additional Information</strong></p>\n<p>The United States base range for this position is $227,495-$324,993, plus equity. The benefits available for this position include health insurance, six-month paid parental leave, 401(k) retirement plan, monthly meal allowance, 23 paid days off, 13 paid flexible holidays, and paid sick leave.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cecd01f7-106","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/736f1827-6b26-4b3b-b8d8-1d754296e033","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$227,495-$324,993","x-skills-required":["machine learning","statistics","optimization","sequential models","transformers","generative AI","large language models","PyTorch","Ray","Hugging Face","Apache Beam","Apache 
Spark","Scio","GCP","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:16:51.109Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"EMEA"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, statistics, optimization, sequential models, transformers, generative AI, large language models, PyTorch, Ray, Hugging Face, Apache Beam, Apache Spark, Scio, GCP, AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":227495,"maxValue":324993,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3513ac8f-9c4"},"title":"Staff Software Engineer, PostgreSQL","description":"<p>You&#39;ll own Gamma&#39;s PostgreSQL infrastructure as we scale from 70 million users to hundreds of millions, and from terabytes of data to hundreds of terabytes. Your job is to make sure our database can handle orders of magnitude more usage without compromising performance.</p>\n<p>This is a deeply technical, hands-on role. You&#39;ll read and write code daily, dig into low-level systems, debug complex issues across massive datasets, and work on both core database scaling projects and application features. You&#39;ll collaborate closely with backend engineers, data engineers, and infrastructure teams to ensure our database architecture keeps pace with Gamma&#39;s growth.</p>\n<p>Our team has a strong in-office culture and works in person 4–5 days per week in San Francisco. 
We love working together to stay creative and connected, with flexibility to work from home when focus matters most.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Architect and implement solutions for horizontally scaling PostgreSQL to hundreds of millions of users and hundreds of terabytes of data</li>\n<li>Own database performance, availability, and reliability as usage grows by orders of magnitude</li>\n<li>Debug complex issues across very large datasets and optimize query performance at scale</li>\n<li>Establish best practices for database design, query optimization, and data modeling across engineering</li>\n<li>Work across core infrastructure and application features that depend on database architecture</li>\n<li>Collaborate with backend, data, and infrastructure engineers to align database strategy with product needs</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>10+ years of software engineering experience with deep expertise in large-scale relational database systems, including hands-on experience managing hundreds of terabytes of data in production</li>\n<li>Expert-level understanding of PostgreSQL (or comparable relational databases), horizontal scaling techniques such as sharding and partitioning, and complex query tuning</li>\n<li>Strong programming skills in at least one backend language, with experience writing and maintaining highly available web APIs</li>\n<li>Experience with large-scale event streaming systems, preferably Apache Kafka</li>\n<li>Ability to explain complex technical concepts clearly to engineers across teams</li>\n<li>Familiarity with TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, or AI/LLM tooling (Nice to have)</li>\n</ul>\n<p><strong>Compensation</strong></p>\n<p>The base salary for this full-time position, which spans multiple internal levels depending on qualifications, 
ranges from $230K to $310K, plus benefits &amp; equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3513ac8f-9c4","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Gamma","sameAs":"https://gamma.com","logo":"https://logos.yubhub.co/gamma.com.png"},"x-apply-url":"https://jobs.ashbyhq.com/gamma/f672c729-457f-4143-80e9-363ddf8a0870","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"Full time","x-salary-range":"$230K - $310K","x-skills-required":["PostgreSQL","horizontal scaling","sharding","partitioning","complex query tuning","backend language","web APIs","Apache Kafka"],"x-skills-preferred":["TypeScript","Prisma","Apollo GraphQL","Terraform","AWS","AI/LLM tooling"],"datePosted":"2026-04-24T12:16:45.597Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"PostgreSQL, horizontal scaling, sharding, partitioning, complex query tuning, backend language, web APIs, Apache Kafka, TypeScript, Prisma, Apollo GraphQL, Terraform, AWS, AI/LLM tooling","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":230000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_f781935e-b81"},"title":"Member of Technical Staff, Full Stack - ML Efficiency & Observability - MAI Superintelligence Team","description":"<p>Microsoft AI is looking for a Member of Technical Staff – Full Stack Engineer, ML Efficiency &amp; Observability to help us efficiently manage our compute capacity. 
We&#39;re looking for someone who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective. The right candidate enjoys building world-class consumer experiences and products in a fast-paced environment. You will wear multiple hats and work on engineering, research, and everything in between. Your contributions will span capacity, efficiency, data architecture, training and inference infrastructures, and many other exciting topics at the cutting edge of AI.</p>\n<p>Microsoft AI is building foundational models to develop novel, responsible, and efficient artificial general intelligence. The foundational models require large compute capacity, and as a Senior Engineer – Full Stack, ML Efficiency &amp; Observability you will be responsible for building a world-class user experience for our executives as well as our ML researchers. You’ll work closely with research and framework teams to turn their requirements into intuitive experiences that lead to efficiency improvements. As a contributing member of the core group of engineers, you would also bring best practices to the table, driving architectural changes and influencing the roadmap of relevant software components.</p>\n<p>The Microsoft Superintelligence Team (MAIST) is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control. We aim to deliver breakthroughs that benefit society, advancing science, education, and global well-being. 
We’re also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact.</p>\n<p>If you’re a brilliant, highly ambitious, low-ego individual, you’ll fit right in. Come join us as we work on our next generation of models!</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Applicants for this Mountain View, CA position are required to be local to the San Francisco area and in the office 4 days a week.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop features for our capacity management portal</li>\n<li>Design and develop features to provide visibility into model performance and quality across our fleet</li>\n<li>Partner with ML researchers and PMs to translate functional requirements into highly functional, intuitive, and appealing interfaces</li>\n<li>Integrate with backend APIs from schedulers to training frameworks to build visibility across the training life cycle</li>\n<li>Explore, develop, and adapt new innovations to the software development process</li>\n<li>Contribute to the development of internal tooling and infrastructure</li>\n<li>Implement best software development practices to ensure code quality. Hold a high quality bar. 
Embody our culture and values.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>4+ years experience in business analytics, data science, software development, data modeling or data engineering work</li>\n<li>Experience with Capacity Management, Efficiency Management, ML Training and/or Inference</li>\n<li>Solid expertise in JavaScript / TypeScript, React, HTML, CSS and browser internals</li>\n<li>Solid understanding of web performance, accessibility, and cross-browser compatibility</li>\n<li>Experience with Development &amp; Debugging with dev environments like Visual Studio or Visual Studio Code</li>\n<li>Software development experience with Generative AI tools</li>\n<li>Experience in leading technical projects and supporting architectural decisions with data</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_f781935e-b81","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-full-stack-ml-efficiency-observability-mai-superintelligence-team/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"Full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["JavaScript","TypeScript","React","HTML","CSS","Browser Internals","Web Performance","Accessibility","Cross-Browser Compatibility","Development & Debugging","Generative AI Tools","Capacity Management","Efficiency Management","ML 
Training","Inference"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:16:15.411Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"JavaScript, TypeScript, React, HTML, CSS, Browser Internals, Web Performance, Accessibility, Cross-Browser Compatibility, Development & Debugging, Generative AI Tools, Capacity Management, Efficiency Management, ML Training, Inference","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_74402ab5-601"},"title":"Senior Machine Learning Engineer - Ads R&D","description":"<p>Our mission on the Advertising Product &amp; Technology team is to build a next-generation advertising platform that aligns with our unique value proposition for audio and video. We work to scale the user experience for hundreds of millions of fans and hundreds of thousands of advertisers. This scale brings unique challenges as well as tremendous opportunities for our artists and creators.</p>\n<p>We are seeking a Senior Machine Learning Engineer to join the Supply Personalization squad. Supply Personalization focuses on optimizing the volume, timing, and types of ad loads a user receives. By leveraging data, machine learning, causal inference, and large-scale online experimentation, we aim to uncover and learn the most effective strategies for enhancing user experiences and driving business outcomes.</p>\n<p>As a Senior Machine Learning Engineer, you will design and implement machine learning systems for ad performance optimization. You will research and apply ML optimization strategies to balance multiple objectives effectively. 
You will analyze data and use machine learning techniques to understand user behavior and improve ad experiences. You will collaborate with backend engineers, data scientists, data engineers, and product managers to establish baselines, inform product decisions, and develop new technologies.</p>\n<p>The ideal candidate will have professional experience in applied machine learning. They will have strong technical expertise in software engineering, data analysis, and machine learning. They will be proficient in programming languages such as Python, Java, or Scala. They will have experience with TensorFlow or PyTorch and working with various aspects of the ML lifecycle. They will also have expertise in developing data pipelines using tools like Apache Beam or Spark.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_74402ab5-601","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/6236f25f-f9cc-47c2-af7b-4ace57332eeb","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"permanent","x-salary-range":"$184,050.00 - $262,928.00","x-skills-required":["machine learning","software engineering","data analysis","Python","Java","Scala","TensorFlow","PyTorch","Apache Beam","Spark"],"x-skills-preferred":["LLMs","Ray","Adtech","Recommender Systems"],"datePosted":"2026-04-24T12:16:04.900Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"jobLocationType":"TELECOMMUTE","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, software engineering, data analysis, Python, Java, Scala, TensorFlow, PyTorch, Apache Beam, Spark, LLMs, Ray, Adtech, Recommender 
Systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":184050,"maxValue":262928,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fdb57476-4a9"},"title":"Backend Engineer","description":"<p>The Personalization team makes deciding what to play next easier and more enjoyable for every listener. From Blend to Discover Weekly, we&#39;re behind some of Spotify&#39;s most-loved features. We built them by understanding the world of music and podcasts better than anyone else.</p>\n<p>Join us and you&#39;ll keep millions of users listening by making great recommendations to each and every one of them.</p>\n<p>You&#39;ll join a team working at the intersection of backend engineering, music understanding, and user experience. We focus on building the backend systems that power agentic music fulfilment products, from conversational playlist generation to adaptive listening experiences that give users more intuitive control over what they listen to.</p>\n<p>This team collaborates closely with product, design, user research, data science, and machine learning to build personalized, high-impact features used by hundreds of millions of listeners worldwide.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and ship backend services that power LLM-based music fulfilment experiences, giving users more adaptive control over their listening</li>\n<li>Build and maintain the APIs and distributed systems behind prompted playlist experiences, session generation, and agentic music products</li>\n<li>Collaborate with cross-functional partners across user research, design, data science, product, and ML engineering to build new product features that connect artists and fans in personalized and meaningful ways</li>\n<li>Be a technical leader and valued contributor in an autonomous, 
cross-functional agile team</li>\n<li>Prototype new approaches and productionize solutions at scale for hundreds of millions of active users</li>\n<li>Contribute to the Spotify-wide backend developer community, affecting and driving architecture across the company</li>\n<li>Promote best practices in backend system design, testing, and deployment across the organization</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>You are an experienced backend engineer who enjoys solving complex real-world problems in a fast-paced, collaborative environment</li>\n<li>You have experience working directly with stakeholders to understand, document, and develop APIs and systems to meet their requirements, driving increased adoption and reducing reliance on custom one-off implementations</li>\n<li>You have experience writing distributed, high-volume services and know how to deploy and keep them running in production</li>\n<li>You have a deep understanding of system design, data structures, and algorithms</li>\n<li>You are comfortable working with LLM-based systems and building the backend infrastructure that supports them</li>\n<li>You have experience with large-scale distributed data processing tools such as Apache Beam or Apache Spark</li>\n<li>You have worked with cloud platforms like GCP or AWS</li>\n<li>You love working in an environment where you constantly experiment and iterate quickly</li>\n<li>You believe data is the most powerful tool for informed decision-making</li>\n<li>You care about quality and you know what it means to ship high-quality code</li>\n</ul>\n<p><strong>Benefits</strong></p>\n<ul>\n<li>Health insurance</li>\n<li>Six-month paid parental leave</li>\n<li>401(k) retirement plan</li>\n<li>Monthly meal allowance</li>\n<li>23 paid days 
off</li>\n<li>13 paid flexible holidays</li>\n<li>Paid sick leave</li>\n</ul>\n<p><strong>Salary</strong></p>\n<p>The United States base range for this position is $160,091 - $228,702 plus equity.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fdb57476-4a9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/ab6947fc-adc4-41db-ad11-8fae741ceff0","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$160,091 - $228,702","x-skills-required":["backend engineering","music understanding","user experience","LLM-based systems","Apache Beam","Apache Spark","GCP","AWS"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:59.803Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"North America"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"backend engineering, music understanding, user experience, LLM-based systems, Apache Beam, Apache Spark, GCP, AWS","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":160091,"maxValue":228702,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_21ebe22d-1df"},"title":"Product Manager, Data Center","description":"<p>We are seeking a Product Manager, Data Center Technology to define and deliver a scalable technology ecosystem for Data Center Operations. 
You will own product strategy, roadmap, prioritization, and execution, translating complex cross-functional needs into solutions that improve uptime, incident response, maintenance planning, technician experience, safety, and workforce productivity.</p>\n<p>This role partners closely with Data Center Operations, Facilities Engineering, Construction, Portfolio Planning, EHS, Security, Supply Chain, Finance, and IT to align solutions with business objectives. You will lead the end-to-end product lifecycle and deliver iterative, scalable platforms that enable operational excellence across the data center portfolio.</p>\n<ul>\n<li>Own the vision and roadmap for data center technology platforms, including DCIM, CMMS/EAM, BMS/SPoG, construction management, asset lifecycle, and workforce systems, aligned to business OKRs.</li>\n<li>Translate business goals into clear initiatives across first-party, third-party, and data/AI-driven solutions, with defined outcomes and success metrics.</li>\n<li>Make build, buy, or extend decisions across internal systems and external tools, balancing ROI, TCO, scalability, and interoperability.</li>\n<li>Lead the end-to-end product lifecycle: discovery, user research, problem definition, PRDs, prioritization, release planning, launch, and continuous improvement.</li>\n<li>Deliver iteratively using Agile principles, defining MVPs that unlock incremental value while building toward scalable platforms.</li>\n<li>Maintain structured execution through tools such as Jira, with clear tracking, reporting, and delivery discipline.</li>\n<li>Partner with cross-functional stakeholders to ensure systems interoperate cleanly with clear systems of record.</li>\n<li>Translate complex infrastructure concepts into intuitive workflows and product requirements for technicians and leadership.</li>\n<li>Conduct field discovery at data center sites to identify inefficiencies and prioritize improvements.</li>\n<li>Define data and telemetry strategies, including 
data models, integrations, and system-of-record decisions.</li>\n<li>Enable portfolio-level visibility through dashboards, KPIs, and digital twin capabilities.</li>\n<li>Drive adoption of data, AI, and automation solutions by embedding insights into operational workflows.</li>\n<li>Define and track KPIs such as MTTR/MTBF, maintenance compliance, backlog health, system adoption, telemetry coverage, construction schedule adherence, and capacity utilization.</li>\n<li>Establish product operating rhythms, including intake, prioritization, roadmap reviews, and release communications.</li>\n<li>Standardize templates and playbooks for site onboarding, asset standards, lifecycle workflows, and reporting.</li>\n<li>Stay current on industry trends and incorporate best practices into product strategy.</li>\n<li>Manage vendor relationships, including licensing, negotiations, and roadmap alignment.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_21ebe22d-1df","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4673538006","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"The base salary range for this role is $165,000 to $242,000.","x-skills-required":["product management","data center operations","technology platforms","DCIM","CMMS/EAM","BMS/SPoG","construction management","asset lifecycle","workforce systems","Agile principles","Jira","cross-functional stakeholders","data and telemetry strategies","digital twin capabilities","data, AI, and automation solutions","KPIs","MTTR/MTBF","maintenance compliance","backlog health","system adoption","telemetry coverage","construction schedule adherence","capacity 
utilization"],"x-skills-preferred":["hands-on experience with platforms such as Sunbird or Modius (DCIM)","Ignition (BMS/SPoG)","Hexagon or similar CMMS/EAM tools","Procore, SiteTracker, or Primavera for construction management"],"datePosted":"2026-04-24T12:15:59.441Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Livingston, NJ / New York, NY / Sunnyvale, CA / San Francisco, CA / Bellevue, WA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"product management, data center operations, technology platforms, DCIM, CMMS/EAM, BMS/SPoG, construction management, asset lifecycle, workforce systems, Agile principles, Jira, cross-functional stakeholders, data and telemetry strategies, digital twin capabilities, data, AI, and automation solutions, KPIs, MTTR/MTBF, maintenance compliance, backlog health, system adoption, telemetry coverage, construction schedule adherence, capacity utilization, hands-on experience with platforms such as Sunbird or Modius (DCIM), Ignition (BMS/SPoG), Hexagon or similar CMMS/EAM tools, Procore, SiteTracker, or Primavera for construction management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":165000,"maxValue":242000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6ecebedb-31e"},"title":"Member of Technical Staff - Data Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. 
It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time, and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p>The Data Platform Engineering team is responsible for building core data pipelines that help fine-tune models and support introspection and retrospection of data so that we can constantly evolve and improve human-AI interactions.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. 
This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, and application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n<li>Ship high-quality, well-tested, secure, and maintainable code.</li>\n<li>Find a path through roadblocks to get your work into the hands of users quickly and iteratively.</li>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>\n<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc.</li>\n<li>3+ years experience with data governance, data compliance and/or data security.</li>\n<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>\n<li>Extensive use of datastores like RDBMS, key-value stores, etc.</li>\n<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>\n<li>Ability to identify, analyze, and resolve 
complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>\n<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>\n<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>\n<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>\n</ul>\n<p>#mai-datainsights</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6ecebedb-31e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer-5/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["Python","Java","Spark","SQL","Apache Hadoop","Kafka","NoSQL","Azure","AWS","GCP","RDBMS","key-value stores"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:55.844Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, Azure, AWS, GCP, RDBMS, key-value 
stores","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2f0a9221-9cb"},"title":"Data Engineer","description":"<p>You&#39;ll join the Data Collection Product Area within our Platform mission, where we build and operate the systems that power how data flows across Spotify. Our team develops the core event delivery infrastructure that enables hundreds of teams to collect and use data at massive scale. Every day, we support the delivery of trillions of events that help shape Spotify&#39;s products and unlock new innovations for creators and listeners alike.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Design, build, and improve the infrastructure that powers Spotify&#39;s event delivery systems at global scale</li>\n<li>Develop backend services using Java and Apollo, and build batch and real-time data pipelines using tools like Scio and Apache Beam</li>\n<li>Work closely with your squad to ensure systems are reliable, efficient, and continuously evolving to meet user needs</li>\n<li>Take shared ownership of operational responsibilities, including monitoring, troubleshooting, and improving system health</li>\n<li>Collaborate with other teams across the Data Platform and broader R&amp;D organization to deliver impactful data solutions</li>\n<li>Contribute to evolving our use of cloud technologies such as Google Cloud Pub/Sub, GKE, and Dataflow</li>\n</ul>\n<p><strong>Requirements</strong></p>\n<ul>\n<li>Experience building backend systems and data pipelines using Java and Scala</li>\n<li>Understanding of distributed systems and comfort working with large-scale, cloud-based infrastructure</li>\n<li>Solid foundation in system design, data structures, and algorithms</li>\n<li>Experience with modern data processing frameworks such as Apache Beam or similar technologies</li>\n<li>Commitment to building reliable systems and familiarity with continuous integration and delivery practices</li>\n<li>Curiosity and motivation to solve complex technical challenges in high-scale environments</li>\n<li>Ability to collaborate effectively with others and value open feedback and continuous learning</li>\n<li>Comfortable working in agile teams and contributing to a culture of experimentation and improvement</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2f0a9221-9cb","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Spotify","sameAs":"https://www.spotify.com","logo":"https://logos.yubhub.co/spotify.com.png"},"x-apply-url":"https://jobs.lever.co/spotify/baa87498-b0a3-4ac5-b197-a224e93c8a07","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","Apache Beam","Scio","Google Cloud Pub/Sub","GKE","Dataflow"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:15:10.236Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, Apache Beam, Scio, Google Cloud Pub/Sub, GKE, Dataflow"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3b419874-946"},"title":"Senior Production Engineer","description":"<p>Production Engineering ensures CoreWeave&#39;s cloud delivers world-class reliability, performance, and operational excellence. 
We are hiring a Senior Production Engineer to take direct, hands-on ownership of critical tooling that drives reliability and delivery success.</p>\n<p>In this role, you will work broadly across the cloud stack, designing, implementing, deploying, and operating systems that improve delivery velocity, service availability, and operational safety. You’ll be responsible for leading end-to-end technical projects, maintaining long-lived systems the team owns, and strengthening our operational foundations through durable engineering investments.</p>\n<p>This is a role for someone who enjoys building, debugging, and operating production systems. You will collaborate closely with service owners, but your primary impact comes from the reliability, quality, and maturity of the systems you deliver and maintain over time.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Take hands-on ownership of critical systems and frameworks, driving their architecture, implementation, and long-term evolution.</li>\n<li>Lead end-to-end delivery of engineering projects that improve availability, scalability, operational automation, and failure recovery.</li>\n<li>Build and maintain observability, alerting, automated remediation, and resilience testing for the systems you support.</li>\n<li>Participate in incident response as a subject-matter expert; drive deep root-cause investigations and implement lasting fixes.</li>\n<li>Improve runbooks, sources of truth, deployment workflows, and operational tooling to harden production readiness.</li>\n<li>Eliminate single points of failure and reduce operational toil through automation, refactors, and system redesigns.</li>\n<li>Ship production code regularly in Python, Go, or similar languages, and participate in on-call rotations.</li>\n<li>Maintain and mature long-term projects and frameworks owned by the team, ensuring they remain reliable, well-instrumented, and easy to operate.</li>\n<li>Collaborate with platform teams to ensure new 
features and services integrate cleanly with our reliability best practices and tooling.</li>\n</ul>\n<p><strong>What You’ve Worked On (Minimum Qualifications)</strong></p>\n<ul>\n<li>7+ years of engineering experience building and operating distributed systems or cloud platforms.</li>\n<li>Demonstrated ability to debug complex production issues end-to-end, across services, infrastructure layers, and automation.</li>\n<li>Strong programming or scripting ability (Python, Go, or similar), with experience shipping and operating production services and tools.</li>\n<li>Deep knowledge of cloud-native technologies and distributed system patterns, particularly Kubernetes.</li>\n<li>Experience with modern observability stacks: metrics, tracing, structured logs, SLOs/SLIs, and incident lifecycle practices.</li>\n<li>A track record of successfully delivering hands-on reliability improvements through engineering execution.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Experience building internal tooling, frameworks, or automation that supports high-availability cloud operations.</li>\n<li>Familiarity with DR/BCP, service tiering, capacity planning, or chaos engineering.</li>\n<li>Background operating or building large-scale AI or GPU-accelerated infrastructure.</li>\n<li>Experience maintaining multi-year ownership of foundational production systems.</li>\n</ul>\n<p><strong>Why CoreWeave</strong></p>\n<p>At CoreWeave, we work hard, have fun, and move fast. You’ll join a team that values curiosity, ownership, and creative problem-solving. 
Production Engineering sits at the intersection of reliability and AI infrastructure, building systems that enable the world’s most powerful AI cloud.</p>\n<p><strong>Core Values</strong></p>\n<ul>\n<li>Be Curious at Your Core</li>\n<li>Act Like an Owner</li>\n<li>Empower Employees</li>\n<li>Deliver Best-in-Class Client Experiences</li>\n<li>Achieve More Together</li>\n</ul>\n<p>We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization&#39;s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!</p>\n<p><strong>Compensation</strong></p>\n<p>The base salary range for this role is 160,000 to 214,000 SGD. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).</p>\n<p><strong>What We Offer</strong></p>\n<p>The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. 
These include qualifications, experience, interview performance, and location.</p>\n<p>In addition to a competitive salary, we offer a variety of benefits to support your needs, including:</p>\n<ul>\n<li>Medical, dental, and vision insurance - 100% paid for by CoreWeave</li>\n<li>Company-paid Life Insurance</li>\n<li>Voluntary supplemental life insurance</li>\n<li>Short and long-term disability insurance</li>\n<li>Flexible Spending Account</li>\n<li>Health Savings Account</li>\n<li>Tuition Reimbursement</li>\n<li>Ability to Participate in Employee Stock Purchase Program (ESPP)</li>\n<li>Mental Wellness Benefits through Spring Health</li>\n<li>Family-Forming support provided by Carrot</li>\n<li>Paid Parental Leave</li>\n<li>Flexible, full-service childcare support with Kinside</li>\n<li>401(k) with a generous employer match</li>\n<li>Flexible PTO</li>\n<li>Catered lunch each day in our office and data center locations</li>\n<li>A casual work environment</li>\n<li>A work culture focused on innovative disruption</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3b419874-946","directApply":true,"hiringOrganization":{"@type":"Organization","name":"CoreWeave","sameAs":"https://www.coreweave.com","logo":"https://logos.yubhub.co/coreweave.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/coreweave/jobs/4675297006","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"160,000 to 214,000 SGD","x-skills-required":["cloud computing","distributed systems","Kubernetes","observability stacks","metrics","tracing","structured logs","SLOs/SLIs","incident lifecycle practices","Python","Go","engineering experience"],"x-skills-preferred":["internal tooling","frameworks","automation","DR/BCP","service tiering","capacity planning","chaos engineering","large-scale AI","GPU-accelerated 
infrastructure"],"datePosted":"2026-04-24T12:14:03.335Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"cloud computing, distributed systems, Kubernetes, observability stacks, metrics, tracing, structured logs, SLOs/SLIs, incident lifecycle practices, Python, Go, engineering experience, internal tooling, frameworks, automation, DR/BCP, service tiering, capacity planning, chaos engineering, large-scale AI, GPU-accelerated infrastructure","baseSalary":{"@type":"MonetaryAmount","currency":"SGD","value":{"@type":"QuantitativeValue","minValue":160000,"maxValue":214000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_8797f9b8-aca"},"title":"Principal Software Engineer","description":"<p>Join Microsoft AI&#39;s Copilot Discover Engineering Team as a Principal Software Engineer, serving as a senior technical architect to play a central role in the technical direction and long-range architecture of Copilot Discover.</p>\n<p>This is a role emphasizing true end-to-end responsibility: setting the architectural vision, shaping the platform for AI-forward discovery experiences, and steering the evolution of product experiences that sit at the heart of how users engage with the intersection of knowledge, content, and personalization on surfaces on which Copilot shows up.</p>\n<p>You will design and drive the systems that power the Copilot Discover feed at scale. 
You&#39;ll work on foundational platforms that ingest, enrich, rank, personalize, and serve content across web, mobile, and partner surfaces. You&#39;ll lead architectural strategy for how we unify signals, models, and data into coherent, trustworthy experiences; modernize our ranking and personalization stack; and build the AI-forward infrastructure that makes Copilot Discover feel intelligent, anticipatory, and personalized for every user.</p>\n<p>The key is an end-to-end focus on outcomes, across a broad technical space. You&#39;ll be expected to influence platform direction across multiple teams and adjacent organizations. You will ensure that the MSN and Copilot Discover systems are robust, scalable, privacy-respecting, and engineered for long-term adaptation.</p>\n<p>High product sense is a success factor – how you drive product and architectural convergence across today&#39;s fragmented surfaces, reduce complexity, and shape a consistent platform model is key to the success of the product and this role.</p>\n<p>Copilot Discover sits at the intersection of content, signals, and user intent. Our ambition is to make it a durable, strategic layer that powers intelligent, personalized, and trusted discovery experiences across a broad array of surfaces where Microsoft engages consumers in their journeys.</p>\n<p>If you are passionate about building high-scale, AI-driven systems that combine solid architectural rigor with meaningful user value, this is the role for you.</p>\n<p>Microsoft&#39;s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Own the technical direction for Copilot Discover platforms, setting end-to-end architectural strategy.</p>\n<p>Partner with product, design, data science, and engineering leaders to translate business and user needs into executable architectural plans, well-documented designs, and multi-year roadmaps.</p>\n<p>Set and govern architectural decisions across multiple services and teams, ensuring systems are scalable, secure, reliable, cost-efficient, and grounded in data, telemetry, and operational excellence.</p>\n<p>Raise the technical bar across the organization by establishing falsifiable principles, reviewing critical designs, and helping to develop technical leaders within the team.</p>\n<p>Establish and evolve quality and reliability standards, including test strategies, CI/CD practices, monitoring, alerting, and live-site health.</p>\n<p>Shape the adoption of AI/ML techniques for content understanding, personalization, summarization, and safety, in close collaboration with MAI and partner teams.</p>\n<p>Serve as a cross-org technical leader, aligning MSN architecture with Bing, Copilot, Ads, Privacy, Trust, and other Microsoft platforms.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 15+ years technical engineering 
experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Experience in ML/AI systems, especially in content understanding, ranking, or personalization.</p>\n<p>Proven experience designing and operating large-scale distributed systems, including data pipelines, microservices, APIs, and storage systems.</p>\n<p>Experience with content platforms, personalization systems, or consumer-facing services at scale.</p>\n<p>Experience with technologies such as Apache Spark, Kafka, columnar storage, data modeling, and schema evolution.</p>\n<p>Demonstrated success as a technical lead or architect, influencing across teams without direct authority.</p>\n<p>Solid understanding of system architecture, performance tuning, telemetry design, and operational excellence.</p>\n<p>Excellent analytical and communication skills, with the ability to clearly articulate complex technical concepts.</p>\n<p>Solid cross-organizational collaboration skills and the ability to influence senior stakeholders.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_8797f9b8-aca","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-51/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$163,000 - $296,400 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Spark","Kafka","columnar storage","data modeling","schema 
evolution"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:14:00.100Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Spark, Kafka, columnar storage, data modeling, schema evolution","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":163000,"maxValue":296400,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_28608bb0-b72"},"title":"Software Engineer - Full Stack","description":"<p>Help millions of people find the right local businesses and services at the moments that matter most. At Bing Places, we build the systems that power local discovery across Microsoft experiences. You’ll work at the intersection of engineering, data, and product to improve the quality, relevance, and trustworthiness of local search at global scale.</p>\n<p>In this role, you’ll build and operate scalable systems that power accurate and trustworthy local search experiences across Microsoft. As a Software Engineer II on Bing Places, you’ll collaborate with engineers, data scientists, and product partners to integrate diverse data sources, improve ranking quality, and ship features used by millions of customers.</p>\n<p>The role offers solid growth opportunities as you deepen your expertise in distributed systems and geospatial data. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Contribute to architecture, engineering standards, and development practices across the team.</p>\n<p>Work with appropriate stakeholders to determine user requirements for a set of features.</p>\n<p>Contribute to the identification of dependencies, and the development of design documents for a product area with little oversight.</p>\n<p>Create and implement code for a product, service, or feature, reusing code as applicable.</p>\n<p>Contribute to efforts to break down larger work items into smaller work items and provide estimation.</p>\n<p>Act as a Designated Responsible Individual (DRI) working on-call to monitor system/product feature/service for degradation, downtime, or interruptions and gain approval to restore system/product/service for simple problems.</p>\n<p>Remain current in skills by investing time and effort into staying abreast of current developments that will improve the availability, reliability, efficiency, observability, and performance of products while also driving consistency in monitoring and operations at scale.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. 
These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 3+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 5+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>1+ years of experience with data engineering leveraging tools such as Apache Hadoop or Spark, or equivalent experience.</p>\n<p>Experience with Azure Cloud and Azure Data Factory (ADF). 3+ years of experience in problem solving, design, coding, and debugging.</p>\n<p>Demonstrated experience with products that involve high availability/reliability and low latency systems.</p>\n<p>#MicrosoftAI</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_28608bb0-b72","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/software-engineer-full-stack-2/","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"Base pay range for this role across the U.S. 
is USD $100,600 – $199,000 per year.","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Hadoop","Spark","Azure Cloud","Azure Data Factory (ADF)"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:13:33.845Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Hadoop, Spark, Azure Cloud, Azure Data Factory (ADF)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2c095439-13b"},"title":"Principal Software Engineer","description":"<p>Microsoft Advertising is seeking a Principal Software Engineer to join our Ads Engineering Platform team and advance the core capabilities of our ad-serving infrastructure, the engine that powers advertising across Bing Search, MSN, Microsoft Start, and shopping experiences in the Edge browser.</p>\n<p>Our serving stack operates at massive global scale, delivering millions of ad requests per second through a geo-distributed, low-latency system that combines large-scale GPU/CPU inference, real-time bidding, and intelligent ranking pipelines.</p>\n<p>This role focuses on advancing the performance, efficiency, and scalability of the next generation of model serving and inference platforms for Ads.</p>\n<p>As a senior technical leader, you’ll design and optimize high-performance serving systems and GPU inference frameworks that drive measurable latency improvements and cost efficiency across Microsoft’s ad ecosystem.</p>\n<p>You’ll work across the stack, from CUDA kernel tuning and NUMA-aware threading to large-scale distributed orchestration and model deployment for deep learning and LLM workloads.</p>\n<p>This is a 
rare opportunity to shape the architecture of one of the world’s most advanced, mission-critical online serving platforms, collaborating with world-class engineers to deliver innovation at Internet scale.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more.</p>\n<p>As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.</p>\n<p>Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.</p>\n<p>This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<p>Design and lead the development of large-scale, distributed online serving systems, including GPU-accelerated and CPU-based ranking/inference pipelines, to process millions of ad requests per second with ultra-low latency, high throughput, and solid reliability.</p>\n<p>Architect and optimize end-to-end inference infrastructure, including model serving, batching/streaming, caching, scheduling, and resource orchestration across heterogeneous hardware (GPU, CPU, and memory tiers).</p>\n<p>Profile and optimize performance across the full stack, from CUDA kernels and GPU pipelines to CPU threads and OS-level scheduling, identifying bottlenecks, tuning latency tails, and improving cost efficiency through advanced profiling and instrumentation.</p>\n<p>Own live-site reliability as a DRI: design telemetry, alerting, and fault-tolerance mechanisms; drive rapid diagnosis and mitigation of performance regressions or outages in globally distributed systems.</p>\n<p>Collaborate and mentor across teams, driving 
architecture reviews, enforcing engineering excellence, promoting system-level optimization practices, and mentoring others in deep debugging, profiling, and performance engineering.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>Bachelor’s Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Preferred Qualifications:</p>\n<p>Master’s Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Bachelor’s Degree in Computer Science or related technical field AND 12+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</p>\n<p>Industry experience in advertising or search engine backend systems, such as large-scale ad ranking, real-time bidding (RTB), or relevance-serving infrastructure.</p>\n<p>Hands-on experience with real-time data streaming systems (Kafka, Flink, Spark Streaming), feature-store integration, and multi-region deployment for low-latency, globally distributed services.</p>\n<p>Familiarity with LLM inference optimization: model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization (AWQ/FP8), and hybrid CPU–GPU orchestration.</p>\n<p>Demonstrated success operating large-scale systems with SLA-based capacity forecasting, autoscaling, and performance telemetry; proven leadership in cross-functional architecture initiatives and technical mentorship.</p>\n<p>Passion for performance engineering, observability, and deep systems debugging, with a solid drive to push the limits of serving infrastructure for the next generation of ads and AI models.</p>\n<p>Deep expertise in GPU inference 
frameworks such as NVIDIA Triton Inference Server, CUDA, and TensorRT, including hands-on experience implementing custom CUDA kernels, optimizing memory movement (H2D/D2H), overlapping compute and I/O, and maximizing GPU occupancy and kernel fusion for deep learning and LLM workloads.</p>\n<p>Solid understanding of model-serving trade-offs: batching vs. streaming, latency vs. throughput, quantization (FP16/BF16/INT8), dynamic batching, continuous model rollout, and adaptive inference scheduling across CPU/GPU tiers.</p>\n<p>Proven ability to profile and optimize GPU and system workloads, including tensor/memory alignment, compute–memory balancing, embedding table management, parameter servers, hierarchical caching, and vectorized inference for transformer/LLM architectures.</p>\n<p>Expertise in low-level system and OS internals, including multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning (NVMe, RDMA), kernel bypass (DPDK, io_uring), and CPU/GPU affinity optimization for large-scale serving pipelines.</p>\n<p>#MicrosoftAI Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.
There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 – $304,200 per year.</p>\n<p>Certain roles may be eligible for benefits and other compensation.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_2c095439-13b","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/principal-software-engineer-41/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$139,900 - $274,800 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","NVIDIA Triton Inference Server","CUDA","TensorRT","Kafka","Flink","Spark Streaming","GPU inference frameworks","LLM inference optimization","model sharding","tensor/kv-cache parallelism","paged attention","continuous batching","quantization","AWQ/FP8","hybrid CPU–GPU orchestration","SLA-based capacity forecasting","autoscaling","performance telemetry","cross-functional architecture initiatives","technical mentorship","performance engineering","observability","deep systems debugging","low-level system and OS internals","multi-threading","process scheduling","NUMA-aware memory allocation","lock-free data structures","context switching","I/O stack tuning","kernel bypass","CPU/GPU affinity optimization"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:57.301Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, NVIDIA Triton Inference Server, CUDA, TensorRT, Kafka, 
Flink, Spark Streaming, GPU inference frameworks, LLM inference optimization, model sharding, tensor/kv-cache parallelism, paged attention, continuous batching, quantization, AWQ/FP8, hybrid CPU–GPU orchestration, SLA-based capacity forecasting, autoscaling, performance telemetry, cross-functional architecture initiatives, technical mentorship, performance engineering, observability, deep systems debugging, low-level system and OS internals, multi-threading, process scheduling, NUMA-aware memory allocation, lock-free data structures, context switching, I/O stack tuning, kernel bypass, CPU/GPU affinity optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6c7ddfe8-e54"},"title":"Solutions Architect (Greater China Region)","description":"<p>At Databricks, our core principles are at the heart of everything we do; creating a culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>\n<p>We aim to inspire our customers to make informed decisions that push their business forward. 
We provide a user-friendly and intuitive platform that makes it easy to turn insights into action and fosters a culture of creativity, experimentation, and continuous improvement.</p>\n<p>As a Solutions Architect in the Greater China Region, you will be an essential part of this mission, using your technical expertise to demonstrate how our Data Intelligence Platform can help customers solve their complex data challenges.</p>\n<p>You&#39;ll work with a collaborative, customer-focused team that values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>Join us in our quest to change how people work with data and make a better world!</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients in the Greater China Region to provide technical and business value in collaboration with Account Executives.</li>\n</ul>\n<ul>\n<li>Operate as an expert in big data analytics to excite customers about Databricks.</li>\n</ul>\n<ul>\n<li>Develop into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>\n</ul>\n<ul>\n<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>\n</ul>\n<ul>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions.</li>\n</ul>\n<ul>\n<li>Develop customer relationships and build internal partnerships with account executives and teams.</li>\n</ul>\n<ul>\n<li>Prior experience with coding 
in a core programming language (i.e., Python, Java, Scala) and willingness to learn a base level of Apache Spark.</li>\n</ul>\n<ul>\n<li>Proficient with Big Data Analytics technologies, including hands-on expertise with complex proofs-of-concept and public cloud platform(s).</li>\n</ul>\n<ul>\n<li>Experienced in use case discovery, scoping, and delivering complex solution architecture designs to multiple audiences requiring an ability to context switch in levels of technical depth.</li>\n</ul>\n<ul>\n<li>Business proficiency in Mandarin and experience in the Greater China Region are required to enable effective collaboration and understanding of client needs.</li>\n</ul>\n<p>The successful candidate will engage with the Greater China Region customers in Mandarin for technical sales discussions, address technical challenges, and articulate clear technical solutions and value propositions.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6c7ddfe8-e54","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8499584002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Java","Scala","Apache Spark","Big Data Analytics","Mandarin"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:48.156Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Scala, Apache Spark, Big Data Analytics, 
Mandarin"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_9b1d250c-732"},"title":"Senior Applied Scientist","description":"<p>Conversational commerce introduces challenges that differ from traditional web shopping. Preferences emerge through dialogue, expectations for accuracy and trust are high, and systems must reason over context and frequently changing commerce data. Microsoft Copilot is building shopping experiences that are conversational, proactive, and trustworthy. As a Senior Applied Scientist, you will lead the development of machine learning and generative AI systems that power product discovery, ranking, personalization, and reasoning across Copilot shopping surfaces.</p>\n<p>This role sits at the intersection of applied machine learning, generative AI, and product experience, with clear ownership of core shopping intelligence used directly in user-facing Copilot experiences. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week.
This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design, build, and productionize machine learning models for product discovery, ranking, recommendation, and personalization using large-scale commerce and behavioral data.</li>\n<li>Develop LLM-based systems for conversational shopping, including prompt design, retrieval-augmented generation, tool orchestration, and grounding against structured commerce data.</li>\n<li>Address quality and trust challenges such as hallucination risk, stale data, and recommendation reliability.</li>\n<li>Define evaluation frameworks and experimentation strategies for conversational and proactive shopping scenarios, including offline metrics and online experiments.</li>\n<li>Partner closely with product, engineering, and design teams to translate models into low-latency, reliable Copilot experiences.</li>\n<li>Provide technical leadership for applied science within Copilot Shopping through design reviews, mentoring, and setting quality standards.</li>\n<li>Contribute to model governance and Responsible AI practices to ensure trustworthy and compliant systems.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research) OR Master’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research) OR equivalent experience.</li>\n<li>3+ years of hands-on experience developing machine learning or statistical models to solve real-world problems (in industry or academic projects),
including building and validating algorithms such as regressions, classifiers, or clustering models.</li>\n<li>Proficiency in programming for data science (e.g. using Python or R for data analysis and modeling) and experience with data querying languages (e.g. SQL).</li>\n<li>Big Data &amp; Distributed Computing: Hands-on experience with large-scale data processing using tools like Apache Spark or Azure Databricks for training and inference workflows.</li>\n<li>Advanced Analytics: Skilled in time-series analysis and anomaly detection techniques (e.g., ARIMA, isolation forests) applied to business contexts for actionable insights.</li>\n<li>LLMs &amp; Domain Adaptation: Practical experience with prompt engineering, fine-tuning GPT-like models, and applying LLMs in domain-heavy areas (healthcare, agriculture, social sciences) while ensuring privacy and Responsible AI compliance.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_9b1d250c-732","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/senior-applied-scientist-56/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["machine learning","generative AI","product discovery","ranking","personalization","reasoning","Apache Spark","Azure Databricks","Python","R","SQL","time-series analysis","anomaly detection"],"x-skills-preferred":["prompt engineering","fine-tuning GPT-like models","LLMs in domain-heavy areas"],"datePosted":"2026-04-24T12:12:41.295Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"machine learning, generative AI, product discovery, ranking, personalization, reasoning, Apache Spark, Azure Databricks, Python, R, SQL, time-series analysis, anomaly detection, prompt engineering, fine-tuning GPT-like models, LLMs in domain-heavy areas","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_bdf4e05a-b8c"},"title":"MTS - Site Reliability Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure.
It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for an experienced Site Reliability Engineer (SRE) to join our infrastructure team. In this role, you’ll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. You’ll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Responsibilities:</p>\n<p>Reliability &amp; Availability: Ensure uptime, resiliency, and fault tolerance of AI model training and inference systems.</p>\n<p>Observability: Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into model serving pipelines and infra.</p>\n<p>Performance Optimization: Analyze system performance and scalability, optimize resource utilization (compute, GPU clusters, storage, networking).</p>\n<p>Automation &amp; Tooling: Build automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments.</p>\n<p>Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements.</p>\n<p>Security &amp; Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments.</p>\n<p>Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate
research-to-production workflows.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications: 4+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles.</p>\n<p>Preferred Qualifications: Strong proficiency in Kubernetes, Docker, and container orchestration. Knowledge of CI/CD pipelines for Inference and ML model deployment. Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code. Expertise in monitoring &amp; observability tools (Grafana, Datadog, OpenTelemetry, etc.). Strong programming/scripting skills in Python, Go, or Bash. Solid knowledge of distributed systems, networking, and storage. Experience running large-scale GPU clusters for ML/AI workloads (preferred). Familiarity with ML training/inference pipelines. Experience with high-performance computing (HPC) and workload schedulers (Kubernetes operators). Background in capacity planning &amp; cost optimization for GPU-heavy environments.</p>\n<p>Work on cutting-edge infrastructure that powers the future of Generative AI. Collaborate with world-class researchers and engineers. Impact millions of users through reliable and responsible AI deployments. 
Competitive compensation, equity options, and comprehensive benefits.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_bdf4e05a-b8c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/mts-site-reliability-engineer/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["Site Reliability Engineering","DevOps","Infrastructure Engineering","Kubernetes","Docker","container orchestration","CI/CD pipelines","ML model deployment","public cloud platforms","Azure","AWS","GCP","infrastructure-as-code","monitoring & observability tools","Grafana","Datadog","OpenTelemetry","Python","Go","Bash","distributed systems","networking","storage","GPU clusters","ML training/inference pipelines","high-performance computing","workload schedulers","capacity planning","cost optimization"],"x-skills-preferred":["cloud architecture","containerization","microservices","API design","security","compliance","agile development","scrum","kanban"],"datePosted":"2026-04-24T12:12:26.597Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Site Reliability Engineering, DevOps, Infrastructure Engineering, Kubernetes, Docker, container orchestration, CI/CD pipelines, ML model deployment, public cloud platforms, Azure, AWS, GCP, infrastructure-as-code, monitoring & observability tools, Grafana, Datadog, OpenTelemetry, Python, Go, Bash, distributed systems, networking, storage, GPU clusters, ML training/inference pipelines, high-performance computing, workload schedulers, capacity planning, 
cost optimization, cloud architecture, containerization, microservices, API design, security, compliance, agile development, scrum, kanban","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dacc9b06-4d8"},"title":"Member of Technical Staff - Principal Data Infrastructure Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for passionate individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for a Member of Technical Staff – Principal Data Infrastructure Engineer. This role is a dynamic blend of Platform Engineering, DevOps/SRE, and Big Data Infrastructure Engineering, focused on enabling large-scale data and ML pipelines and intelligent systems. If you’ve architected big data platforms from the ground up and are eager to apply that expertise to consumer AI, we want to hear from you.</p>\n<p>You’ll bring: deep technical expertise; a passion for automation and observability; fluency in distributed systems; creativity to design scalable solutions; and, just as importantly, empathy, collaboration, and a growth mindset.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals.
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, Microsoft AI (MAI) employees who live within a 50-mile commute of a designated Microsoft office in the U.S. or 25-mile commute of a non-U.S., country-specific location are expected to work from the office at least four days per week. This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<p>Architect and maintain scalable, reliable, and observable Big Data Infrastructure for mission-critical AI applications. Champion DevOps and SRE best practices: automated deployments, service monitoring, and incident response. Build a self-service big data platform that empowers data and platform engineers and researchers. Develop robust CI/CD pipelines and automate infrastructure provisioning using Infrastructure as Code tools (Bicep, Terraform, ARM). Collaborate with Data Engineers, Data Scientists, AI Researchers, and Developers to deliver secure, seamless big data workflows. Lead technical design reviews and uphold a clean, secure, and well-documented codebase. Proactively identify and resolve bottlenecks in data pipelines and infrastructure. Optimize system performance across storage, compute, and analytics layers. Partner with Security teams to enhance system security (IAM, OAuth, Kerberos).
Embody and promote Microsoft’s values: Respect, Integrity, Accountability, and Inclusion.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications: Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, data modeling, or data engineering OR Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering OR equivalent experience.</p>\n<p>Preferred Qualifications: 4+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering. 3+ years of hands-on experience managing and scaling distributed systems, from bare-metal to cloud-native environments. 2+ years deploying containerized applications using Kubernetes and Helm/Kustomize. Solid scripting and automation skills using Python, Bash, or PowerShell. Proven success in CI/CD pipeline management, release automation, and production troubleshooting. Experience working with Databricks for scalable data processing and analytics. Familiarity with security practices in infrastructure environments, including IAM, OAuth, and Kerberos administration. Proven experience with cloud-native infrastructure across Azure, AWS, or GCP. Hands-on expertise with modern data platforms like Databricks, including a deep understanding of data storage and processing technologies: relational &amp; NoSQL databases, key-value stores, Spark compute engines, distributed file systems (e.g., HDFS, ADLS Gen2), and messaging systems (e.g., Event Hub, Kafka, RabbitMQ). Capacity planning and incident management for large-scale big data systems. Solid collaboration history with Data Engineers, Data Scientists, ML Engineers, Networking, and Security teams. Familiarity with modern web stacks: TypeScript, Node.js, React, and optionally PHP.
Exposure to agentic workflows, deep learning, or AI frameworks. Practical experience integrating LLMs (e.g., GPT-based models) into daily workflows: automating documentation, code generation, reviews, and operational intelligence. Solid grasp of prompt engineering techniques to design, optimize, and evaluate interactions with LLMs. Demonstrated ability to troubleshoot and resolve complex performance and scalability issues across infrastructure layers. Excellent interpersonal and communication skills, with a solid passion for mentorship and continuous learning. Experience applying LLMs to DevOps workflows, enhancing incident response, and streamlining cross-functional collaboration is a solid advantage.</p>\n<p>#MicrosoftAI #mai-datainsights</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dacc9b06-4d8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-principal-data-infrastructure-engineer-2/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["Big Data Infrastructure","DevOps","SRE","Platform Engineering","Distributed Systems","Cloud-Native Infrastructure","Azure","AWS","GCP","Databricks","CI/CD Pipelines","Infrastructure as Code","Bicep","Terraform","ARM","Python","Bash","PowerShell","Kubernetes","Helm","Kustomize","LLMs","GPT-based models","Prompt Engineering","Agentic Workflows","Deep Learning","AI Frameworks"],"x-skills-preferred":["Containerized Applications","Security Practices","IAM","OAuth","Kerberos Administration","Web Stacks","TypeScript","Node.js","React","PHP","Modern Data Platforms","Spark Compute Engines","Distributed File
Systems","Messaging Systems","Capacity Planning","Incident Management"],"datePosted":"2026-04-24T12:12:26.106Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Infrastructure, DevOps, SRE, Platform Engineering, Distributed Systems, Cloud-Native Infrastructure, Azure, AWS, GCP, Databricks, CI/CD Pipelines, Infrastructure as Code, Bicep, Terraform, ARM, Python, Bash, PowerShell, Kubernetes, Helm, Kustomize, LLMs, GPT-based models, Prompt Engineering, Agentic Workflows, Deep Learning, AI Frameworks, Containerized Applications, Security Practices, IAM, OAuth, Kerberos Administration, Web Stacks, TypeScript, Node.js, React, PHP, Modern Data Platforms, Spark Compute Engines, Distributed File Systems, Messaging Systems, Capacity Planning, Incident Management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d34e24c9-5ec"},"title":"People Partner, Japan & APAC (Singapore)","description":"<p>We are seeking a People Partner to join our team in Singapore. As a People Partner, you will serve as the dedicated People Partner for Figma&#39;s Japan &amp; APAC region, working closely with leaders across the regions in Singapore, Japan, India, and Australia. You will help the organisation scale effectively by strengthening leadership capability, shaping organisational design, and aligning talent strategy to business priorities. 
You will also collaborate closely with cross-functional partners across the People Team, including functional People Partners, Compensation, Recruiting, Learning &amp; Development, BEI, and People Relations, to drive high-impact initiatives that support the business.</p>\n<p>Key responsibilities include: Acting as the primary People Partner presence for Japan &amp; APAC, bringing local context and regional business understanding to partnerships. Working closely with functional People Partners to support the continued scaling of the teams in the region by evolving team structures, roles, and operating models in line with business growth. Coaching senior leaders to strengthen leadership capability, team effectiveness, and performance in a high-growth, target-driven environment. Identifying and addressing organisational friction points by improving cross-functional alignment and operational clarity across teams. Embedding as a key partner across cross-functional stakeholders, including functional People Partners, Compensation, Recruiting, Learning &amp; Development, BEI, and People Relations, to drive and deliver high-impact people initiatives. Leveraging data, organisational insights, and employee feedback to proactively inform decisions and drive measurable improvements in team health, effectiveness, and performance. Developing and driving a Japan &amp; APAC people strategy that reflects both the region&#39;s stage of growth and its commercial priorities. Exploring and embedding AI tools and approaches that extend your reach and impact across the region.</p>\n<p>Requirements include: 8+ years of HR/People experience, including significant experience as an HR Business Partner supporting senior leaders in a high-growth company. Multi-jurisdiction APAC experience. Practical working knowledge of employment law, cultural norms, and people practices across multiple jurisdictions.
Generalist capability, demonstrated experience spanning Employee Relations &amp; Operations, as well as strategic business partnering. Experience partnering with GTM or revenue organisations (Sales, Marketing, and/or Product Support), with an understanding of go-to-market dynamics. Strong understanding of cultural nuance and the ability to adapt your approach across the diverse markets of Japan &amp; APAC. Demonstrated ability to influence senior leaders and drive alignment in fast-paced, ambiguous environments, including when operating remotely from CoEs.</p>\n<p>Preferred qualifications include: Experience in high-growth SaaS or product-led companies. Fluency in Japanese.</p>\n<p>At Figma, we celebrate and support our differences. We know employing a team rich in diverse thoughts, experiences, and opinions allows our employees, our product, and our community to flourish. Figma is an equal opportunity workplace - we are dedicated to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity/expression, veteran status, or any other characteristic protected by law.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d34e24c9-5ec","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Figma","sameAs":"https://www.figma.com/","logo":"https://logos.yubhub.co/figma.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/figma/jobs/5978282004","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["HR","People Management","Leadership Development","Organisational Design","Talent Strategy","Cross-Functional Collaboration","Data Analysis","Employee Feedback","AI Tools","APAC Experience","Employment Law","Cultural Norms","People 
Practices"],"x-skills-preferred":["High-Growth SaaS","Product-Led Companies","Fluency in Japanese"],"datePosted":"2026-04-24T12:12:23.682Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"HR","industry":"Technology","skills":"HR, People Management, Leadership Development, Organisational Design, Talent Strategy, Cross-Functional Collaboration, Data Analysis, Employee Feedback, AI Tools, APAC Experience, Employment Law, Cultural Norms, People Practices, High-Growth SaaS, Product-Led Companies, Fluency in Japanese"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2291f859-746"},"title":"MTS - Site Reliability Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for experienced Site Reliability Engineers to work with us on the most interesting and challenging AI questions of our time.</p>\n<p>Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for an experienced Site Reliability Engineer (SRE) to join our infrastructure team. In this role, you’ll blend software engineering and systems engineering to keep our large-scale distributed AI infrastructure reliable and efficient. 
You’ll work closely with ML researchers, data engineers, and product developers to design and operate the platforms that power training, fine-tuning, and serving generative AI models.</p>\n<p>Responsibilities:</p>\n<p>Reliability &amp; Availability: Ensure uptime, resiliency, and fault tolerance of AI model training and inference systems.</p>\n<p>Observability: Design and maintain monitoring, alerting, and logging systems to provide real-time visibility into model serving pipelines and infra.</p>\n<p>Performance Optimization: Analyze system performance and scalability, optimize resource utilization (compute, GPU clusters, storage, networking).</p>\n<p>Automation &amp; Tooling: Build automation for deployments, incident response, scaling, and failover in hybrid cloud/on-prem CPU+GPU environments.</p>\n<p>Incident Management: Lead on-call rotations, troubleshoot production issues, conduct blameless postmortems, and drive continuous improvements.</p>\n<p>Security &amp; Compliance: Ensure data privacy, compliance, and secure operations across model training and serving environments.</p>\n<p>Collaboration: Partner with ML engineers and platform teams to improve developer experience and accelerate research-to-production workflows.</p>\n<p>Qualifications:</p>\n<p>Required Qualifications:</p>\n<p>4+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure Engineering roles.</p>\n<p>Strong proficiency in Kubernetes, Docker, and container orchestration.</p>\n<p>Knowledge of CI/CD pipelines for Inference and ML model deployment.</p>\n<p>Hands-on experience with public cloud platforms like Azure/AWS/GCP and infrastructure-as-code.</p>\n<p>Expertise in monitoring &amp; observability tools (Grafana, Datadog, OpenTelemetry, etc.).</p>\n<p>Strong programming/scripting skills in Python, Go, or Bash.</p>\n<p>Solid knowledge of distributed systems, networking, and storage.</p>\n<p>Experience running large-scale GPU clusters for ML/AI workloads 
(preferred).</p>\n<p>Familiarity with ML training/inference pipelines.</p>\n<p>Experience with high-performance computing (HPC) and workload schedulers (Kubernetes operators).</p>\n<p>Background in capacity planning &amp; cost optimization for GPU-heavy environments.</p>\n<p>Work on cutting-edge infrastructure that powers the future of Generative AI.</p>\n<p>Collaborate with world-class researchers and engineers.</p>\n<p>Impact millions of users through reliable and responsible AI deployments.</p>\n<p>Competitive compensation, equity options, and comprehensive benefits.</p>\n<p>Software Engineering IC4 – The typical base pay range for this role across the U.S. is USD $119,800 – $234,700 per year.</p>\n<p>Software Engineering IC5 – The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year.</p>","url":"https://yubhub.co/jobs/job_2291f859-746","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/mts-site-reliability-engineer-3/","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["Kubernetes","Docker","container orchestration","CI/CD pipelines","public cloud platforms","infrastructure-as-code","monitoring & observability tools","Python","Go","Bash","distributed systems","networking","storage","GPU clusters","ML training/inference pipelines","high-performance computing","workload schedulers","capacity planning & cost optimization"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:12:10.488Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Mountain 
View"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Kubernetes, Docker, container orchestration, CI/CD pipelines, public cloud platforms, infrastructure-as-code, monitoring & observability tools, Python, Go, Bash, distributed systems, networking, storage, GPU clusters, ML training/inference pipelines, high-performance computing, workload schedulers, capacity planning & cost optimization","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_492042ed-9ee"},"title":"Member of Technical Staff - Data Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad , to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. It’s also inclusive: we aim to make AI accessible to all , consumers, businesses, developers , so that everyone can realize its benefits.</p>\n<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p>The Data Platform Engineering team is responsible for building core data pipelines that help fine tune models, support introspection and retrospection of data so that we can constantly evolve and improve human AI interactions.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. 
As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases.</li>\n<li>Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services.</li>\n<li>Ship high-quality, well-tested, secure, and maintainable code.</li>\n<li>Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively.</li>\n<li>Enjoy working in a fast-paced, design-driven, product development cycle.</li>\n<li>Embody our Culture and Values.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience.</li>\n<li>4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL.</li>\n<li>Experience working with the Apache Hadoop ecosystem, Kafka, 
NoSQL, etc.</li>\n<li>3+ years experience with data governance, data compliance and/or data security.</li>\n<li>2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP.</li>\n<li>Extensive use of datastores like RDBMS, key-value stores, etc.</li>\n<li>2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking.</li>\n<li>Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience.</li>\n<li>Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security.</li>\n<li>Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers.</li>\n<li>Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders.</li>\n<li>Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI.</li>\n<li>Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</li>\n</ul>","url":"https://yubhub.co/jobs/job_492042ed-9ee","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer-6/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 - $274,800 per year","x-skills-required":["Python","Java","Spark","SQL","Apache Hadoop","Kafka","NoSQL","data governance","data 
compliance","data security","Azure","AWS","GCP","RDBMS","key-value stores"],"x-skills-preferred":["distributed systems","containerization","networking","web development","AI"],"datePosted":"2026-04-24T12:11:56.893Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, data governance, data compliance, data security, Azure, AWS, GCP, RDBMS, key-value stores, distributed systems, containerization, networking, web development, AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a8f02572-a83"},"title":"Data & AI Platform Architect (Professional Services)","description":"<p>You will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. 
You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n<li>Documentation and white-boarding skills</li>\n<li>Experience working with clients and managing conflicts</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects</li>\n<li>Travel to customers 10% of the time</li>\n</ul>\n<p>About Databricks:</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>","url":"https://yubhub.co/jobs/job_a8f02572-a83","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8462016002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-24T12:11:38.006Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Amsterdam, Netherlands"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_76a1c961-155"},"title":"Member of Technical Staff, Full Stack - ML Efficiency & Observability - MAI Superintelligence Team","description":"<p>We&#39;re looking for a Member of Technical Staff – 
Full Stack Engineer, ML Efficiency &amp; Observability to help us efficiently manage our compute capacity. You will wear multiple hats and work on engineering, research, and everything in between. Your contributions will span capacity, efficiency, data architecture, training and inference infrastructures, and many other exciting topics at the cutting edge of AI.</p>\n<p>As a Senior Engineer – Full Stack, ML Efficiency &amp; Observability, you will be responsible for building a world-class user experience for our executives as well as our ML researchers. You’ll work closely with research and framework teams to turn their requirements into intuitive experiences that lead to efficiency improvements.</p>\n<p>Microsoft Superintelligence Team</p>\n<p>The MAIST is a startup-like team inside Microsoft AI, created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Design and develop features for our capacity management portal</li>\n<li>Design and develop features to provide visibility into model performance and quality across our fleet</li>\n<li>Partner with ML researchers and PMs to translate functional requirements into highly functional, intuitive and appealing interfaces</li>\n<li>Integrate with backend APIs from schedulers to training frameworks to build visibility across the training life cycle</li>\n<li>Explore, develop, and adapt new innovations to the software development process</li>\n<li>Contribute to the development of internal tooling and infrastructure</li>\n<li>Implement best software development practices to ensure code quality. 
Hold a high quality bar.</li>\n</ul>\n<p>Qualifications:</p>\n<ul>\n<li>Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.</li>\n<li>4+ years experience in business analytics, data science, software development, data modeling or data engineering work</li>\n</ul>","url":"https://yubhub.co/jobs/job_76a1c961-155","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-full-stack-ml-efficiency-observability-mai-superintelligence-team-2/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$119,800 - $234,700 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Capacity Management","Efficiency Management","ML Training","Inference"],"x-skills-preferred":["Generative AI tools","Development & Debugging with dev environments like Visual Studio or Visual Studio Code","Software development experience with Generative AI tools","Experience in leading technical projects and supporting architectural decisions with data"],"datePosted":"2026-04-24T12:11:30.958Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Redmond"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Capacity Management, Efficiency Management, ML Training, Inference, Generative AI tools, Development & Debugging with dev environments like Visual Studio or Visual Studio Code, Software 
development experience with Generative AI tools, Experience in leading technical projects and supporting architectural decisions with data","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":119800,"maxValue":234700,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3c9b96bf-348"},"title":"Software Engineer II","description":"<p>Imagine helping millions of users discover the best local businesses and services, right when they need them. At Bing Places, we’re on a mission to improve the quality and relevance of local search results across Microsoft platforms. You’ll be part of a team that blends data science, engineering, and product thinking to deliver intelligent, high-impact experiences that shape how people interact with the world around them.</p>\n<p>As a Software Engineer II in Bing Places, you will design and build scalable systems that enhance the accuracy, freshness, and trustworthiness of local search results. You’ll collaborate across disciplines to integrate diverse data sources, develop intelligent ranking algorithms, and ship features that directly impact millions of users. This opportunity will allow you to accelerate your career growth, deepen your understanding of geospatial and business data, and sharpen your skills in distributed systems and machine learning.</p>\n<p>We offer flexible work arrangements, including partial work-from-home options. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. 
Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>","url":"https://yubhub.co/jobs/job_3c9b96bf-348","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/software-engineer-ii-19/","x-work-arrangement":"hybrid","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":"$100,600 - $199,000 per year","x-skills-required":["C","C++","C#","Java","JavaScript","Python","Apache Hadoop","Spark"],"x-skills-preferred":["Azure Cloud","Azure Data Factory (ADF)","Azure Machine Learning (AML)"],"datePosted":"2026-04-24T12:11:28.826Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bellevue"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"C, C++, C#, Java, JavaScript, Python, Apache Hadoop, Spark, Azure Cloud, Azure Data Factory (ADF), Azure Machine Learning (AML)","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100600,"maxValue":199000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e4fc0509-d39"},"title":"Resident Solutions Architect","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most 
value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the Sr. Manager, Professional Services.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will work on a variety of impactful customer technical projects which may include building reference architectures, how-to&#39;s and production-grade MVPs / Greenfield projects</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement strategic customer projects which lead to customers&#39; successful understanding, evaluation and adoption of Databricks</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>8+ years experience with Big Data Technologies such as Apache Spark, Kafka, Cloud Native and Data Lakes in a customer-facing post-sales, technical architecture or consulting role</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Experience with technical project delivery - managing scope and timelines</li>\n<li>Documentation and white-boarding skills</li>\n<li>Experience working with clients and managing conflicts</li>\n<li>Build skills in technical areas which support the 
deployment and integration of Databricks-based solutions to complete customer projects</li>\n<li>Travel to customers 20 - 30% of the time</li>\n</ul>\n<p>Nice to have: Databricks Certification</p>","url":"https://yubhub.co/jobs/job_e4fc0509-d39","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8514430002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Kafka","Cloud Native","Data Lakes","Python","Scala","Technical project delivery","Documentation and white-boarding skills","Experience working with clients and managing conflicts"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:11:25.394Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Western Australia, Australia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Kafka, Cloud Native, Data Lakes, Python, Scala, Technical project delivery, Documentation and white-boarding skills, Experience working with clients and managing conflicts"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_2fa970ee-3db"},"title":"Member of Technical Staff - Data Engineer","description":"<p>As Microsoft continues to push the boundaries of AI, we are on the lookout for individuals to work with us on the most interesting and challenging AI questions of our time. Our vision is bold and broad: to build systems that have true artificial intelligence across agents, applications, services, and infrastructure. 
It’s also inclusive: we aim to make AI accessible to all (consumers, businesses, developers) so that everyone can realize its benefits.</p>\n<p>We’re looking for someone who possesses technical prowess, a methodical approach to problem-solving, proficiency in big data processing technologies, and a mastery of templating to architect solutions that stand the test of time and who will bring an abundance of positive energy, empathy, and kindness to the team every day, in addition to being highly effective.</p>\n<p>The Data Platform Engineering team is responsible for building core data pipelines that help fine-tune models and support introspection and retrospection of data so that we can constantly evolve and improve human-AI interactions.</p>\n<p>Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.</p>\n<p>Starting January 26, 2026, MAI employees are expected to work from a designated Microsoft office at least four days a week if they live within 50 miles (U.S.) or 25 miles (non-U.S., country-specific) of that location. This expectation is subject to local law and may vary by jurisdiction.</p>\n<p>Responsibilities: Build scalable data pipelines for sourcing, transforming and publishing data assets for AI use cases. Work collaboratively with other Platform, infrastructure, application engineers as well as AI Researchers to build next generation data platform products and services. Ship high-quality, well-tested, secure, and maintainable code. Find a path to get things done despite roadblocks to get your work into the hands of users quickly and iteratively. Enjoy working in a fast-paced, design-driven, product development cycle. 
Embody our Culture and Values.</p>\n<p>Qualifications: Required Qualifications: Bachelor’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling or data engineering work OR Master’s Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, or data engineering work OR equivalent experience. Preferred Qualifications: 4+ years technical engineering experience building data processing applications (batch and streaming) with coding in languages including, but not limited to, Python, Java, Spark, SQL. Experience working with the Apache Hadoop ecosystem, Kafka, NoSQL, etc. 3+ years experience with data governance, data compliance and/or data security. 2+ years’ experience building scalable services on top of public cloud infrastructure like Azure, AWS, or GCP. Extensive use of datastores like RDBMS, key-value stores, etc. 2+ years’ experience building distributed systems at scale and extensive systems knowledge that spans bare-metal hosts to containers to networking. Ability to identify, analyze, and resolve complex technical issues, ensuring optimal performance, scalability, and user experience. Dedication to writing clean, maintainable, and well-documented code with a focus on application quality, performance, and security. Demonstrated interpersonal skills and ability to work closely with cross-functional teams, including product managers, designers, and other engineers. Ability to clearly communicate complex technical concepts to both technical and non-technical stakeholders. Interest in learning new technologies and staying up to date with industry trends, best practices, and emerging technologies in web development and AI. 
Ability to work in a fast-paced environment, manage multiple priorities, and adapt to changing requirements and deadlines.</p>","url":"https://yubhub.co/jobs/job_2fa970ee-3db","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Microsoft AI","sameAs":"https://microsoft.ai","logo":"https://logos.yubhub.co/microsoft.ai.png"},"x-apply-url":"https://microsoft.ai/job/member-of-technical-staff-data-engineer-4/","x-work-arrangement":"hybrid","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$139,900 – $274,800 per year","x-skills-required":["Python","Java","Spark","SQL","Apache Hadoop","Kafka","NoSQL","data governance","data compliance","data security","Azure","AWS","GCP","RDBMS","key-value stores"],"x-skills-preferred":["distributed systems","containerization","networking","web development","AI"],"datePosted":"2026-04-24T12:11:05.400Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Java, Spark, SQL, Apache Hadoop, Kafka, NoSQL, data governance, data compliance, data security, Azure, AWS, GCP, RDBMS, key-value stores, distributed systems, containerization, networking, web development, AI","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":139900,"maxValue":274800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_40513a9a-d3f"},"title":"AI Engineer - FDE (Forward Deployed Engineer)","description":"<p>The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. 
We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams.</p>\n<p>We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team. This team is the right fit for you if you love working with customers, teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems</li>\n<li>Own production rollouts of consumer and internally facing GenAI applications</li>\n<li>Serve as a trusted technical advisor to customers across a variety of domains</li>\n<li>Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally</li>\n<li>Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy</li>\n<li>Expertise in deploying production-grade GenAI applications, including evaluation and optimizations</li>\n<li>Extensive years of hands-on industry data science experience, leveraging common machine learning and data science tools, i.e. pandas, scikit-learn, PyTorch, etc.</li>\n<li>Experience building production-grade machine learning deployments on AWS, Azure, or GCP</li>\n<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) 
or equivalent practical experience</li>\n<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike</li>\n<li>Passion for collaboration, life-long learning, and driving business value through AI</li>\n<li>[Preferred] Experience using the Databricks Intelligence Platform and Apache Spark to process large-scale distributed datasets</li>\n<li>We require fluency in English and have a preference for candidates who also speak Mandarin</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_40513a9a-d3f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8503080002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["GenAI","HuggingFace","LangChain","DSPy","pandas","scikit-learn","PyTorch","AWS","Azure","GCP","Apache Spark"],"x-skills-preferred":["Databricks Intelligence Platform","Mosaic AI research"],"datePosted":"2026-04-24T12:11:02.068Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GenAI, HuggingFace, LangChain, DSPy, pandas, scikit-learn, PyTorch, AWS, Azure, GCP, Apache Spark, Databricks Intelligence Platform, Mosaic AI research"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7770176b-afc"},"title":"Sr. Solutions Engineer France","description":"<p>At Databricks, our core values are at the heart of everything we do. 
Our culture of proactiveness and a customer-centric mindset guides us to create a unified platform that makes data science and analytics accessible to everyone.</p>\n<p>We aim to inspire our customers to make informed decisions that push their business forward. We provide a user-friendly and intuitive platform that makes it easy to turn insights into action and fosters a culture of creativity, experimentation, and continuous improvement.</p>\n<p>As a Sr. Solutions Engineer, you will be an essential part of this mission, using your technical expertise to demonstrate how our Data and Intelligence Platform can help customers solve their complex data challenges.</p>\n<p>You&#39;ll work with a collaborative, customer-focused team that values innovation and creativity. You&#39;ll use your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>Join us in our quest to change how people work with data and make a better world!</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients throughout your assigned territory, providing technical and business value to Databricks customers in collaboration with Account Executives.</li>\n</ul>\n<ul>\n<li>Operate as an expert in big data analytics to excite customers about Databricks. 
You will develop into a &#39;champion&#39; and trusted advisor on multiple issues of architecture, design, and implementation to lead to the successful adoption of the Databricks Data Intelligence Platform.</li>\n</ul>\n<ul>\n<li>Scale best practices in your field and support customers by authoring reference architectures, how-tos, and demo applications, and help build the Databricks community in your region by leading workshops, seminars, and meet-ups.</li>\n</ul>\n<ul>\n<li>Grow your knowledge and expertise to the level of a technical and/or industry specialist.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Engage customers in technical sales, challenge their questions, guide clear outcomes, and communicate technical and value propositions.</li>\n</ul>\n<ul>\n<li>Passion for delivering technical propositions, identifying customers&#39; pain points and explaining essential areas for business value to develop a trusted advisor skillset.</li>\n</ul>\n<ul>\n<li>Knowledgeable in a core Big Data Analytics domain with some exposure to advanced Data Engineering and/or Data science use cases.</li>\n</ul>\n<ul>\n<li>Experience diving deeper into solution architecture and expertise with at least one major public cloud platform.</li>\n</ul>\n<ul>\n<li>Code in a core programming language like Python, Java, or Scala.</li>\n</ul>\n<ul>\n<li>A foundational understanding of Apache Spark architecture is preferable; hands-on skills will benefit the role.</li>\n</ul>\n<p>Notes on mandatory requirements:</p>\n<ul>\n<li>Flexibility to travel (up to 20-30% as required for customer meetings, events, and training).</li>\n</ul>\n<ul>\n<li>Business proficiency in French and English is required. 
Fluency in additional regional languages may be advantageous.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7770176b-afc","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8452392002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Data Engineering","Data Science","Apache Spark","Python","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:09:51.671Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Paris, France"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Data Engineering, Data Science, Apache Spark, Python, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_934f1c6a-e10"},"title":"IT Manager","description":"<p>As an IT Manager, you will be responsible for ensuring the availability, security, and efficiency of the company&#39;s entire technological infrastructure. 
You will act as a strategic link between the business and technology, leading key projects, coordinating with Global IT, managing resources, suppliers, and budget, and guaranteeing operational continuity in a constantly evolving environment.</p>\n<p><strong>Areas of Responsibility</strong></p>\n<p><strong>Availability and System Stability</strong></p>\n<p>Ensure the correct functioning of networks, servers, and cloud environments, streamlining maintenance and resolving critical incidents to ensure service continuity.</p>\n<p><strong>Cybersecurity</strong></p>\n<p>Implement cybersecurity policies, manage backups and contingency plans, and serve as the local security reference point (CISO).</p>\n<p><strong>Project Leadership</strong></p>\n<p>Plan and execute implementations, migrations, and improvements; define requirements with business areas; and manage costs, risks, and deadlines.</p>\n<p><strong>Corporate System and Application Management</strong></p>\n<p>Monitor the performance of SAP, C4C, O365, and other key systems, ensuring alignment with Global IT and evaluating new technological tools.</p>\n<p><strong>Technical Support and User Experience</strong></p>\n<p>Guarantee an effective helpdesk, monitor KPIs, and improve incident resolution processes.</p>\n<p><strong>Supplier and Contract Management</strong></p>\n<p>Negotiate with local suppliers, manage licenses and renewals, and ensure quality and compliance with standards.</p>\n<p><strong>Budget Control</strong></p>\n<p>Manage purchases and investments, administer the annual IT budget, and optimize costs by evaluating ROI.</p>\n<p><strong>Technological Strategy and Digitalization</strong></p>\n<p>Identify opportunities for automation and digital improvement and define the technological strategy aligned with the business.</p>\n<p><strong>People Management</strong></p>\n<p>Lead and develop the team, promoting good practices, continuous training, and a solid technological culture.</p>\n<p 
style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_934f1c6a-e10","directApply":true,"hiringOrganization":{"@type":"Organization","name":"FUCHS","sameAs":"https://jobs.fuchs.com","logo":"https://logos.yubhub.co/jobs.fuchs.com.png"},"x-apply-url":"https://jobs.fuchs.com/job/Castellbisbal-IT-Manager-B-08755/1376853833/","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["solid experience in networks, protocols, and standards","dominion of Microsoft 365 and Azure","knowledge of Windows / Windows Server and Linux","knowledge of Power Platform and data privacy","knowledge of processes in industrial companies"],"x-skills-preferred":["orientation to the customer","effective communication","teamwork","critical thinking","initiative","strategic capacity"],"datePosted":"2026-04-24T12:09:09.767Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Castellbisbal"}},"employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Automotive","skills":"solid experience in networks, protocols, and standards, dominion of Microsoft 365 and Azure, knowledge of Windows / Windows Server and Linux, knowledge of Power Platform and data privacy, knowledge of processes in industrial companies, orientation to the customer, effective communication, teamwork, critical thinking, initiative, strategic capacity"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a7186358-298"},"title":"Sr. Manager, Field Engineering - Agencies","description":"<p>We are seeking a dynamic Sr. Manager, Field Engineering - Agencies to lead a team of Solution Architects in our Agency segment. 
As a key member of our Field Engineering team, you will be responsible for driving the technical success of our customers in the Agencies vertical. This includes hiring, training, and growing a team of Solutions Architects, making customers successful with Databricks, and establishing relationships across the business to ensure customer and team success.</p>\n<p>The Agencies vertical sits at the intersection of data, AI, and advertising. You will lead a team that works with some of the most data-intensive companies in the world, holding companies managing billions in media spend and influencing the broader buy and sell side ecosystem.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Hiring, training, and growing a team of Solutions Architects</li>\n<li>Making customers successful with Databricks</li>\n<li>Establishing relationships across the business to ensure customer and team success</li>\n<li>Partnering with sales leadership to hit sales and consumption targets</li>\n<li>Keeping your team of SAs ahead of the technical curve</li>\n</ul>\n<p>To be successful in this role, you will need to have a strong technical background, excellent leadership skills, and the ability to communicate effectively with both technical and non-technical stakeholders.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a7186358-298","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8250195002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$192,100-$264,175 USD","x-skills-required":["data warehousing","big data","machine learning","solution architecture","technical leadership","customer success","sales 
leadership"],"x-skills-preferred":["Databricks","Apache Spark","Delta Lake","MLflow","Lakehouse"],"datePosted":"2026-04-24T12:08:15.332Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Northeast - United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data warehousing, big data, machine learning, solution architecture, technical leadership, customer success, sales leadership, Databricks, Apache Spark, Delta Lake, MLflow, Lakehouse","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":192100,"maxValue":264175,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_3b29aed1-6ae"},"title":"Sr. Manager, AI Forward Deployed Engineering (FDE)","description":"<p>We are looking for a world-class leader to lead and grow our AI FDE team. As a Sr. Manager, AI Forward Deployed Engineering (FDE), you will lead customers on their AI/ML transformation with Databricks, push the boundaries of our product, recruit and develop top data scientists/machine learning engineers, and manage a portfolio of key accounts.</p>\n<p>In this role, you will:</p>\n<ul>\n<li>Lead and scale a world-class AI/ML professional services team, including hiring, mentoring, and building a team structure to support long-term growth and execution at scale.</li>\n<li>Develop and expand executive relationships with key customers and partners, acting as a trusted advisor during complex technical engagements and AI transformations.</li>\n<li>Align with Field Engineering and Sales Leaders to define joint strategies for strategic accounts and ensure strong delivery coordination across functions.</li>\n<li>Lead strategic AI PS initiatives, practice development, and standardized delivery processes; design scalable engagement models and reusable solutions for repeatability across the 
global team.</li>\n<li>Shape cross-functional collaboration by influencing Product, R&amp;D, and GTM, ensuring voice-of-customer insights and delivery learnings help inform the product roadmap and GTM strategy.</li>\n<li>Own OKRs for AI-services led accounts, revenue, utilization, and public references.</li>\n<li>Represent Databricks as a thought leader in AI/ML.</li>\n</ul>\n<p>The ideal candidate will have extensive experience managing, hiring, and building a team of high-performing data scientists/ML engineers and leaders, with a track record of scaling organizations through developing scalable processes and cultivating leaders.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_3b29aed1-6ae","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8515642002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$211,800-$291,300 USD","x-skills-required":["Machine Learning","GenAI","Data Science","Cloud Computing","Leadership","Team Management","Strategic Planning","Cross-Functional Collaboration"],"x-skills-preferred":["Databricks","Apache Spark","Delta Lake","MLflow"],"datePosted":"2026-04-24T12:07:58.029Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Machine Learning, GenAI, Data Science, Cloud Computing, Leadership, Team Management, Strategic Planning, Cross-Functional Collaboration, Databricks, Apache Spark, Delta Lake, 
MLflow","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":211800,"maxValue":291300,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_66f32121-bb7"},"title":"Business Development Representative - Italian Speaking","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>This is an open application. About the Team:</p>\n<p>In this role, you will contribute to Cloudflare&#39;s revenue generation engine by delivering pipeline at scale to the Sales counterparts. You will do this through a maniacal focus on people, process, and tools. The Business Development organization is anchored in a culture focused on the development &amp; training of its employees, incubating talent across the company, teamwork, and celebrating success.</p>\n<p>With flawless execution, we believe the Business Development organization will be a competitive differentiator for Cloudflare. 
This is a great opportunity to be a member of our high performing Sales team at a hyper-growth technology company.</p>\n<p>The Business Development Representative (BDR) will:</p>\n<ul>\n<li>Be the first point of contact for customers that need help finding solutions</li>\n<li>Develop your customer centric sales skills to deliver a stellar customer experience</li>\n<li>Learn Cloudflare’s products and services in detail</li>\n</ul>\n<p>Similar to other roles at Cloudflare, this role has a tenure requirement of 18-24 months before you may be eligible to apply for another role within the company.</p>\n<p>Location: London, UK or Lisbon, Portugal</p>\n<p>Languages required: Italian and English</p>\n<p>This is a great opportunity to be an early member of a high performing sales team at a fast growing technology company. We are looking for ownership-oriented team members with excellent communication skills and technical curiosity.</p>\n<p>As the Business Development Representative (BDR), you will:</p>\n<ul>\n<li>Create excellent customer experiences</li>\n<li>Learn customer-centric sales skills</li>\n<li>Become an expert in Cloudflare’s product</li>\n</ul>\n<p>Team members have opportunities to move into roles across the organization, especially in mid-market sales, customer success, solutions engineering, and sales operations.</p>\n<p>Day in the Life of BDR at Cloudflare</p>\n<ul>\n<li>Own and meet target quota related to number of qualified opportunities, response SLA, value of sales pipeline, and revenue</li>\n<li>Develop new business opportunities from inbound and marketing-generated leads</li>\n<li>Discover pain points and use cases, map them to the broad set of Cloudflare solutions, and qualify Enterprise sales opportunities</li>\n<li>Work cross-functionally with stakeholders (account executives, marketing, sales operations, fellow BDRs)</li>\n<li>Report, track, and manage sales activities and results using SFDC</li>\n<li>Play an active role in the creation and 
iteration of team processes</li>\n</ul>\n<p>Examples of desirable skills, knowledge and experience</p>\n<ul>\n<li>Self-motivated; entrepreneurial spirit</li>\n<li>Ability to work as part of a team or independently</li>\n<li>Analytical, organization and time management skills</li>\n<li>Comfortable working in a fast-paced, dynamic environment</li>\n<li>Strong interpersonal communication skills</li>\n<li>Customer-oriented mindset with empathy and curiosity</li>\n<li>Aptitude to learn technical concepts/terms</li>\n<li>Ability to manage multiple tasks/projects simultaneously</li>\n</ul>\n<p>A minimum of 1 year of experience as a BDR, or in a similar capacity in the technology industry, is preferred; SaaS experience is a plus</p>\n<p>Experience in Outreach and Salesforce is a plus</p>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. 
This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we never store client IP addresses. Ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Sound like something you’d like to be a part of? We’d love to hear from you!</p>\n<p>This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.</p>\n<p>Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person&#39;s, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law.</p>\n<p>We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. 
San Francisco, CA 94107.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_66f32121-bb7","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7845802","x-work-arrangement":"hybrid","x-experience-level":null,"x-job-type":null,"x-salary-range":null,"x-skills-required":["Self-motivated; entrepreneurial spirit","Ability to work as part of a team or independently","Analytical, organization and time management skills","Comfortable working in a fast-paced, dynamic environment","Strong interpersonal communication skills","Customer-oriented mindset with empathy and curiosity","Aptitude to learn technical concepts/terms","Ability to manage multiple tasks/projects simultaneously"],"x-skills-preferred":["Experience in Outreach and Salesforce","Minimum 1 years of experience in BDR or in a similar capacity in technology industry","Specifically in SaaS will be a plus"],"datePosted":"2026-04-24T12:05:21.216Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"occupationalCategory":"Sales","industry":"Technology","skills":"Self-motivated; entrepreneurial spirit, Ability to work as part of a team or independently, Analytical, organization and time management skills, Comfortable working in a fast-paced, dynamic environment, Strong interpersonal communication skills, Customer-oriented mindset with empathy and curiosity, Aptitude to learn technical concepts/terms, Ability to manage multiple tasks/projects simultaneously, Experience in Outreach and Salesforce, Minimum 1 years of experience in BDR or in a similar capacity in technology industry, Specifically in SaaS will be a 
plus"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_546d42de-3ec"},"title":"Business Development Representative - German Speaking","description":"<p>About Us</p>\n<p>At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>As a Business Development Representative (BDR), you will contribute to Cloudflare&#39;s revenue generation engine by delivering pipeline at scale to the Sales counterparts. You will do this through a maniacal focus on people, process, and tools.</p>\n<p>The Business Development organization is anchored in a culture focused on the development &amp; training of its employees, incubating talent across the company, teamwork, and celebrating success.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Be the first point of contact for customers that need help finding solutions</li>\n<li>Develop your customer centric sales skills to deliver a stellar customer experience</li>\n<li>Learn Cloudflare’s products and services in detail</li>\n</ul>\n<p>Day in the Life of BDR at Cloudflare</p>\n<ul>\n<li>Own and meet target quota related to number of qualified opportunities, response SLA, value of sales pipeline, and revenue</li>\n<li>Develop new business opportunities from inbound and marketing-generated leads</li>\n<li>Discover pain points and use cases, map them to the broad set of Cloudflare solutions, and qualify Enterprise sales opportunities</li>\n<li>Work cross-functionally with stakeholders (account executives, marketing, sales operations, fellow BDRs)</li>\n<li>Report, track, and manage sales activities and results using SFDC</li>\n</ul>\n<p>Examples of desirable skills, knowledge and experience</p>\n<ul>\n<li>Self-motivated; entrepreneurial spirit</li>\n<li>Ability to 
work as part of a team or independently</li>\n<li>Analytical, organization and time management skills</li>\n<li>Comfortable working in a fast-paced, dynamic environment</li>\n<li>Strong interpersonal communication skills</li>\n<li>Customer-oriented mindset with empathy and curiosity</li>\n<li>Aptitude to learn technical concepts/terms</li>\n<li>Ability to manage multiple tasks/projects simultaneously</li>\n</ul>\n<p>What Makes Cloudflare Special?</p>\n<p>We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.</p>\n<p>Project Galileo: Since 2014, we&#39;ve equipped more than 2,400 journalism and civil society organizations in 111 countries with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost.</p>\n<p>Athenian Project: In 2017, we created the Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Since the project launched, we&#39;ve provided services to more than 425 local government election websites in 33 states.</p>\n<p>1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released.</p>\n<p>Here’s the deal: we never store client IP addresses. Ever. 
We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.</p>\n<p>Benefits</p>\n<ul>\n<li>Opportunity to be an early member of a high performing sales team at a fast growing technology company</li>\n<li>Ownership-oriented team members with excellent communication skills and technical curiosity</li>\n<li>Opportunities to move into roles across the organization, especially in mid-market sales, customer success, solutions engineering, and sales operations</li>\n</ul>\n<p>Required Skills:</p>\n<ul>\n<li>Self-motivated; entrepreneurial spirit</li>\n<li>Ability to work as part of a team or independently</li>\n<li>Analytical, organization and time management skills</li>\n<li>Comfortable working in a fast-paced, dynamic environment</li>\n<li>Strong interpersonal communication skills</li>\n<li>Customer-oriented mindset with empathy and curiosity</li>\n<li>Aptitude to learn technical concepts/terms</li>\n<li>Ability to manage multiple tasks/projects simultaneously</li>\n</ul>\n<p>Preferred Skills:</p>\n<ul>\n<li>Experience in Outreach and Salesforce</li>\n<li>Experience in BDR or in a similar capacity in technology industry</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_546d42de-3ec","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7504487","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Self-motivated; entrepreneurial spirit","Ability to work as part of a team or independently","Analytical, organization and time management skills","Comfortable working in a fast-paced, dynamic environment","Strong interpersonal communication skills","Customer-oriented mindset with empathy and curiosity","Aptitude to learn technical concepts/terms","Ability to manage multiple tasks/projects simultaneously"],"x-skills-preferred":["Experience in Outreach and Salesforce","Experience in BDR or in a similar capacity in technology industry"],"datePosted":"2026-04-24T12:04:54.542Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Self-motivated; entrepreneurial spirit, Ability to work as part of a team or independently, Analytical, organization and time management skills, Comfortable working in a fast-paced, dynamic environment, Strong interpersonal communication skills, Customer-oriented mindset with empathy and curiosity, Aptitude to learn technical concepts/terms, Ability to manage multiple tasks/projects simultaneously, Experience in Outreach and Salesforce, Experience in BDR or in a similar capacity in technology industry"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_71777139-e5d"},"title":"Business Development Representative, ASEAN (Bahasa speaking)","description":"<p>About Us</p>\n<p>At 
Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies.</p>\n<p>In this role, you will contribute to Cloudflare&#39;s revenue generation engine by delivering pipeline at scale to the Sales counterparts. You will do this through a maniacal focus on people, process, and tools. The Business Development organization is anchored in a culture focused on the development &amp; training of its employees, incubating talent across the company, teamwork, and celebrating success.</p>\n<p>With flawless execution, we believe the Business Development organization will be a competitive differentiator for Cloudflare. This is a great opportunity to be a member of our high performing Sales team at a hyper-growth technology company.</p>\n<p>Responsibilities</p>\n<ul>\n<li>Be the first point of contact for customers that need help finding solutions</li>\n<li>Develop your customer centric sales skills to deliver a stellar customer experience</li>\n<li>Learn Cloudflare’s products and services in detail</li>\n</ul>\n<p>About the Role</p>\n<p>In this role, you will be responsible for being the “face of Cloudflare” and account resource for our PAYGO customers. You will manage your own “book of business” to nurture relationships with our free, pro, and business customers to identify opportunities for expansion.</p>\n<p>This role requires you to have a basic understanding of Cloudflare’s suite of products to be able to provide a range of recommendations and solutions to our customers. 
You will be leveraging tools such as Google Sheets/Airtable, internal applications, Sales Navigator, and ZoomInfo to map key customers to the right product suite for them.</p>\n<p>Day in the Life of BDR at Cloudflare</p>\n<ul>\n<li>Own and meet target quota related to number of qualified opportunities, value of sales pipeline, and revenue</li>\n<li>Develop new business opportunities from existing customer base</li>\n<li>Identify target accounts with strategic timing and strong use cases through a qualitative and data-driven approach</li>\n<li>Work cross-functionally with stakeholders (account executives, marketing, sales operations, fellow BDRs)</li>\n<li>Help lead BDR team-wide campaigns or initiatives (we’re a collaborative group)</li>\n<li>Write emails and letters you’d love to open; make calls you’d love to receive; ask compelling questions</li>\n<li>Report, track, and manage sales activities and results using SFDC</li>\n<li>Play an active role in the creation and iteration of team processes</li>\n</ul>\n<p>Examples of desirable skills, knowledge and experience</p>\n<ul>\n<li>Self-motivated; entrepreneurial spirit</li>\n<li>Comfortable working in a fast-paced, dynamic environment</li>\n<li>Strong interpersonal communication skills</li>\n<li>Customer-oriented mindset with empathy and curiosity</li>\n<li>Aptitude to learn technical concepts/terms</li>\n<li>Ability to manage multiple tasks/projects simultaneously</li>\n<li>As the primary point of contact for customers in Indonesia, proficiency in Bahasa is required</li>\n<li>Minimum 1 year of experience in BDR or in a similar capacity in the technology industry is preferred; experience specifically in SaaS will be a plus</li>\n<li>Experience in prospecting to Enterprise level organisations and Public Sector is a plus</li>\n<li>Experience in Google Sheets, Outreach, SFDC reporting, and data analysis is a plus</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_71777139-e5d","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Cloudflare","sameAs":"https://www.cloudflare.com/","logo":"https://logos.yubhub.co/cloudflare.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/cloudflare/jobs/7676967","x-work-arrangement":"hybrid","x-experience-level":"entry","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["self-motivated","entrepreneurial spirit","strong interpersonal communication skills","customer-oriented mindset","aptitude to learn technical concepts/terms","ability to manage multiple tasks/projects simultaneously","proficiency in Bahasa","experience in BDR or similar capacity in technology industry","experience in prospecting to Enterprise level organisations and Public Sector","experience in Google Sheets, Outreach, SFDC reporting, and data analysis"],"x-skills-preferred":[],"datePosted":"2026-04-24T12:04:41.374Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Hybrid"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"self-motivated, entrepreneurial spirit, strong interpersonal communication skills, customer-oriented mindset, aptitude to learn technical concepts/terms, ability to manage multiple tasks/projects simultaneously, proficiency in Bahasa, experience in BDR or similar capacity in technology industry, experience in prospecting to Enterprise level organisations and Public Sector, experience in Google Sheets, Outreach, SFDC reporting, and data analysis"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_272750a8-710"},"title":"Consultant","description":"<p>As a Consultant at MHP, you will operate infrastructure in AWS using Terraform, create technical concepts for new features and enhancements within a Scrum Team, develop and maintain scalable 
Java Spring Boot microservices, and work with AWS and Kubernetes.</p>\n<p>You will have expertise in backend programming using Java and Spring Boot, experience with AWS, including services like S3, EC2, and Lambda, and experience with Terraform for creating and managing AWS infrastructure.</p>\n<p>You will also have experience with tools such as IntelliJ and REST tools (Postman or similar), proficiency in working with Kubernetes for microservices, advanced-level AWS certification, experience with Apache Kafka event streaming, experience working with the MongoDB database, and experience working with GitLab CI/CD pipelines.</p>\n<p>Your start date is by arrangement; you will work full-time (40h) with 27 vacation days and have a permanent (open-ended) employment contract. You will need a valid work permit and be fluent in written and spoken English.</p>\n<p>At MHP, you will continuously grow with your projects and objectives in an innovative and supportive environment. You will be part of a strong team spirit, where every win, big or small, belongs to all of us. 
You will welcome curiosity, creativity, and unconventional thinking patterns, and recognize the importance of healthy, tight-knit communities and sustainable environmental changes.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_272750a8-710","directApply":true,"hiringOrganization":{"@type":"Organization","name":"MHP","sameAs":"http://www.mhp.com/","logo":"https://logos.yubhub.co/mhp.com.png"},"x-apply-url":"https://jobs.porsche.com/index.php?ac=jobad&id=18226","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Spring Boot","AWS","Terraform","Kubernetes","IntelliJ","REST tools","Apache Kafka","MongoDB","GitLab CI/CD pipelines"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:25:42.569Z","employmentType":"FULL_TIME","occupationalCategory":"IT","industry":"Consulting","skills":"Java, Spring Boot, AWS, Terraform, Kubernetes, IntelliJ, REST tools, Apache Kafka, MongoDB, GitLab CI/CD pipelines"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_35760bd6-623"},"title":"Operations Manager","description":"<p>The Operations Manager oversees the operations strategy for Honda Federal Credit Union, encompassing physical branch office strategies, ITMs, operations, and operational support. This leadership role is accountable for the overall performance of operational initiatives ranging from new product development, managing operational systems, overseeing and optimizing operational efficiency, member experience, staff experience, and performance of operational service delivery within the credit union.</p>\n<p>Key responsibilities include developing and meeting Operations Department goals, budget, and objectives as outlined in HFCU&#39;s annual strategic plan. 
Ensuring compliance with all applicable laws and regulations related to operational practices by developing internal policies and procedures. Providing coaching and counseling to Operations leaders and staff to maximize efficiency and effectiveness.</p>\n<p>Actively participating in project implementation, including attending project team meetings, managing assigned projects, and completing project deliverables. Identifying and creating strategic project initiatives to meet the changing technological landscape related to ITMs, core systems, application portals, artificial intelligence, digital service support, and data analytics.</p>\n<p>Managing, maintaining, and owning the following operational components: ITM strategic plan, vendor relationship, servicing, regulatory/compliance; Account/Loan Origination platform, vendor relationship, updates, and servicing; Member-facing QC strategy and procedures; Digital services testing, program testing, updates procedures to data processing systems as assigned.</p>\n<p>Creating and maintaining strategic plan and updates for retail facilities, physical branch operation infrastructure, and prioritizing facility plans, security protocols, and collaborating with marketing to ensure appropriate member-facing and staff-facing branding and design.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_35760bd6-623","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Honda Federal Credit Union","sameAs":"https://careers.honda.com","logo":"https://logos.yubhub.co/careers.honda.com.png"},"x-apply-url":"https://careers.honda.com/us/en/job/10778/Operations-Manager","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$82,500.00 - $123,800.00","x-skills-required":["Bachelor's degree in business, finance, operations, or a related field","Seven or more 
years experience in a financial institution in a retail branch, member service, operations, or contact center environment","Three or more years of leadership experience in a financial institution in a member service capacity","Ability to organize and effectively direct subordinates","Ability to read, analyze, and interpret common financial and technical journals, financial reports, and legal documents"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:22:49.683Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Marysville"}},"employmentType":"FULL_TIME","occupationalCategory":"Finance","industry":"Finance","skills":"Bachelor's degree in business, finance, operations, or a related field, Seven or more years experience in a financial institution in a retail branch, member service, operations, or contact center environment, Three or more years of leadership experience in a financial institution in a member service capacity, Ability to organize and effectively direct subordinates, Ability to read, analyze, and interpret common financial and technical journals, financial reports, and legal documents","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":82500,"maxValue":123800,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_6a63b8e9-84c"},"title":"Principal Manufacturing Engineer","description":"<p>What Makes a Honda, is Who makes a Honda</p>\n<p>Honda has a clear vision for the future, and it’s a joyful one. We are looking for individuals with the skills, courage, persistence, and dreams that will help us reach our future-focused goals. At our core is innovation. Honda is constantly innovating and developing solutions to drive our business with record success. 
We strive to be a company that serves as a source of “power” that supports people around the world who are trying to do things based on their own initiative and that helps people expand their own potential. To this end, Honda strives to realize “the joy and freedom of mobility” by developing new technologies and an innovative approach to achieve a “zero environmental footprint.”</p>\n<p>We are looking for qualified individuals with diverse backgrounds, experiences, continuous improvement values, and a strong work ethic to join our team.</p>\n<p>If your goals and values align with Honda’s, we want you to join our team to Bring the Future!</p>\n<p><strong>JOB PURPOSE:</strong></p>\n<p>Lead in development of problem solving, continuous improvement and associate mentoring/development, in alignment with efficient production, new model introduction, business plan implementation and innovation to achieve or exceed targets that promote associate success.</p>\n<p><strong>KEY ACCOUNTABILITIES:</strong></p>\n<p>Recommend strategic investments in technology, tools, and processes to support future product lines and production capacity</p>\n<p>Continuous development of self, colleagues and team through training and mentoring to proactively improve areas of management expertise for personal and team growth</p>\n<p>Identify risks, remove barriers, and maintain alignment across engineering, operations, supply chain, and production teams</p>\n<p>Evaluate and assess concerns identified by team to help or guide them to address or correct and resolve issues utilizing PDCA</p>\n<p>Define and maintain a multiyear engineering roadmap aligned with business goals and manufacturing requirements</p>\n<p>Manage projects for business plan and new model from concept through launch, ensuring on-time and on-budget delivery</p>\n<p>Manage required organizational resources (manpower, structure, budget) necessary to achieve operational expectations</p>\n<p>Collaborate in the development of Business 
Plan Strategy and new technologies or model introductions to improve production characteristics and ensure voice of the floor is heard</p>\n<p>Ensure engineering work complies with manufacturing standards, regulatory requirements, and internal quality systems</p>\n<p><strong>QUALIFICATIONS, EXPERIENCE, &amp; SKILLS:</strong></p>\n<ul>\n<li>Minimum educational qualifications: Bachelor&#39;s degree in engineering or engineering technology (e.g., mechanical, electrical, industrial and robotics) or other equivalent relevant work experience.</li>\n<li>Minimum relevant work experience: 8 to 12 or more years of experience, depending on education</li>\n<li>Other job-specific skills:</li>\n<li>Knowledge and experience with Honda business planning process.</li>\n<li>Expertise within manufacturing department of assignment.</li>\n<li>Background in model development, forecasting, or capacity planning</li>\n<li>Strong presentation skills for communicating forecasts, risks, and strategic recommendations</li>\n<li>Be a pro-active problem-solver who anticipates risks and removes barriers.</li>\n</ul>\n<p><strong>Visa sponsorship issues:</strong></p>\n<p>This position is not eligible for work visa sponsorship</p>\n<p><strong>What differentiates Honda and makes us an employer of choice?</strong></p>\n<p><strong>Total Rewards:</strong></p>\n<p>Competitive Base Salary (pay will be based on several variables that include, but are not limited to, geographic location, work experience, etc.)</p>\n<p>Regional Bonus (when applicable)</p>\n<p>Manager Lease Car Program (No Cost - Car, Maintenance, and Insurance included)</p>\n<p>Industry-leading Benefit Plans (Medical, Dental, Vision, Rx)</p>\n<p>Paid time off, including vacation, holidays, shutdown</p>\n<p>Company Paid Short-Term and Long-Term Disability</p>\n<p>401K Plan with company match + additional contribution</p>\n<p>Relocation assistance (if eligible)</p>\n<p><strong>Career Growth:</strong></p>\n<p>Advancement Opportunities</p>\n<p>Career 
Mobility</p>\n<p>Education Reimbursement for Continued learning</p>\n<p>Training and Development Programs</p>\n<p><strong>Additional Offerings:</strong></p>\n<p>Lifestyle Account</p>\n<p>Childcare Reimbursement Account</p>\n<p>Elder Care Support</p>\n<p>Tuition Assistance &amp; Student Loan Repayment</p>\n<p>Wellbeing Program</p>\n<p>Community Service and Engagement Programs</p>\n<p>Product Programs</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_6a63b8e9-84c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Honda","sameAs":"https://careers.honda.com","logo":"https://logos.yubhub.co/careers.honda.com.png"},"x-apply-url":"https://careers.honda.com/us/en/job/10602/Principal-Manufacturing-Engineer","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$99,100.00 - $148,600.00","x-skills-required":["Knowledge and experience with Honda business planning process","Expertise within manufacturing department of assignment","Background in model development, forecasting, or capacity planning","Strong presentation skills for communicating forecasts, risks, and strategic recommendations","Be a pro-active problem-solver who anticipates risks and removes barriers"],"x-skills-preferred":[],"datePosted":"2026-04-22T17:22:45.570Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Haw River"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Automotive","skills":"Knowledge and experience with Honda business planning process, Expertise within manufacturing department of assignment, Background in model development, forecasting, or capacity planning, Strong presentation skills for communicating forecasts, risks, and strategic recommendations, Be a pro-active problem-solver who anticipates risks and removes 
barriers","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":99100,"maxValue":148600,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b33cbd91-bc9"},"title":"Systematic Production Support Engineer","description":"<p>We are seeking an experienced Systematic Production Support Engineer to help us scale our systematic operations and support engineering capabilities. This role directly supports portfolio management teams across Millennium, with operational excellence at the core. Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>As a Systematic Production Support Engineer, you will be responsible for building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations. 
You will work closely with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions, as well as automated systems and processes focused on trading and operations.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Building, developing, and maintaining a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations</li>\n<li>Working with portfolio managers and other internal customers to reduce operational risk through the implementation of monitoring, reporting, and trade workflow solutions</li>\n<li>Implementing automated systems and processes focused on trading and operations</li>\n<li>Streamlining development and deployment processes</li>\n</ul>\n<p>Technical qualifications include:</p>\n<ul>\n<li>5+ years of development experience in Python</li>\n<li>Experience working in a Linux/Unix environment</li>\n<li>Experience working with PostgreSQL or other relational databases</li>\n</ul>\n<p>Preferred skills and experience include:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning, and Generative AI models</li>\n<li>Experience operating and monitoring low-latency trading environments</li>\n<li>Familiarity with quantitative finance and electronic trading concepts</li>\n<li>Familiarity with financial data</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#</li>\n<li>Experience with Apache/Confluent Kafka</li>\n<li>Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)</li>\n<li>Experience with containerization and orchestration technologies</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure</li>\n<li>Contributions to open-source projects</li>\n</ul>\n<p>This 
is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b33cbd91-bc9","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954716155","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Python","Linux/Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models","low-latency trading environments","quantitative finance","electronic trading concepts","financial data","equities","futures","FX","distributed systems","backend development","C/C++","Java","Scala","Go","C#","Apache/Confluent Kafka","SDLC pipelines","containerization","orchestration technologies","AWS","GCP","Azure"],"x-skills-preferred":["Understanding of NLP, supervised/non-supervised learning, and Generative AI models","Experience operating and monitoring low-latency trading environments","Familiarity with quantitative finance and electronic trading concepts","Familiarity with financial data","Broad understanding of equities, futures, FX, or other financial instruments","Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#","Experience with Apache/Confluent Kafka","Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline)","Experience with containerization and orchestration technologies","Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure","Contributions to open-source 
projects"],"datePosted":"2026-04-18T22:14:36.583Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Miami, Florida, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux/Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI models, low-latency trading environments, quantitative finance, electronic trading concepts, financial data, equities, futures, FX, distributed systems, backend development, C/C++, Java, Scala, Go, C#, Apache/Confluent Kafka, SDLC pipelines, containerization, orchestration technologies, AWS, GCP, Azure, Understanding of NLP, supervised/non-supervised learning, and Generative AI models, Experience operating and monitoring low-latency trading environments, Familiarity with quantitative finance and electronic trading concepts, Familiarity with financial data, Broad understanding of equities, futures, FX, or other financial instruments, Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#, Experience with Apache/Confluent Kafka, Experience automating SDLC pipelines (e.g., Jenkins, TeamCity, or AWS CodePipeline), Experience with containerization and orchestration technologies, Experience building and deploying systems that utilize services provided by AWS, GCP, or Azure, Contributions to open-source projects"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0987988a-011"},"title":"Feature Framework Engineer","description":"<p>The Systematic Platform Execution &amp; Exchange Data (SPEED) Team is at the core of Millennium&#39;s Equities, Quant Strategies, and Shared Services Technology organisation.</p>\n<p>We are looking for a C++ engineer to design and build high-performance, low-latency applications that process large volumes of real-time data.</p>\n<p>Principal 
Responsibilities:</p>\n<ul>\n<li>Design, implement, and maintain high-performance C++ services handling high message rates and low-latency workloads.</li>\n<li>Optimise existing components for latency, throughput, and CPU/memory efficiency.</li>\n<li>Develop and tune networking, messaging, and I/O layers to handle large data volumes reliably.</li>\n<li>Profile and debug performance issues at application, OS, and network levels.</li>\n<li>Collaborate with quantitative, trading, and infrastructure teams to translate requirements into robust technical solutions.</li>\n<li>Write clean, production-quality code with appropriate tests and documentation.</li>\n<li>Participate in code reviews, design discussions, and continuous improvement of engineering practices.</li>\n</ul>\n<p>Required Qualifications:</p>\n<ul>\n<li>Strong proficiency in modern C++ (C++17/20 or later).</li>\n<li>5+ years of experience.</li>\n<li>Analytics focus: KDB/Q experience with large market data, and modern data analysis with PyTorch, pandas, and modern tooling including Apache Arrow.</li>\n<li>Familiarity with basic statistics as applied to financial research.</li>\n<li>Proven experience building performance-critical, real-time, or low-latency systems.</li>\n<li>Strong knowledge of computer science fundamentals: data structures, algorithms, memory management, and optimisation.</li>\n<li>Experience using profiling, benchmarking, and performance analysis tools.</li>\n<li>Proficiency with version control (Git) and standard build systems.</li>\n<li>Excellent problem-solving skills and attention to detail.</li>\n<li>Strong interpersonal skills with a proven ability to navigate large organisations.</li>\n</ul>\n<p>Preferred Qualifications:</p>\n<ul>\n<li>Experience with kernel bypass or user-space networking 
technologies.</li>\n<li>Familiarity with AI productivity enhancing coding tools.</li>\n<li>Experience in financial markets, market data distribution, order routing, or exchange connectivity.</li>\n<li>Experience with monitoring/telemetry for high-performance systems.</li>\n<li>Familiarity with scripting languages for tooling and automation.</li>\n</ul>\n<p>Personal Attributes:</p>\n<ul>\n<li>Obsessed with performance, measurement, and data-driven optimisation.</li>\n<li>Comfortable owning features end-to-end and operating in a production environment.</li>\n<li>Clear communicator who can work closely with both technical and non-technical stakeholders.</li>\n<li>Proactive, self-directed, and able to thrive in a highly iterative environment.</li>\n</ul>\n<p>The estimated base salary range for this position is $175,000 to $250,000, which is specific to New York and may change in the future.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0987988a-011","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Unknown","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755955682418","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$175,000 to $250,000","x-skills-required":["modern C++","KDB / Q","pytorch","pandas","Apache arrow","data structures","algorithms","memory management","optimisation","profiling","benchmarking","performance analysis tools","version control","standard build systems"],"x-skills-preferred":["kernel bypass","user-space networking technologies","AI productivity enhancing coding tools","financial 
markets","market data distribution","order routing","exchange connectivity","monitoring/telemetry","scripting languages"],"datePosted":"2026-04-18T22:14:03.382Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"modern C++, KDB / Q, pytorch, pandas, Apache arrow, data structures, algorithms, memory management, optimisation, profiling, benchmarking, performance analysis tools, version control, standard build systems, kernel bypass, user-space networking technologies, AI productivity enhancing coding tools, financial markets, market data distribution, order routing, exchange connectivity, monitoring/telemetry, scripting languages","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":175000,"maxValue":250000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_32932504-2b5"},"title":"Systematic Production Support Engineer","description":"<p>We are looking for an experienced professional to help us scale our systematic operations and support engineering capabilities.</p>\n<p>This role directly supports portfolio management teams across Millennium, with operational excellence at the core. 
Our efforts are focused on delivering the highest quality returns to our investors – providing a world-class and reliable trading and technology platform is essential to this mission.</p>\n<p>This is a unique opportunity to drive significant value creation for one of the world&#39;s leading investment managers.</p>\n<p>Principal Responsibilities:</p>\n<ul>\n<li>Build, develop and maintain a reliable, scalable, and integrated platform for trading strategy monitoring, reporting, and operations.</li>\n<li>Work with portfolio managers and other internal customers to reduce operational risk through:</li>\n<li>Implementation of monitoring, reporting, and trade workflow solutions.</li>\n<li>Implementation of automated systems and processes focused on trading and operations.</li>\n<li>Streamlining development and deployment processes.</li>\n<li>Implementation of MCP servers focused on assisting the rest of the Support Engineering team as well as proactively monitoring the production environment.</li>\n</ul>\n<p>Technical Qualifications:</p>\n<ul>\n<li>5+ years of development experience in Python.</li>\n<li>Experience working in a Linux / Unix environment.</li>\n<li>Experience working with PostgreSQL or other relational databases.</li>\n<li>Ability to understand and discuss requirements from portfolio managers.</li>\n</ul>\n<p>Preferred Skills and Experience:</p>\n<ul>\n<li>Understanding of NLP, supervised/non-supervised learning and Generative AI models.</li>\n<li>Experience operating and monitoring low-latency trading environments.</li>\n<li>Familiarity with quantitative finance and electronic trading concepts.</li>\n<li>Familiarity with financial data.</li>\n<li>Broad understanding of equities, futures, FX, or other financial instruments.</li>\n<li>Experience designing and developing distributed systems with a focus on backend development in C/C++, Java, Scala, Go, or C#.</li>\n<li>Experience with Apache / Confluent Kafka.</li>\n<li>Experience automating SDLC pipelines (e.g.,
Jenkins, TeamCity, or AWS CodePipeline).</li>\n<li>Experience with containerization and orchestration technologies.</li>\n<li>Experience building and deploying systems that utilize services provided by AWS, GCP or Azure.</li>\n<li>Contributions to open-source projects.</li>\n</ul>\n<p>The estimated base salary range for this position is $100,000 to $175,000, which is specific to New York and may change in the future. Millennium pays a total compensation package which includes a base salary, discretionary performance bonus, and a comprehensive benefits package. When finalizing an offer, we take into consideration an individual&#39;s experience level and the qualifications they bring to the role to formulate a competitive total compensation package.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_32932504-2b5","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Equity IT","sameAs":"https://mlp.eightfold.ai","logo":"https://logos.yubhub.co/mlp.eightfold.ai.png"},"x-apply-url":"https://mlp.eightfold.ai/careers/job/755954627501","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$100,000 to $175,000","x-skills-required":["Python","Linux / Unix","PostgreSQL","NLP","supervised/non-supervised learning","Generative AI models"],"x-skills-preferred":["Apache / Confluent Kafka","C/C++","Java","Scala","Go","C#","containerization","orchestration technologies","AWS","GCP","Azure"],"datePosted":"2026-04-18T22:13:42.254Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States of America · Old Greenwich, Connecticut, United States of America"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Finance","skills":"Python, Linux / Unix, PostgreSQL, NLP, supervised/non-supervised learning, Generative AI 
models, Apache / Confluent Kafka, C/C++, Java, Scala, Go, C#, containerization, orchestration technologies, AWS, GCP, Azure","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":100000,"maxValue":175000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_d46c1e8f-b8c"},"title":"Strategic Deals Lead, Compute & Infrastructure","description":"<p>We are seeking a Strategic Deals Lead to join our Compute &amp; Infrastructure team and drive the planning and execution of programs critical to Anthropic&#39;s compute infrastructure strategy.</p>\n<p>In this role, you will manage internal and external stakeholders to bring clarity to our compute technology roadmaps, help prioritise across technical and non-technical teams, and focus on securing and delivering compute capacity.</p>\n<p>As a key member of our team, you will work closely with engineering, finance, and partnership teams to drive execution of technical roadmaps, support deal structuring, and manage the operational aspects of our compute partnerships.</p>\n<p>This role combines technical program management with elements of strategic operations, partnership development, and financial analysis.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Drive cross-functional coordination across Engineering, Finance, and external partners to define, scope, and deliver on compute partnership initiatives</li>\n<li>Develop and maintain detailed project plans, timelines, and status reporting for technical programs related to compute infrastructure and partnerships</li>\n<li>Partner with engineering leaders to translate technical requirements into actionable roadmaps and track execution against milestones</li>\n<li>Support the structuring and negotiation of strategic compute deals, including financial modelling, term analysis, and vendor
evaluation</li>\n<li>Build and maintain relationships with key stakeholders at cloud providers and infrastructure partners</li>\n<li>Develop and manage systems, processes, and documentation to support program management efficiency and stakeholder visibility</li>\n<li>Analyse financial and operational data to inform decision-making on compute capacity planning and vendor strategy</li>\n<li>Provide clear and transparent reporting on program status, issues, and risks to leadership</li>\n</ul>\n<p>You might be a good fit if you have:</p>\n<ul>\n<li>8-10 years of experience in technical product/program management, business development, or strategic partnerships roles at technology companies</li>\n<li>Experience structuring and negotiating strategic customer deals or partnerships within the technology space (cloud services, semiconductors, data centre/infrastructure)</li>\n<li>Background in cloud computing, data centre infrastructure, compute/silicon development, or technology-focused investment banking or consulting</li>\n<li>Familiarity with data centre infrastructure, compute hardware, and/or silicon development cycles</li>\n<li>Comfort with financial analysis and modelling; experience with vendor financing arrangements is a plus</li>\n<li>Strong interpersonal and communication skills with the ability to influence and align diverse stakeholders</li>\n<li>Ability to drive clarity in ambiguous environments and manage competing priorities with high-quality execution</li>\n<li>A track record of managing cross-functional initiatives in fast-paced, scaling technology environments</li>\n<li>A passion for Anthropic&#39;s mission and ensuring safe AI development</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience managing external partnerships with large-scale cloud providers or hardware
vendors</li>\n<li>Understanding of AI/ML infrastructure requirements and compute capacity planning</li>\n<li>Experience with vendor financing, equipment leasing, or infrastructure investment analysis</li>\n<li>Background in technical due diligence or technology M&amp;A</li>\n</ul>\n<p>The annual compensation range for this role is $250,000-$310,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_d46c1e8f-b8c","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5169670008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$250,000-$310,000 USD","x-skills-required":["Technical product/program management","Business development","Strategic partnerships","Cloud computing","Data centre infrastructure","Compute/silicon development","Financial analysis and modelling","Vendor financing arrangements"],"x-skills-preferred":["Experience structuring and negotiating strategic customer deals or partnerships","Background in technology-focused investment banking or consulting","Familiarity with data centre infrastructure, compute hardware, and/or silicon development cycles","Understanding of AI/ML infrastructure requirements and compute capacity planning","Experience with vendor financing, equipment leasing, or infrastructure investment analysis"],"datePosted":"2026-04-18T16:00:33.921Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"San Francisco, CA"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Technical product/program management, Business development, Strategic partnerships, Cloud
computing, Data centre infrastructure, Compute/silicon development, Financial analysis and modelling, Vendor financing arrangements, Experience structuring and negotiating strategic customer deals or partnerships, Background in technology-focused investment banking or consulting, Familiarity with data centre infrastructure, compute hardware, and/or silicon development cycles, Understanding of AI/ML infrastructure requirements and compute capacity planning, Experience with vendor financing, equipment leasing, or infrastructure investment analysis","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":250000,"maxValue":310000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_76c9a01c-58a"},"title":"Data Center Portfolio Planning & Execution Lead","description":"<p>We&#39;re looking for a Data Center Portfolio Planning &amp; Execution Lead to drive the planning and framework that ensures every site moves smoothly from the front-end phases through design, construction, equipment delivery, commissioning, and operational readiness.</p>\n<p>This role owns the portfolio-level operating system: translating the capacity supply pipeline into integrated project plans that span every phase of delivery, building the tooling and automation that runs it at scale, and maintaining Anthropic&#39;s datacenter capacity catalog, a lifecycle view of our fleet that supports both execution orchestration and steady-state capacity planning.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Manage the integrated master plan for each site across the portfolio, stitching power ramp, design, construction, sourcing, deployment, and operations readiness into a single coordinated schedule with clear milestones and dependencies</li>\n<li>Develop and maintain Anthropic&#39;s datacenter catalog for deployed and in-progress capacity.
Manage the portfolio-level view of physical infrastructure &amp; cluster interfaces across all sites and partners to enable planning decisions such as equipment fungibility, accelerator platforms, tech insertion, or workload allocation</li>\n<li>Define and run the stage gates and decision locks for cluster delivery, from lease execution to design lock through procurement, construction, equipment installation, commissioning, and handover</li>\n<li>Drive gate reviews, manage exceptions, and track the downstream impact of deviations across the portfolio</li>\n<li>Manage portfolio reviews and risk tracking for DC Infra leadership and Compute Supply</li>\n</ul>\n<p>Tooling &amp; process:</p>\n<ul>\n<li>Develop tooling and automation to enable cross-functional planning flow-down from datacenter capacity availability dates</li>\n<li>Partner with Design, Supply Chain, Construction, and DC Ops program leads to drive cross-pillar process improvements as the portfolio scales</li>\n</ul>\n<p>You may be a good fit if you:</p>\n<ul>\n<li>Are familiar with the full datacenter buildout lifecycle: pipeline → design → sourcing → construction → Cx → deployment</li>\n<li>Have run integrated portfolio or master-schedule planning across a fleet of capital projects (datacenter, energy, fab, or similar) where multiple functional orgs each own a phase</li>\n<li>Have built a stage-gate or decision-lock system from scratch and gotten functional leads to adopt it</li>\n<li>Have re-architected a deployment or delivery process at scale and can point to the cycle-time or throughput result</li>\n<li>Build the tooling yourself using AI-assisted development; stand up planning dashboards, schedule automation, and data pipelines from Smartsheet/P6/partner systems</li>\n<li>Proactively surface schedule risk across functions; comfortable flagging a problem in someone else&#39;s domain before it becomes a slip</li>\n<li>Track record of driving outcomes through influence with cross-functional
partners</li>\n</ul>\n<p>Strong candidates may also have:</p>\n<ul>\n<li>Experience building a portfolio planning and execution function from scratch at a hyperscaler or large industrial owner</li>\n<li>Exposure to capacity planning or S&amp;OP processes that connect demand forecast to physical build</li>\n<li>Experience product-managing internal planning, workflow, or scheduling systems</li>\n</ul>\n<p>The annual compensation range for this role is $365,000-$485,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_76c9a01c-58a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5188939008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$365,000-$485,000 USD","x-skills-required":["data center portfolio planning","execution lead","portfolio-level operating system","capacity supply pipeline","integrated project plans","tooling and automation","datacenter capacity catalog","lifecycle view of fleet","execution orchestration","steady-state capacity planning","stage gates","decision locks","cluster delivery","lease execution","design lock","procurement","construction","equipment installation","commissioning","handover","cross-functional planning","flow-down","datacenter capacity availability dates","cross-pillar process improvements","AI-assisted development","planning dashboards","schedule automation","data pipelines","Smartsheet","P6","partner systems","schedule risk","cross-functional partners","portfolio planning","execution function","hyperscaler","large industrial owner","capacity planning","S&OP processes","demand forecast","physical build","internal planning","workflow","scheduling 
systems"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:59:03.702Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data center portfolio planning, execution lead, portfolio-level operating system, capacity supply pipeline, integrated project plans, tooling and automation, datacenter capacity catalog, lifecycle view of fleet, execution orchestration, steady-state capacity planning, stage gates, decision locks, cluster delivery, lease execution, design lock, procurement, construction, equipment installation, commissioning, handover, cross-functional planning, flow-down, datacenter capacity availability dates, cross-pillar process improvements, AI-assisted development, planning dashboards, schedule automation, data pipelines, Smartsheet, P6, partner systems, schedule risk, cross-functional partners, portfolio planning, execution function, hyperscaler, large industrial owner, capacity planning, S&OP processes, demand forecast, physical build, internal planning, workflow, scheduling systems","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":365000,"maxValue":485000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_04c1ff49-2d1"},"title":"Data Platform Solutions Architect (Professional Services)","description":"<p>We&#39;re hiring for multiple roles within our Professional Services team. As a Data Platform Solutions Architect, you will work with clients on short to medium-term customer engagements on their big data challenges using the Databricks platform. 
You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope a variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Extensive experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel
to customers 10% of the time</li>\n</ul>\n<p>[Preferred] Databricks Certification but not essential</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_04c1ff49-2d1","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8396801002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","technical project delivery","documentation and white-boarding skills"],"x-skills-preferred":["Databricks Certification"],"datePosted":"2026-04-18T15:58:52.546Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"London, United Kingdom"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, technical project delivery, documentation and white-boarding skills, Databricks Certification"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e58b08f7-c31"},"title":"Senior Data Engineer","description":"<p>As a Senior Data Engineer on the Analytics Team, you will collaborate with stakeholders across the company to design, build and implement data pipelines and models that enable our next generation of technology to be deployed around the world. 
You will have a hand in helping shape the data platform vision at Anduril.</p>\n<p>We&#39;re looking for software and data engineers who are seeking high impact collaborative roles focused on driving operational execution. Ideally you are looking to learn what it takes to build the next generation of defence technology.</p>\n<p>Your responsibilities will include leading the design and roadmap for our data platform, partnering with operations, product, and engineering to advocate best practices and build supporting systems and infrastructure for the various data needs, owning the ingest and egress frameworks for data pipelines that stitch together various data sources in order to produce valuable data products that drive the business, and managing a large user base and providing true data self-service at scale.</p>\n<p>We use Palantir Foundry as our central hub for data-driven applications, visualizations and large-scale data analysis across the Anduril org. We also use SQLMesh for data transformations, Athena for querying data, Apache Iceberg as our table format, and Flyte for orchestration.</p>\n<p>Required qualifications include 5+ years of experience in a data engineering role building products, ideally in a fast-paced environment, good foundations in Python or another language, experience with Spark, PySpark, SQL and dbt, experience with Enterprise Data Systems like Palantir Foundry, and experience with or interest in learning how to develop data services and data products.</p>\n<p>The salary range for this role is $166,000-$220,000 USD.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a 
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e58b08f7-c31","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anduril","sameAs":"https://www.anduril.com/","logo":"https://logos.yubhub.co/anduril.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/andurilindustries/jobs/4587312007","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$166,000-$220,000 USD","x-skills-required":["Python","Spark","PySpark","SQL","dbt","Palantir Foundry","SQLMesh","Athena","Apache Iceberg","Flyte"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:44.003Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Costa Mesa, California, United States"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Python, Spark, PySpark, SQL, dbt, Palantir Foundry, SQLMesh, Athena, Apache Iceberg, Flyte","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":166000,"maxValue":220000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_5b244f27-9fd"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform. You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>You will work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases. 
You will work with engagement managers to scope a variety of professional services work with input from the customer.</p>\n<p>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications. Consult on architecture and design; bootstrap or implement customer projects which lead to a customer&#39;s successful understanding, evaluation and adoption of Databricks.</p>\n<p>Provide an escalated level of support for customer operational issues. You will work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet the customer&#39;s needs.</p>\n<p>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide rapid resolution for engagement-specific product and support issues.</p>\n<p>The ideal candidate will have 6+ years of experience in data engineering, data platforms &amp; analytics, comfortable writing code in either Python or Scala, working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one, deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals, familiarity with CI/CD for production deployments, working knowledge of MLOps, design and deployment of performant end-to-end data architectures, experience with technical project delivery - managing scope and timelines, documentation and white-boarding skills, experience working with clients and managing conflicts, build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</p>\n<p>Travel to customers 20% of the time.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a
href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_5b244f27-9fd","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461258002","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data platforms & analytics","Python","Scala","Cloud ecosystems (AWS, Azure, GCP)","Apache Spark","CI/CD for production deployments","MLOps","end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:34.588Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Raleigh, North Carolina"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data platforms & analytics, Python, Scala, Cloud ecosystems (AWS, Azure, GCP), Apache Spark, CI/CD for production deployments, MLOps, end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_e5fa8591-cb8"},"title":"Solutions Architect: Data & AI","description":"<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. 
You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>\n<p>You will help our customers achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value.</p>\n<p>What we look for:</p>\n<ul>\n<li>Strong consulting / customer-facing experience, working with external clients across a variety of industry markets</li>\n<li>Core strength in either data engineering or data science technologies</li>\n<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>\n<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>\n<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company.
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S.
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_e5fa8591-cb8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8353757002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","R","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:58:24.843Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Bengaluru, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_a38ec886-62e"},"title":"AI Engineer - FDE (Forward Deployed Engineer)","description":"<p>Mission</p>\n<p>The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications.</p>\n<p>We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. 
We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team.</p>\n<p>This team is the right fit for you if you love working with customers and teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. This role can be remote.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems</li>\n<li>Own production rollouts of consumer and internally facing GenAI applications</li>\n<li>Serve as a trusted technical advisor to customers across a variety of domains</li>\n<li>Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally</li>\n<li>Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy</li>\n<li>5+ years of relevant experience as a Data Scientist, preferably in a consulting role</li>\n<li>Expertise in deploying production-grade GenAI applications, including evaluation and optimizations</li>\n<li>Extensive hands-on industry data science experience, leveraging common machine learning and data science tools, e.g. pandas, scikit-learn, PyTorch</li>\n<li>Experience building production-grade machine learning deployments on AWS, Azure, or GCP</li>\n<li>Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) 
or equivalent practical experience</li>\n<li>Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike</li>\n<li>Passion for collaboration, life-long learning, and driving business value through AI</li>\n<li>Preferred experience using the Databricks Intelligence Platform and Apache Spark to process large-scale distributed datasets</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_a38ec886-62e","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8099751002","x-work-arrangement":"remote","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["GenAI","HuggingFace","LangChain","DSPy","pandas","scikit-learn","PyTorch","AWS","Azure","GCP","Apache Spark"],"x-skills-preferred":["Databricks Intelligence Platform","Mosaic AI research"],"datePosted":"2026-04-18T15:58:10.707Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote - India"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"GenAI, HuggingFace, LangChain, DSPy, pandas, scikit-learn, PyTorch, AWS, Azure, GCP, Apache Spark, Databricks Intelligence Platform, Mosaic AI research"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b05b9f90-7d3"},"title":"Data Center Engineer, Resource Efficiency – Compute Supply","description":"<p><strong>About the Role</strong></p>\n<p>As a Power &amp; Resource Efficiency Engineer, you&#39;ll sit at the intersection of IT and facilities, building the systems, 
models, and control loops that optimize how we allocate and consume power, cooling, and physical capacity across our TPU/GPU fleet.</p>\n<p>You&#39;ll own the technical strategy for turning raw data center capacity into reliable, efficient compute, working across power topology, workload scheduling, and real-time telemetry to push utilization as close to the physical envelope as possible while maintaining our availability commitments.</p>\n<p><strong>Responsibilities</strong></p>\n<ul>\n<li>Build models that forecast consumption across electrical and mechanical subsystems, informing capacity planning, energy procurement, oversubscription targets and risks, including statistical modeling of cluster utilization, workload profiles, and failure modes.</li>\n<li>Design IT/OT interfaces that bridge compute orchestration with facility controls, enabling real-time telemetry across accelerator hardware, power distribution, cooling, and schedulers.</li>\n<li>Build and operate load management systems that use power and cooling topology to enable load management and power/thermal-aware placement to maximize throughput while meeting SLOs.</li>\n<li>Partner with data center providers to drive design optimizations and hold them accountable to SLA-grade performance standards, providing technical diligence on partner architectures.</li>\n</ul>\n<p><strong>What We&#39;re Looking For</strong></p>\n<ul>\n<li>Deep knowledge of data center power distribution and cooling architectures, and how they interact with IT load profiles. Experience with reliability engineering, SLA development, and failure-mode analysis.</li>\n<li>Proficiency in statistical modeling and simulation for infrastructure capacity or power utilization.</li>\n<li>Familiarity with SCADA/BMS/EPMS, telemetry pipelines, and control systems. 
Experience building software that bridges IT and OT.</li>\n<li>Exposure to accelerator deployments and their power management interfaces strongly preferred.</li>\n<li>Demand response, grid interaction, or behind-the-meter generation experience is a plus.</li>\n<li>Ability to translate between infrastructure engineering, software teams, and external partners.</li>\n</ul>\n<p><strong>Required Qualifications</strong></p>\n<ul>\n<li>Bachelor&#39;s degree in Electrical Engineering, Mechanical Engineering, Power Systems, Controls Engineering, or a related field.</li>\n<li>5+ years of experience in data center infrastructure or facility engineering.</li>\n<li>Demonstrated experience with data center power distribution and cooling system architectures.</li>\n<li>Experience building or operating software-based power management, load scheduling, or control systems.</li>\n<li>Proficiency in Python or similar languages for statistical modeling, simulation, or automation of data center infrastructure optimizations.</li>\n<li>Familiarity with SCADA, BMS, EPMS, or industrial control systems and associated protocols (Modbus, BACnet, SNMP).</li>\n<li>Track record of cross-functional collaboration across hardware, software, and facilities teams.</li>\n</ul>\n<p><strong>Preferred Qualifications</strong></p>\n<ul>\n<li>Master&#39;s or PhD in Controls, Power Systems, or related discipline and 3+ years of experience in data center infrastructure or facility engineering.</li>\n<li>Experience with accelerator-class deployments and their power management interfaces.</li>\n<li>Background in control theory, dynamical systems, or cyber-physical systems design.</li>\n<li>Experience with energy storage, microgrid integration, demand response, or behind-the-meter generation.</li>\n<li>Familiarity with reliability engineering 
methods.</li>\n<li>Experience with SLA development, availability modeling, or service credit frameworks.</li>\n<li>Exposure to ML/optimization techniques applied to infrastructure or energy systems.</li>\n</ul>\n<p><strong>Salary</strong></p>\n<p>The annual compensation range for this role is $320,000-$405,000 USD.</p>\n<p><strong>Benefits</strong></p>\n<p>We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with our team.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b05b9f90-7d3","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Anthropic","sameAs":"https://www.anthropic.com/","logo":"https://logos.yubhub.co/anthropic.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/anthropic/jobs/5159642008","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$320,000-$405,000 USD","x-skills-required":["data center power distribution","cooling architectures","IT load profiles","reliability engineering","SLA development","failure-mode analysis","statistical modeling","simulation","infrastructure capacity","power utilization","SCADA/BMS/EPMS","telemetry pipelines","control systems","accelerator deployments","power management interfaces","demand response","grid interaction","behind-the-meter generation","Python","automation","data center infrastructure optimizations","SCADA","BMS","EPMS","industrial control systems","Modbus","BACnet","SNMP"],"x-skills-preferred":["accelerator-class deployments","control theory","dynamical systems","cyber-physical systems design","energy storage","microgrid integration","reliability engineering methods","availability modeling","service credit frameworks","ML/optimization 
techniques"],"datePosted":"2026-04-18T15:58:06.281Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Remote-Friendly, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data center power distribution, cooling architectures, IT load profiles, reliability engineering, SLA development, failure-mode analysis, statistical modeling, simulation, infrastructure capacity, power utilization, SCADA/BMS/EPMS, telemetry pipelines, control systems, accelerator deployments, power management interfaces, demand response, grid interaction, behind-the-meter generation, Python, automation, data center infrastructure optimizations, SCADA, BMS, EPMS, industrial control systems, Modbus, BACnet, SNMP, accelerator-class deployments, control theory, dynamical systems, cyber-physical systems design, energy storage, microgrid integration, reliability engineering methods, availability modeling, service credit frameworks, ML/optimization techniques","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":320000,"maxValue":405000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_b2f6f807-fc6"},"title":"Software Engineer - Distributed Data Systems","description":"<p>At Databricks, we are building and running the world&#39;s best data and AI infrastructure platform so our customers can use deep data insights to improve their business.</p>\n<p>We are looking for a software engineer to join our team as a founding member of our Belgrade site. 
As a software engineer, you will be involved in the entire development cycle and exemplify all core Databricks values.</p>\n<p>The responsibilities you will have:</p>\n<ul>\n<li>Drive requirements clarity and design decisions for ambiguous problems</li>\n<li>Produce technical design documents and project plans</li>\n<li>Develop new features</li>\n<li>Mentor more junior engineers</li>\n<li>Test and rollout to production, monitoring</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>BS in Computer Science or equivalent practical experience in databases or distributed systems</li>\n<li>Comfortable working towards a multi-year vision with incremental deliverables</li>\n<li>Motivated by delivering customer value and impact</li>\n<li>3+ years of production level experience in either Java, Scala or C++</li>\n<li>Solid foundation in algorithms and data structures and their real-world use cases</li>\n<li>Experience with distributed systems, databases, and big data systems (Apache Spark, Hadoop)</li>\n</ul>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, please click here.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_b2f6f807-fc6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8012691002","x-work-arrangement":"onsite","x-experience-level":"mid","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Java","Scala","C++","Algorithms","Data Structures","Distributed Systems","Databases","Big Data Systems","Apache Spark","Hadoop"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:53.371Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Belgrade, Serbia"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Java, Scala, C++, Algorithms, Data Structures, Distributed Systems, Databases, Big Data Systems, Apache Spark, Hadoop"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_10290548-1ea"},"title":"Solutions Architect - Public Sector (LEAPS)","description":"<p>As a Solutions Architect - Public Sector at Databricks, you will be part of the Field Engineering team responsible for leading the growth of the Databricks Unified Analytics Platform. 
The role involves working with customers, teammates, the product team, and post-sales teams to identify use cases for Databricks, develop architectures and solutions using our platform, and guide customers through implementation to achieve value.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Partnering with the sales team to help customers understand how Databricks can help solve their business problems</li>\n<li>Providing technical leadership for customers to evaluate and adopt Databricks</li>\n<li>Consulting on big data architecture, implementing proof of concepts for strategic customer projects, data science and machine learning projects, and validating integrations with cloud services and other 3rd party applications</li>\n<li>Building and presenting reference architectures, how-tos, and demo applications for customers</li>\n<li>Becoming an expert in, and promoting, Databricks-inspired open-source projects (Spark, Delta Lake, MLflow, and Koalas) across developer communities through meetups, conferences, and webinars</li>\n<li>Traveling to customers in your region</li>\n</ul>\n<p>We look for candidates with 5+ years of experience in a customer-facing pre-sales, technical architecture, or consulting role, with expertise in designing and architecting distributed data systems. 
Experience with public cloud providers such as AWS, Azure, or GCP, data engineering technologies (e.g., Spark, Hadoop, Kafka), and data warehousing (e.g., SQL, OLTP/OLAP/DSS) is also required.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_10290548-1ea","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8320126002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,000-$247,500 USD","x-skills-required":["Apache Spark","MLflow","Delta Lake","Python","Scala","Java","SQL","R","AWS","Azure","GCP","Data Engineering","Data Warehousing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:53.145Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Maryland; Virginia; Washington, D.C."}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, MLflow, Delta Lake, Python, Scala, Java, SQL, R, AWS, Azure, GCP, Data Engineering, Data Warehousing","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180000,"maxValue":247500,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_7af76f0d-cb6"},"title":"Geo Hunter Account Executive","description":"<p>As a Geo Hunter Account Executive on Databricks&#39; LATAM team, you will be responsible for selling Databricks&#39; enterprise cloud data platform powered by Apache Spark to customers in Brazil. 
You will have the opportunity to close new accounts, increase consumption and create new workloads in existing accounts, and exceed activity, pipeline, and revenue targets.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Presenting a territory plan within the first 90 days</li>\n<li>Meeting with CIOs, IT executives, LOB executives, program managers, and other important partners</li>\n<li>Closing both new accounts and existing accounts</li>\n<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>\n<li>Exceeding activity, pipeline, and revenue targets</li>\n<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>\n</ul>\n<p>Requirements include:</p>\n<ul>\n<li>Native Portuguese speaker with strong English language skills</li>\n<li>Previous experience in field sales within big data, cloud, and SaaS sales</li>\n<li>Prior customer relationships with CIOs, program managers, and essential decision makers</li>\n<li>Ability to simply articulate intricate cloud technologies</li>\n<li>3+ years of relevant full-cycle sales experience exceeding quotas</li>\n<li>Understanding of Apache Spark and big data preferable</li>\n</ul>\n<p>Benefits include accelerators above 100% quota attainment and a commitment to diversity and inclusion.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_7af76f0d-cb6","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/7675324002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["native Portuguese speaker","field sales experience","big data","cloud","SaaS 
sales","Apache Spark","Salesforce"],"x-skills-preferred":["communication skills","problem-solving skills"],"datePosted":"2026-04-18T15:57:48.833Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Sao Paulo, Brazil"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"native Portuguese speaker, field sales experience, big data, cloud, SaaS sales, Apache Spark, Salesforce, communication skills, problem-solving skills"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_1ba129b2-e3a"},"title":"Solutions Architect (Hong-Kong)","description":"<p>We are seeking a Solutions Architect to join our Field Engineering team in Singapore. As a Solutions Architect, you will be responsible for demonstrating how our Data Intelligence Platform can help customers solve their complex data challenges. You will work with a collaborative, customer-focused team who values innovation and creativity, using your skills to create customized solutions to help our customers achieve their goals and guide their businesses forward.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Form successful relationships with clients in Hong Kong to provide technical and business value in collaboration with an Account Executive and a Senior Solutions Architect.</li>\n<li>Gain excitement from clients about Databricks through hands-on evaluation and Apache Spark programming, integrating with the wider cloud ecosystem and 3rd party applications.</li>\n<li>Contribute to building the Databricks technical community through engagement at workshops, seminars, and meet-ups.</li>\n<li>Become a Big Data Analytics advisor on aspects of architecture and design.</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications.</li>\n<li>Develop both technically and in the pre-sales aspect with the goal of becoming an independently operating 
Solutions Architect.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Familiarity working with clients, creating a narrative, answering customer questions, aligning the agenda with important interests, and achieving tangible outcomes.</li>\n<li>Ability to independently deliver a technical proposition, identify customers&#39; pain-points, and explain important areas for business value to develop a trusted advisor skillset.</li>\n<li>Code in a core programming language such as Python, Java, or Scala.</li>\n<li>Knowledgeable in a core Big Data Analytics domain with some exposure to advanced proofs-of-concept and an understanding of a major public cloud platform.</li>\n<li>Experience diving deeper into solution architecture and design.</li>\n<li>Proficiency in Cantonese is required as this role serves clients based in Hong Kong and involves direct customer communications in Cantonese</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_1ba129b2-e3a","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437010002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Apache Spark","Python","Java","Scala","Big Data Analytics","Cloud Computing"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:32.290Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Singapore"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Apache Spark, Python, Java, Scala, Big Data Analytics, Cloud 
Computing"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_fdc6f0f9-900"},"title":"Resident Solutions Architect - Communications, Media, Entertainment & Games","description":"<p>As a Resident Solutions Architect in our Professional Services team, you will work with clients on short to medium term customer engagements on their big data challenges using the Databricks platform.</p>\n<p>You will provide data engineering, data science, and cloud technology projects which require integrating with client systems, training, and other technical tasks to help customers to get most value out of their data.</p>\n<p>RSAs are billable and know how to complete projects according to specification with excellent customer service.</p>\n<p>You will report to the regional Manager/Lead.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>Work on a variety of impactful customer technical projects which may include designing and building reference architectures, creating how-to&#39;s and productionalizing customer use cases</li>\n<li>Work with engagement managers to scope variety of professional services work with input from the customer</li>\n<li>Guide strategic customers as they implement transformational big data projects, 3rd party migrations, including end-to-end design, build and deployment of industry-leading big data and AI applications</li>\n<li>Consult on architecture and design; bootstrap or implement customer projects which leads to a customers&#39; successful understanding, evaluation and adoption of Databricks.</li>\n<li>Provide an escalated level of support for customer operational issues.</li>\n<li>Work with the Databricks technical team, Project Manager, Architect and Customer team to ensure the technical components of the engagement are delivered to meet customer&#39;s needs.</li>\n<li>Work with Engineering and Databricks Customer Support to provide product and implementation feedback and to guide 
rapid resolution for engagement specific product and support issues.</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>6+ years experience in data engineering, data platforms &amp; analytics</li>\n<li>Comfortable writing code in either Python or Scala</li>\n<li>Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one</li>\n<li>Deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals</li>\n<li>Familiarity with CI/CD for production deployments</li>\n<li>Working knowledge of MLOps</li>\n<li>Design and deployment of performant end-to-end data architectures</li>\n<li>Experience with technical project delivery - managing scope and timelines.</li>\n<li>Documentation and white-boarding skills.</li>\n<li>Experience working with clients and managing conflicts.</li>\n<li>Build skills in technical areas which support the deployment and integration of Databricks-based solutions to complete customer projects.</li>\n<li>Travel to customers 20% of the time</li>\n</ul>\n<p>Databricks Certification</p>\n<p>Pay Range Transparency</p>\n<p>Databricks is committed to fair and equitable compensation practices. 
The pay range(s) for this role is listed below and represents the expected base salary range for non-commissionable roles or on-target earnings for commissionable roles.</p>\n<p>Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location.</p>\n<p>Based on the factors above, Databricks anticipates utilizing the full width of the range.</p>\n<p>The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.</p>\n<p>For more information regarding which range your location is in, visit our page here.</p>\n<p>Zone 1 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 2 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 3 Pay Range $180,656-$248,360 USD</p>\n<p>Zone 4 Pay Range $180,656-$248,360 USD</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_fdc6f0f9-900","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8461168002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":"$180,656-$248,360 USD","x-skills-required":["data engineering","data science","cloud technology","Apache Spark","distributed computing","CI/CD","MLOps","performant end-to-end data architectures","technical project delivery","documentation and white-boarding skills","client management"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:29.214Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Los Angeles, 
California"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"data engineering, data science, cloud technology, Apache Spark, distributed computing, CI/CD, MLOps, performant end-to-end data architectures, technical project delivery, documentation and white-boarding skills, client management","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":180656,"maxValue":248360,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_cb18189c-d78"},"title":"Solutions Architect (Pre-sales) - Kansai Region","description":"<p>As a Pre-sales Solutions Architect (Analytics, AI, Big Data, Public Cloud) – Kansai Region, your mission will be to drive successful technical evaluations and solution designs for some of our focus customers in the Kansai region (Osaka/Kyoto) for Databricks Japan.</p>\n<p>You are passionate about data and AI, love getting hands-on with technology, and enjoy communicating its value to both technical and non-technical stakeholders. 
Partnering closely with Account Executives, you will lead the technical discovery, architecture design, and proof-of-concept phases, and act as a trusted advisor to our customers on their data and AI strategy.</p>\n<p>You will help customers realize tangible, data-driven outcomes on the Databricks Lakehouse Platform by guiding data and AI teams to design, build, and operationalize solutions within their enterprise ecosystem.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your prospects through evaluating and adopting Databricks</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars, and meet-ups</li>\n</ul>\n<p>What we look for:</p>\n<ul>\n<li>Pre-sales or post-sales experience working with external clients across a variety of industry markets</li>\n<li>Experience in a customer-facing pre-sales or consulting role; a core strength in either Data Engineering or Data Science is advantageous</li>\n<li>Experience demonstrating technical concepts, including presenting and whiteboarding</li>\n<li>Experience designing and implementing architectures within public clouds (AWS, Azure, or GCP)</li>\n<li>Experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Fluent coding experience in Python or Scala implementing Apache Spark; Java and R are also desirable</li>\n<li>Experience working with Enterprise Accounts</li>\n<li>Written and verbal fluency in Japanese</li>\n</ul>\n<p>Benefits:</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. 
For specific details on the benefits offered in your region, click here.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_cb18189c-d78","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8437028002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data Analytics","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","Scala","Java","R","Public Cloud","AWS","Azure","GCP"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:24.678Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Japan"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data Analytics, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, Scala, Java, R, Public Cloud, AWS, Azure, GCP"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_dd67fe82-1c8"},"title":"Solutions Architect : Data & AI","description":"<p>As a Solutions Architect (Analytics, AI, Big Data, Public Cloud), you will guide the technical evaluation phase in a hands-on environment throughout the sales process. 
You will be a technical advisor internally to the sales team, and work with the product team as an advocate of your customers in the field.</p>\n<p>You will help our customers to achieve tangible data-driven outcomes through the use of our Databricks Lakehouse Platform, helping data teams complete projects and integrate our platform into their enterprise Ecosystem.</p>\n<p>The impact you will have:</p>\n<ul>\n<li>You will be a Big Data Analytics expert on aspects of architecture and design</li>\n<li>Lead your clients through evaluating and adopting Databricks including hands-on Apache Spark programming and integration with the wider cloud ecosystem</li>\n<li>Support your customers by authoring reference architectures, how-tos, and demo applications</li>\n<li>Integrate Databricks with 3rd-party applications to support customer architectures</li>\n<li>Engage with the technical community by leading workshops, seminars and meet-ups</li>\n</ul>\n<p>Together with your Account Executive, you will form successful relationships with clients throughout your assigned territory to provide technical and business value</p>\n<p>What we look for:</p>\n<ul>\n<li>Strong consulting / customer facing experience, working with external clients across a variety of industry markets</li>\n<li>Core strength in either data engineering or data science technologies</li>\n<li>8+ years of experience demonstrating technical concepts, including demos, presenting and white-boarding</li>\n<li>8+ years of experience designing architectures within a public cloud (AWS, Azure or GCP)</li>\n<li>6+ years of experience with Big Data technologies, including Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, and others</li>\n<li>Coding experience in Python, R, Java, Apache Spark or Scala</li>\n</ul>\n<p>About Databricks</p>\n<p>Databricks is the data and AI company. 
More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI.</p>\n<p>Benefits</p>\n<p>At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, click here.</p>\n<p>Our Commitment to Diversity and Inclusion</p>\n<p>At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.</p>\n<p>Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.</p>\n<p>Compliance</p>\n<p>If access to export-controlled technology or source code is required for performance of job duties, it is within Employer&#39;s discretion whether to apply for a U.S. 
government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_dd67fe82-1c8","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8346277002","x-work-arrangement":"onsite","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Big Data technologies","Apache Spark","AI","Data Science","Data Engineering","Hadoop","Cassandra","Python","R","Java","Scala"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:18.281Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Pune, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Big Data technologies, Apache Spark, AI, Data Science, Data Engineering, Hadoop, Cassandra, Python, R, Java, Scala"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_ca38c08d-e8f"},"title":"Staff Data Scientist - Marketing Analytics","description":"<p>Join us to build the measurement and decision engine for patient growth.</p>\n<p>As a Staff Data Scientist, Marketing Analytics, you will be the senior analytical and strategic leader who makes marketing performance legible, credible, and actionable. 
You will partner closely with Growth Marketing leadership and channel owners across paid, lifecycle, and organic, plus Finance, Product, and Engineering.</p>\n<p>Your job is to help Headway answer the questions that matter:</p>\n<ul>\n<li>What is truly incremental?</li>\n</ul>\n<ul>\n<li>Where should we invest next?</li>\n</ul>\n<ul>\n<li>What is driving performance shifts?</li>\n</ul>\n<ul>\n<li>How do we scale what works without fooling ourselves?</li>\n</ul>\n<p>You will build the frameworks, analyses, and modeling approaches that enable the marketing team to move faster with confidence. This is high-stakes decision support for a growth engine that needs to compound and certainly not a dashboard-only role or “just attribution” role.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Own incrementality measurement across channels. Design and analyze geo tests, holdouts, lift tests, and quasi-experimental approaches when randomized tests are not feasible. Define clear guardrails, decision rules, and what “good” looks like.</li>\n</ul>\n<ul>\n<li>Build a marketing measurement system that leaders trust. Define canonical metrics (CAC, LTV, payback, conversion, retention, capacity-adjusted ROI), ensure definitions are consistent, and create a clear measurement narrative that aligns Marketing, Finance, and Product.</li>\n</ul>\n<ul>\n<li>Turn ambiguity into a plan. When performance changes, you will diagnose why, quantify contributing drivers, and recommend concrete actions. You will be the person who can say, “Here’s what moved, here’s why we believe it moved, and here’s what we do next.”</li>\n</ul>\n<ul>\n<li>Develop and evolve modeling approaches where they create leverage. Build practical models such as LTV and retention forecasting, cohort value prediction, causal uplift models for lifecycle, and marketing mix modeling when appropriate. 
Focus on models that survive contact with reality: calibration, backtesting, and decision usefulness.</li>\n</ul>\n<ul>\n<li>Partner with Engineering on the measurement plumbing. Improve event instrumentation, identity resolution assumptions, offline conversion integration, and data quality monitoring so measurement is robust. Advocate for minimal, decision-critical requirements that unlock reliable learning.</li>\n</ul>\n<ul>\n<li>Design learning loops that scale. Create repeatable experimentation and analysis templates for channel and creative testing, including measurement of message by audience by surface. Increase testing velocity without lowering the truth standard.</li>\n</ul>\n<ul>\n<li>Influence strategy, not just reporting. Bring an evidence-based point of view on channel allocation, growth constraints, saturation, diminishing returns, and the tradeoffs between short-term acquisition and long-term retention and care outcomes.</li>\n</ul>\n<ul>\n<li>Uplevel the team. Mentor analysts and data scientists working on growth, set quality standards, and help establish best practices across experimentation, causal inference, and forecasting.</li>\n</ul>\n<p>What will make you successful:</p>\n<ul>\n<li>10+ years using data science, analytics, and experimentation to drive decisions in marketing, growth, or marketplace environments (or equivalent scope and demonstrated impact).</li>\n</ul>\n<ul>\n<li>Deep expertise in causal inference and incrementality in real-world marketing systems: you know the failure modes (selection bias, channel cannibalization, platform noise, attribution myths) and how to design around them.</li>\n</ul>\n<ul>\n<li>Strong SQL plus strong proficiency in Python or R, with the ability to build reliable, reusable analytical workflows.</li>\n</ul>\n<ul>\n<li>Practical modeling skill, especially as applied to marketing and growth: cohorting, forecasting, LTV estimation, saturation and diminishing returns, MMM concepts, calibration and 
monitoring.</li>\n</ul>\n<ul>\n<li>Track record of influencing executive decisions with clear recommendations and measurable outcomes, not just analysis.</li>\n</ul>\n<ul>\n<li>Excellent communication: you can make complex measurement logic understandable and defensible to non-technical partners, and you can call out uncertainty without losing momentum.</li>\n</ul>\n<ul>\n<li>High ownership and strong judgment: you prioritize what changes decisions, you move quickly, and you know when to slow down because the risk is real.</li>\n</ul>\n<ul>\n<li>You are motivated by the mission. Access and affordability in mental healthcare are not abstract problems here.</li>\n</ul>\n<p>Nice to have:</p>\n<ul>\n<li>Experience with geo experiments, marketplace constraints, or capacity-aware marketing optimization.</li>\n</ul>\n<ul>\n<li>Experience measuring acquisition quality beyond conversion: downstream engagement, retention, clinical matching quality, and unit economics.</li>\n</ul>\n<ul>\n<li>Familiarity with lifecycle marketing measurement (incrementality, uplift, experimentation design for messaging).</li>\n</ul>\n<ul>\n<li>Experience partnering with Finance on budget allocation, payback, and scenario planning.</li>\n</ul>\n<ul>\n<li>Comfort working with imperfect identity, privacy constraints, and evolving attribution ecosystems.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_ca38c08d-e8f","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Headway","sameAs":"https://www.headway.com/","logo":"https://logos.yubhub.co/headway.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/headway/jobs/5751646004","x-work-arrangement":"remote","x-experience-level":"staff","x-job-type":"full-time","x-salary-range":"$212,000 - $265,000","x-skills-required":["data 
science","analytics","experimentation","marketing","growth","SQL","Python","R","causal inference","incrementality","modeling","forecasting","LTV estimation","saturation","diminishing returns","MMM concepts","calibration","monitoring"],"x-skills-preferred":["geo experiments","marketplace constraints","capacity-aware marketing optimization","acquisition quality","downstream engagement","retention","clinical matching quality","unit economics","lifecycle marketing measurement","uplift","experimentation design for messaging","budget allocation","payback","scenario planning"],"datePosted":"2026-04-18T15:57:07.522Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"New York, New York, United States; San Francisco, California, United States; Seattle, Washington, United States"}},"jobLocationType":"TELECOMMUTE","employmentType":"FULL_TIME","occupationalCategory":"Marketing","industry":"Healthcare","skills":"data science, analytics, experimentation, marketing, growth, SQL, Python, R, causal inference, incrementality, modeling, forecasting, LTV estimation, saturation, diminishing returns, MMM concepts, calibration, monitoring, geo experiments, marketplace constraints, capacity-aware marketing optimization, acquisition quality, downstream engagement, retention, clinical matching quality, unit economics, lifecycle marketing measurement, uplift, experimentation design for messaging, budget allocation, payback, scenario planning","baseSalary":{"@type":"MonetaryAmount","currency":"USD","value":{"@type":"QuantitativeValue","minValue":212000,"maxValue":265000,"unitText":"YEAR"}}},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_03224784-9c2"},"title":"Senior Data Engineering Manager","description":"<p>Job Title: Senior Data Engineering Manager</p>\n<p>Location: Dublin, Ireland</p>\n<p>Department: R&amp;D</p>\n<p>Job Description:</p>\n<p>Intercom is seeking a Senior Data 
Engineering Manager to lead the design and evolution of the core infrastructure that powers our entire data ecosystem. As a leader, you will partner with product and business teams to drive key data initiatives and ensure the success of our data engineering team.</p>\n<p>Responsibilities:</p>\n<ul>\n<li>Next-Gen Platform Evolution: Partner with product and business teams to design and implement the next generation of our data stack, ensuring it can meet the demands of advanced analytics and AI applications.</li>\n</ul>\n<ul>\n<li>Enablement Through Tooling: Partner closely with Analytics Engineers, Analysts, and Data Scientists to build self-service tooling and infrastructure that enables them to move fast and deploy safely.</li>\n</ul>\n<ul>\n<li>Data Quality Guardianship: Implement advanced monitoring systems to proactively detect, surface, and resolve data quality issues across our high-throughput environment.</li>\n</ul>\n<ul>\n<li>Driving Automation: Develop automation and tooling that streamlines the creation and discovery of high-quality analytics data, making the entire data lifecycle more efficient.</li>\n</ul>\n<p>Strategic Impact You&#39;ll Drive:</p>\n<ul>\n<li>GTM Data Platform Strategy: Build the data acquisition strategy that will enable us to build the next generation of business-focused internal software.</li>\n</ul>\n<ul>\n<li>Conversational BI Strategy: Lead the charge to shift away from complex, technical reporting toward natural language interaction to make data truly democratized and accessible.</li>\n</ul>\n<ul>\n<li>Platform &amp; Warehousing Strategy: Lead the architectural- and cost review and revamp of our core data infrastructure to ensure it can scale exponentially for future growth and advanced use cases.</li>\n</ul>\n<p>Recent Wins You&#39;ll Build Upon:</p>\n<ul>\n<li>AI-assisted Local Analytics Development Environment for Airflow and DBT.</li>\n</ul>\n<ul>\n<li>Data-rich AI apps containerized on Snowflake 
SPCS.</li>\n</ul>\n<ul>\n<li>A new, modern data catalog solution.</li>\n</ul>\n<ul>\n<li>Migrating critical MySQL ingestion pipelines from Aurora to PlanetScale.</li>\n</ul>\n<p>Who You Are:</p>\n<ul>\n<li>A leader, a builder, and a problem-solver who thrives on solving real-world business problems.</li>\n</ul>\n<ul>\n<li>7+ years of experience in the data space, leading teams of 6+ engineers.</li>\n</ul>\n<ul>\n<li>Stakeholder focus: ability to communicate complex technical solutions to a business-focused audience and vice versa.</li>\n</ul>\n<ul>\n<li>Technical depth: not afraid to get hands dirty and write code when needed.</li>\n</ul>\n<ul>\n<li>A leader and mentor: naturally recognizes opportunities to step back and mentor others.</li>\n</ul>\n<p>Bonus Points (Our Modern Stack Knowledge):</p>\n<ul>\n<li>Airflow at scale: extensive experience working with Apache Airflow, especially the nuances of operating it reliably in a high-volume environment.</li>\n</ul>\n<ul>\n<li>Modern data stack fluency: familiarity with tools like Snowflake and DBT.</li>\n</ul>\n<ul>\n<li>Future-focused: keeps a keen eye on industry trends and emerging technologies.</li>\n</ul>\n<p>Benefits:</p>\n<ul>\n<li>Competitive salary and equity in a fast-growing start-up.</li>\n</ul>\n<ul>\n<li>We serve lunch every weekday, plus a variety of snack foods and a fully stocked kitchen.</li>\n</ul>\n<ul>\n<li>Regular compensation reviews - we reward great work!</li>\n</ul>\n<ul>\n<li>Pension scheme &amp; match up to 4%.</li>\n</ul>\n<ul>\n<li>Peace of mind with life assurance, as well as comprehensive health and dental insurance for you and your dependents.</li>\n</ul>\n<ul>\n<li>Open vacation policy and flexible holidays so you can take time off when you need it.</li>\n</ul>\n<ul>\n<li>Paid maternity leave, as well as 6 weeks paternity leave for fathers, to let you spend valuable time with your loved ones.</li>\n</ul>\n<ul>\n<li>If you’re cycling, we’ve got you covered on the Cycle-to-Work Scheme. 
With secure bike storage too.</li>\n</ul>\n<ul>\n<li>MacBooks are our standard, but we also offer Windows for certain roles when needed.</li>\n</ul>\n<p>Policies:</p>\n<ul>\n<li>Intercom has a hybrid working policy. We believe that working in person helps us stay connected, collaborate easier and create a great culture while still providing flexibility to work from home.</li>\n</ul>\n<ul>\n<li>We have a radically open and accepting culture at Intercom. We avoid spending time on divisive subjects to foster a safe and cohesive work environment for everyone.</li>\n</ul>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_03224784-9c2","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Intercom","sameAs":"https://www.intercom.com/","logo":"https://logos.yubhub.co/intercom.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/intercom/jobs/7574762","x-work-arrangement":"hybrid","x-experience-level":"senior","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Airflow","Apache Airflow","DBT","Snowflake","Data Engineering","Data Science","Analytics","Data Management","Data Quality","Automation","Cloud Computing","Data Warehousing","Big Data","Machine Learning","Artificial Intelligence"],"x-skills-preferred":[],"datePosted":"2026-04-18T15:57:06.635Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Dublin, Ireland"}},"employmentType":"FULL_TIME","occupationalCategory":"Engineering","industry":"Technology","skills":"Airflow, Apache Airflow, DBT, Snowflake, Data Engineering, Data Science, Analytics, Data Management, Data Quality, Automation, Cloud Computing, Data Warehousing, Big Data, Machine Learning, Artificial Intelligence"},{"@context":"https://schema.org","@type":"JobPosting","identifier":{"@type":"PropertyValue","name":"YubHub","value":"job_0fb2e339-447"},"title":"Enterprise 
Hunter Account Executive (FSI - North)","description":"<p>As an Enterprise Account Executive at Databricks, you will be responsible for selling the company&#39;s enterprise cloud data platform powered by Apache Spark to financial services institutions in India. Your goal will be to close new accounts while maintaining existing ones, and to exceed activity, pipeline, and revenue targets.</p>\n<p>Key responsibilities include:</p>\n<ul>\n<li>Presenting a territory plan within the first 90 days</li>\n<li>Meeting with CIOs, IT executives, LOB executives, program managers, and other important partners</li>\n<li>Closing both new accounts and existing accounts</li>\n<li>Identifying and closing quick, small wins while managing longer, complex sales cycles</li>\n<li>Exceeding activity, pipeline, and revenue targets</li>\n<li>Tracking all customer details including use case, purchase time frames, next steps, and forecasting in Salesforce</li>\n</ul>\n<p>To succeed in this role, you will need to have 7+ years of experience in enterprise sales, with a proven track record of exceeding quotas and closing new accounts. You should also have a strong understanding of cloud technologies and be able to articulate intricate concepts simply.</p>\n<p>In addition to your technical skills, you will need to be a strong communicator and be able to build relationships with key decision-makers. 
You should also be comfortable working in a fast-paced environment and be able to adapt to changing priorities.</p>\n<p>If you are a motivated and results-driven sales professional who is looking for a new challenge, we encourage you to apply for this role.</p>\n<p style=\"margin-top:24px;font-size:13px;color:#666;\">XML job scraping automation by <a href=\"https://yubhub.co\">YubHub</a></p>","url":"https://yubhub.co/jobs/job_0fb2e339-447","directApply":true,"hiringOrganization":{"@type":"Organization","name":"Databricks","sameAs":"https://databricks.com/","logo":"https://logos.yubhub.co/databricks.com.png"},"x-apply-url":"https://job-boards.greenhouse.io/databricks/jobs/8438952002","x-work-arrangement":"onsite","x-experience-level":"executive","x-job-type":"full-time","x-salary-range":null,"x-skills-required":["Enterprise sales","Cloud technologies","Apache Spark","Salesforce","Customer relationship building"],"x-skills-preferred":["Big data","Data analytics","Artificial intelligence"],"datePosted":"2026-04-18T15:56:57.783Z","jobLocation":{"@type":"Place","address":{"@type":"PostalAddress","addressLocality":"Delhi, India"}},"employmentType":"FULL_TIME","occupationalCategory":"Sales","industry":"Technology","skills":"Enterprise sales, Cloud technologies, Apache Spark, Salesforce, Customer relationship building, Big data, Data analytics, Artificial intelligence"}]}